I remember when I was a young assembly programmer and I looked down on those obtuse C programmers who needed such a slow high-level language. After high-school, having read all those fantastic W. Richard Stevens books I simply could not understand why anybody on earth wanted to use C++ when C was easily capable of modeling complex object structures. When they forced us to use Java, I was deeply in shock: was this really a programming language or more a toy for little kids and why would they want to slow down computers to a crawl?
I started my journey with trying to learn C++ in the late 1990s when it was the “cool” thing. Most of the books and the sparse educational material on the web back then made little sense to me. I wanted to instruct a computer to do cool things like the games I played or the programs I used, but all I got was obtuse passages about taxonomies of animals and shapes (CAnimal, CDog and CShape, CSquare). Stuff like virtual functions and constructor types was deeply uninteresting and irrelevant for me at that point.
Then came the “aha” moment. I downloaded a few tutorials which used the GLUT library from C and I was instantly hooked. Now I could draw 3D teapots and text and rectangles with almost no boilerplate and I had examples to learn from. A turning point that has influenced all aspects of my life for the last 20 years.
I had the same experience trying to learn C++ as a kid. The book I got had all sorts of OO examples, and all I could think was “why would I want to do this?” My progress learning to program was set back by years.
I feel the same. Coming of age in the late 1990s, OOP was all the rage, and while I found a lot of it interesting and useful, it created a lot of its own problems and the solution was often just "more OOP" in the form of design patterns and such. Meanwhile I always preferred composition over inheritance, and templates made a lot more sense to me than crazy object hierarchies ever did.
I almost actually considered leaving programming in the mid 2000s but then I started seeing the tide slowly turn away from OOP, or at least I found voices that agreed that OOP is not the only answer and I felt a sense of relief. Then as latency became a more and more important thing in algo trading, it eventually fell out of favor almost entirely.
But yeah, OOP has a place, but it really went overboard for a while and there were jiggawatts of brainpower spent on trying to make it work when straight C or other approaches would have worked much better, and with much more grokkable code!
Mine was more extreme... I basically stopped learning to program after hello world and simple math because there was all of this OO stuff and it was like "ok I can follow these examples but have no idea what any of this would possibly be for" and that was that for years.
My first solo experience was a VB.NET book with Car, SuperCar, Truck and all this stuff. Even at the time (I was very young) those laughably shallow examples stuck out as contrived and useless
Interesting, I've been learning and doing almost everything with a functional or procedural style, but I've felt kind of guilty I don't understand OOP patterns enough, or don't know when I would want to use them.
So I've been learning Smalltalk/Pharo and reading Smalltalk, Objects, and Design [1] because people say that's what OOP was really supposed to be. It's been interesting and enlightening in some ways, but I still feel like I'd rather do most things without OOP. Do you think it's still helpful or worth it to dig into all this for someone now?
It's not very long. Why not just read it and work through it? It isn't written like a recipe book, but even if it was, you should sit down and read your recipe books
I got a similar book on C++ and tried to return it for a refund as soon as I found there was no exciting sample code, just OO stuff. I was lucky that the shop didn't take the book back, so I had to read it. I think it upgraded me.
A couple of years out of college and into my first engineering job, I went to work on a job to write code for the 68000-based controller of a signal processing system. It had been decided that we were to use this new language called C, which no one on the team was experienced with. I was sent to Houston to attend a one week class on C with the expectation that I could mentor the rest of the team. They were having to learn it from K&R. We used a new compiler from Green Hills, which was quite buggy, to compile our fragile code to run on a CPU our hardware colleagues had built that had its own bugs. We had a couple of big ass in-circuit emulators on which to do our debugging. They had quirks of their own, as well. Somehow, we were able to get it working quite well. As big of a challenge as it was, we had an absolute glorious time!
What can I tell you,
We develop hardware and software using IFX microcontrollers, (HW still has bugs) compiling our code using GreenHills / GCC / HighTech compilers (they all still have bugs), debugging and testing equipment has its own quirks and bugs.
Oh, and our software for sure is rife with detected and undetected bugs as well.
(Which is not to say anything good about Java. The key to Java was elucidated by Mark Dominus: "I enjoyed programming in Java, and being relieved of the responsibility for producing a quality product.")
Java's secret sauce was that it liberated a whole generation of Microsoft sharecroppers. Microsoft has since clawed some back via its C# marketing, but cannot dictate a new "framework" every second year as it once did.
Java's actual secret sauce was that it was mostly like C/C++, but with no unsafe memory accesses. It was essentially the Rust of the late 1990s, only a whole lot slower and clunkier than real Rust. There was a lot of enterprise software being rewritten from C++ to Java as a result.
Nobody at the time gave a hoot about "unsafe memory accesses". That is a wholly recent conceit. Nobody then cared about security. Sendmail was how e-mail was delivered, and was, believe it, widely admired.
Java was just enough like the C++ of 1990 to compete, but with garbage collection and a big library, and without confusing pointers, so lower-skilled programmers could use it. That is all. Computers were literally thousands of times slower than today. Java was considered just barely fast enough.
But without freeing programmers from the Microsoft frameworks treadmill, it would have sunk without a trace.
> Nobody at the time gave a hoot about "unsafe memory accesses".
No, that was much of the point. Memory leaks, pointer overruns, use after free bugs were a huge time sink. In addition to portability, industry wanted a language without these problems.
So here's an interesting story of how I became a C programmer. Around 1990-1991, I was deeply into TinyMUD and similar games. An acquaintance had forked TinyMUCK and developed it into a programmable system using a derivative of Forth. Here's the catch: up until this point, development had been done on 4.2BSD VAX, and dereferencing NULL pointers was no big deal there.
I was tasked with porting to SunOS 4, where if you deref NULL you crash with SEGV. Therefore I had to get real good at GDB real fast and chase every instance of NULL usage, where I'd put in an initial check first. I did my job alright and before long, our server code was running smoothly cross-platform, and I'm fairly sure that my patches had far-reaching effects beyond the SunOS platform I'd developed on.
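For anyone who never did this kind of porting: here is a minimal sketch of the sort of guard being described, in plain C. All the names are made up; this is not the actual TinyMUCK code.

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical property lookup on a MUCK-style server object: on the
       4.2BSD VAX a NULL dereference happened to "work", on SunOS 4 it
       SEGVs, so every use gets a guard up front. */
    struct prop { const char *name; const char *value; };

    const char *prop_value(const struct prop *p)
    {
        if (p == NULL)      /* the "initial check" described above */
            return "";      /* fall back to something harmless */
        return p->value;
    }

    int main(void)
    {
        struct prop color = { "color", "blue" };
        printf("%s\n", prop_value(&color));    /* blue */
        printf("[%s]\n", prop_value(NULL));    /* [] instead of a SIGSEGV */
        return 0;
    }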
My colleagues and I were at the forefront of a wave of new Internet users in the early-to-mid 90s, and the embedded programming language environment of a simple text "adventure game" was conducive to many people learning how to program in a simple and forgiving environment. Those of us who hacked clients and servers were a somewhat elite vanguard; I decided to specialize in systems administration and had an interesting career soon afterwards.
And it all started because some VAX programmer had thought it was no big deal to dereference NULL pointers in a pervasive way throughout the ANSI C codebase of a game server.
A classic old-school problem would be a bug that overwrites a pointer with garbage. Then whatever that points to, like OS code or data, gets trashed next. Your computer starts acting 'weird'. Worse, changing random unrelated things would make the problem go away.
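To make that concrete, a contrived C sketch of the failure mode (the overflow is the bug, and the behavior is undefined, which is exactly the point):

    #include <stdio.h>
    #include <string.h>

    /* An overflow of `name` silently overwrites the neighboring pointer;
       the damage only shows up later, somewhere apparently unrelated. */
    struct record {
        char  name[8];
        char *notes;          /* gets trashed if name overflows */
    };

    int main(void)
    {
        static char shared_notes[] = "ok";
        struct record r = { "", shared_notes };

        strcpy(r.name, "this string is far too long");   /* the bug */

        printf("%s\n", r.notes);   /* garbage, a crash, or nothing... later */
        return 0;
    }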
According to James Gosling (the creator of Java), in an interview from a year or two ago with Lex Fridman, he said he wanted a C++-like language but without its pitfalls: pointer bugs and bad concurrency/synchronization primitives. This was to bring enterprise programmers (who mainly used C++ at that point) to the JVM ecosystem. He expected an explosion of JVM languages after that which, sadly, didn't happen. I think there were more pitfalls which I don't remember now.
Maybe he is saying bull now to look more cool, but he sounded rather convincing in the interview. I'd recommend listening to it.
That may be why it was designed, but GP was talking about why it was adopted. Those are such totally different things that it's not worth comparing them. Lisp?
Gosling's chief skill always was to sound convincing. (Lord knows language design wasn't it.) The contempt he had for Java coders is hard to miss in writings from the time.
For me it was no more malloc and free for every stupid little thing inside every function I had to write (string processing, arrays, etc.) But I was already using Perl for web development (no malloc no free) and I wasn't overly interested in Java. I made some money out of it later on, probably none past 2012. Then always languages with garbage collection, possibly interpreted. I don't like to have to compile and build to deploy.
In essence all you have for objects in Java is pointers.
For the primitive types, like long, you can't get a null pointer exception, because there really is no pointer. But for any object type it's actually a pointer: the object itself lives on the heap and you're given a pointer to it, and if the object is null, that's a null pointer. They don't feel much like pointers from a language like C because you're not provided with pointer arithmetic - you can't try to add my_object + 16 as you could in C - and because Java was a modern language which knows what you mean when you write foo.bar, unlike C and C++ which expect you to remember whether foo is a pointer and write foo->bar so that the poor compiler needn't figure it out.
For modern Java the compiler does escape analysis and may conclude an object cannot "escape" in which case it may be created as part of the stack frame of the code which uses it instead of on the heap, but it's still basically a pointer.
This is all rather awkward, for example Java's 64-bit double precision floating point number is a primitive, always 8 bytes on your stack no need for a pointer to anything - but if you want your custom four 16-bit integers type (maybe representing RGBA) that's an Object so it is treated differently even though it's also just 8 bytes. C gets this part right, your custom types (struct, and to a lesser extent enum and union) aren't treated so much worse than the language built-in types.
Anyway, it's Memory Safe because the null pointer exception is essentially the same behaviour as if you try to unwrap() a None in Rust, the JVM isn't going to let you just "press on" as you might in C, you've got a programming error and must either recover from that or your program aborts.
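For readers who only know the Java side, a minimal C sketch of the contrast drawn above - the -> spelling and the pointer arithmetic the parent mentions (names made up):

    #include <stdio.h>

    struct point { int x, y; };

    int main(void)
    {
        struct point pts[4] = { {0,0}, {1,1}, {2,2}, {3,3} };
        struct point *p = pts;        /* the array decays to a pointer */

        printf("%d\n", p->x);         /* C makes you write -> for pointers... */
        printf("%d\n", (p + 2)->y);   /* ...and lets you do arithmetic on them */
        return 0;
    }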
longtime C/C++ guy who became a longtime Java guy:
the most time I've ever spent dealing with invalid addresses and memory leaks, in production /enterprise code, has been in Java, not C or C++
my personal theory is because despite how much safer Java was by design, culturally, beginning in the early 2000s, it also opened the floodgates to a big wave of lower caliber programmers "just doing it for steady jobs" and so a "99% right? ship it! someone will file a ticket next week if needed" mentality was more common
not the fault or credit of the langs, just the type of people they attracted, at large scale
Java: "I'm super friendly! Just click here!"
C: "Here's a razor blade. Here's a razor blade. Another. Another. Now assemble to build a maze. Also the maze is invisible. Oh and our manual is 50 pages."
Most other programmers probably find your code incomprehensible, if it was Perl :^)
But seriously, is there any modern language other than Perl where regular expressions feel so… natural? I reach out for Perl less and less, but always sigh when I need to handle any regex in Python.
The best thing Java did (along with Apache) was make people take open-source seriously. There was a time circa the early 2000's when you either did Microsoft closed-source, or you did Java, with not much else out there.
Well, to be fair, C compilers produced significantly slower code back then. And when C++ came around, code that made heavy use of templates was dog slow until compiler technology caught up and could reliably inline templates. And Java was slow before it got generational GC and JIT.
That makes me wonder if Ruby is the language I'll be stuck with for another 20 years, and whether I should just be content with a balance between server performance and my productivity...
Another 20 years from now, I highly doubt you'll have to care as much about server performance as you do now just because you use a language that keeps you productive.
Remember, hardware is getting more powerful and cheaper.
The "letters to the editor" are really fun to read -- it's like reading the HN comments of the time, with the same combination of insight and cynicism.
> In response to Gregg Williams' editorial ("The New Generation of Human-Engineered Software," April, p. 6), the mouse of Lisa, Visi On, and their predecessor, the Xerox Star, is a truly fascinating hardware device, and on those few occasions that I have seen these devices in use, I have been impressed. But the mouse is not revolutionary, and, as its name suggests, it is really nothing more than a rodent. Its functional predecessor was the light pen. Some years ago, light pens were fashionable devices for selecting a particular function, and they are still in use. But displays attaching light pens had to have an appropriate phosphor, and they were not as easy to program as function keys. About the same time, touch-sensitive screens were introduced, and they are still used in applications such as online catalogs in libraries; here, too, however, programming appears to be the chief stumbling block.
> If the name of the game is "ease of use," the industry would be far wiser to develop touch-sensitive displays than mice. Because a display has no moving parts, it is likely to prove more durable than a mouse. And a finger placed on a display screen does not require additional desk space, as a mouse does. If an executive were having an office conference, don't you think he might rather touch his screen a couple of times than roll a mouse around his desk pressing buttons on it?
> There are, obviously, many considerations at work in the development of new products. My bet, simply stated, is that the mouse is not a viable product. At best, it will limp along like bubble memory. - John P. Rash, President, Acorn Data Ltd.
There's also:
- Several refutations to a previous letter which claimed CRT monitors are a radiation hazard
- An engineer from Intel responding to criticisms of their FORTRAN compiler with "those issues have been fixed in the latest version"
- Debate over structured programming and strong typing
- Debate on what "computer literacy" means and whether the general population needs it (including a claim that WYSIWYG editors and desktop file managers do not make computers easier to use because "a desktop manager is only a sophisticated analog for being able to copy one file into another")
- Conversations on software prices and piracy
- And, of course, someone pointing out a typo in an article.
The more things change, the more they stay the same. I'm not even close to old but as I get more experienced I continue to see "what's old is new again" ring true.
All things are full of weariness; a man cannot utter it; the eye is not satisfied with seeing, nor the ear filled with hearing. What has been is what will be, and what has been done is what will be done; and there is nothing new under the sun. Is there a thing of which it is said, "See, this is new"? It has been already, in the ages before us. There is no remembrance of former things, nor will there be any remembrance of later things yet to happen among those who come after.
Indeed. It's like we are always reinventing the wheel in a very cyclic manner. We go round and round only to come back to the same abstractions.
I had to Google the quote, but it goes like "those who don't learn from the past are bound to repeat it". I am not sure if there's a way around it or whether some level of repetition is obligatory for things to move forward.
Maybe it's like the saying that you can't really give out advice to someone, because your advice also comes with all your previous baggage. But on the other hand, that's how we learn and transmit our knowledge across generations right?
I think there's some sort of truth in all those "feelings", but we also can't deny there are incremental changes that persist over time... Hard to say.
The things that made C successful are rarely recognized even by its fans, but much moreso by its critics and competitors. Things it is most criticized for are among them. They have made C hard to unseat. Too-easy conversion between arrays and pointers is one such criticism, but such conversion was essential to its success.
C++ succeeded not just because it can call C libraries -- most languages can -- but because it retained every advantage C has, even those that few recognize. C++'s STL is a success because it built on what made C a success, deliberately. Alex Stepanov, anyway, understood. Most STL components are really just examples.
Today we are locked in battle against C's weaknesses, particularly in how easy it is to exploit C programs. We lose that battle when new languages leave behind what made C successful. Too-easy accidental conversions bad; deliberate conversions possible, good.
C got various other things subtly right, too: manifestly enough to make up for its blatant failings. If you would displace C, it is much more important to retain its strengths than to fix its flaws. You can do that without understanding by copying. Retaining strengths while leaving behind flaws requires understanding, which has proven too hard for most.
> C got various other things subtly right, too: manifestly enough to make up for its blatant failings. If you would displace C, it is much more important to retain its strengths than to fix its flaws.
I agree. I wonder how feasible it is to separate the two, though. Not because it seems as if there is some unavoidable trade-off to be made (if that was it then somebody would have found a relatively crisp definition of said trade-off). I suspect that it's best understood as an emergent phenomenon.
Consider the LINUX KERNEL MEMORY BARRIERS readme [1], which states very clearly: "Nevertheless, even this memory model should be viewed as the collective opinion of its maintainers rather than as an infallible oracle". And yet some people persist with the belief that such an Oracle must really be possible. Oracles are abstract concepts.
C doesn't persist despite its contradictions. It persists because of them.
I'm not claiming that this is good or bad. Just that it's the simplest explanation that I can think of.
Stepanov used a lot of APL, Lisp and Smalltalk inspiration to create the STL, originally implemented in Ada 83 generics, and then thanks to Bjarne ported it to C++.
There is a quite interesting talk he gave at Adobe where he mentions his inspirations.
You misrepresent history: Stepanov implemented STL first in Lisp, then in Ada. Both were manifestly inadequate. Stepanov openly despised "OO gook", and lamented that "begin", "end", and operators ++ and * had to be class members until partial specialization finally made that unnecessary. (Now we have std::begin and end.) Partial specialization was added to C++ specifically to make STL more practical.
C++ proved adequate. But C++ compilers of the time were not; all of them needed massive improvement to usefully build programs that used STL. (It took many years for Microsoft to get there; STL implementations obliged to work under Microsoft compilers were badly crippled until after C++11 came out.) But no other language was up to the job.
Bjarne did not motivate porting it to C++; the language did. Bjarne helped, but Andrew Koenig might have helped more.
> Too-easy conversion between arrays and pointers is one such criticism, but such conversion was essential to its success.
Could you elaborate on this point? Walter Bright famously wrote [1] that C's biggest mistake is that it implicitly degrades arrays to pointers when you pass them as arguments to functions. Do you have a rebuttal to his piece? I am not a C expert so I honestly don't know what could be wrong with his proposal.
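The decay Bright criticizes is easy to see in a few lines: once the array is passed, the callee only has a pointer, so the length has to travel separately.

    #include <stdio.h>

    /* Inside f, `a` is really just an int *, so sizeof no longer tells you
       how big the caller's array was. */
    void f(int a[])
    {
        printf("in f:    sizeof a  = %zu (just a pointer)\n", sizeof a);
    }

    int main(void)
    {
        int xs[10];
        printf("in main: sizeof xs = %zu (the whole array)\n", sizeof xs);
        f(xs);    /* xs decays to &xs[0] at the call site */
        return 0;
    }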
I remember learning C on my dad's computer running CP/M. My dad spent $4,000 on his computer ($16,800 today; ThinkerToys/CompuPro S-100 bus system) and it was his pride and joy. We spent hours working on it together. Unlike most computers at the time, it had two 8" disk drives, which was awesome, because the C compiler and all the libraries took up most of an 800k disk which meant you needed another disk for your editor and storing your source. At the time, most of the coding that we did together was in assembly, and C seemed awfully bloated and slow. I never really got into it. It's funny thinking about it now because many people seem to look at C as a low level language, but at the time it sure didn't feel that way.
(note: the articles sedatk mentioned didn't seem to be in the table of contents, but on PDF page 113 there is a "special report" section where I've found some of them)
Literally the only thing DOS needed to do was start up a program. Differences between 1 and 2, or any of them, are marginal. That is why windos didn't need to be any good: all it needed was to move the mouse pointer and let you click on a program icon.
- Big difference in philosophy. CP/M is record-based, you work with big tables describing files. DOS 2.0 prefers streams, and it has much more lightweight, Unix-like file handles (see the sketch after this list).
- Another Unix feature adopted is device files. There was no line printer or console file in DOS 1.0; if you wanted to do something unique with I/O you had to write a program for it. That, along with pipes, makes 2.0 much nicer to work with even 40 years later.
- Hierarchical directories and hard drives. The two go together, and the advantages should be obvious.
- Memory management. Dos 2.0 is the first to have equivalents of malloc, free, and realloc in its syscalls. This makes having Terminate and Stay Resident (TSR) programs much more feasible. The PRINT command is an example, it can run in the background.
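As a rough sketch of the handle/stream model DOS 2.0 borrowed from Unix, here is the same idea written with the POSIX-style calls the DOS C compilers of the day also exposed (the file name is made up):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[512];
        ssize_t n;
        int in = open("REPORT.TXT", O_RDONLY);   /* a handle: just a small int */
        if (in < 0)
            return 1;
        while ((n = read(in, buf, sizeof buf)) > 0)
            write(1, buf, n);    /* handle 1 is standard output (CON on DOS) */
        close(in);
        return 0;
    }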
If you bypassed DOS's memory allocation, your program would stomp on TSRs, or if it was a TSR then other programs would stomp on it, so either way it would be super unreliable.
The only thing I remember ever bypassing DOS for filesystem access was things like Norton DiskEdit, unerase utilities, and defraggers. Do you have examples in mind?
(Of course lots of programs didn't use standard output.)
I agree that without hard drives you didn't really need subdirectories. It's hard to fit more than a couple of screenfuls of files on a single floppy.
The comments on here make me realize how fortunate I was to have a PC and an interest in learning C as a kid. Not to mention a supportive mother who bought me Borland's Turbo C++ compiler when I was ten or so.
Thanks, Mom. You opened the door to a lifelong passion and a decent livelihood to boot.
I don't think enough people realize the impact of cheap but powerful tools in the 1980s. BYTE ads are selling compilers for 5000 bucks and then Turbo Pascal and Zortech C showed up for 49.99. Incredible.
Amazing to get perspective on what made C fresh and exciting - I only have known the basic narrative "Unix used C so it got super popular", but this article really filled in the info.
Also I could imagine what it was like living through this time, schematics for modems, cheesy ads for EEPROM writers, what a wonderful magazine, I would have been waiting by the door for this to be delivered each month.
From 1988-1991 I lived in a hut in a small African mountain village. No running water or electricity, but I wrote programs on paper and waited with bated breath for every issue of Byte (several months delayed of course). I would read them from cover to cover a dozen times, perusing every article, every advertisement, every editorial. I remember a listing that created a fern using a short, recursive PostScript program.
About once a month I would journey down to the capital, where -- in exchange for my IT services -- they let me tinker to my heart's content and type in my programs (the only language available, and the only one I knew at the time was GW-BASIC).
I found the original "C Programming Language" by Kernighan and Ritchie ("K&R") at my university bookstore in, maybe, 1980. I read it cover to cover twice on a plane flight, memorizing it, and immediately recognized its superiority over Pascal as it existed then. Everyone who made a Pascal extended it, incompatibly with everybody else, to try to match C. Any of those would have sufficed, but none could win.
I kept a journal (paper, of course), but in spite of my good intentions I have never digitized it. It was another time, when a letter to Canada and back took a month each way (the final 15k on horseback). Now, I hear that there is a cell tower in the village.
The country, Lesotho, is unique in many ways. It is completely within South Africa, and 2/3 of the men used to go to the South African mines to work. I'm not sure about now, but they used to have one of the highest HIV infection rates in the world (35% or so).
There is also a ski resort (https://afriski.net/). The white colonizers didn't want the land because it was so mountainous -- the lowest point in the country is 1,400m above sea level -- and was not suitable for agriculture, so they let the Africans stay there.
Damn, now you have me dreaming about OCRing my journals!
I know someone who was a grad student at that time, and how excited he was to get a 2400 baud modem and his own VT-100 for work on the VAX.
That grad student was me, though today it seems almost impossible to remember what it was like back then. I do remember ruining a colleague's line-by-line source printout by sending a console message that read, "Andropov just died."
> Operating systems have to deal with some very unusual objects and events: interrupts; memory maps; apparent locations in memory that really represent devices, hardware traps and faults; and I/O controllers. It is unlikely that even a low-level model can adequately support all of these notions or new ones that come along in the future. So a key idea in C is that the language model be flexible; with escape hatches to allow the programmer to do the right thing, even if the language designer didn't think of it first.
This. This is the difference between C and Pascal. This is why C won and Pascal lost - because Pascal prohibited everything but what Wirth thought should be allowed, and Wirth had far too limited a vision of what people might need to do. Ritchie, in contrast, knew he wasn't smart enough to play that game, so he didn't try. As a result, in practice C was considerably more usable than Pascal. The closer you were to the metal, the greater C's advantage. And in those days, you were often pretty close to the metal...
Later, on page 60:
> Much of the C model relies on the programmer always being right, so the task of the language is to make it easy what is necessary... The converse model, which is the basis of Pascal and Ada, is that the programmer is often wrong, so the language should make it hard to say anything incorrect... Finally, the large amount of freedom provided in the language means that you can make truly spectacular errors, far exceeding the relatively trivial difficulties you encounter misusing, say, BASIC.
Also true. And it is true that the "Pascal model" of the programmer has quite a bit of truth to it. But programmers collectively chose freedom over restrictions, even restrictions that were intended to be for their own good.
I was programming in Z-80 assembler in that timeframe, and was pretty excited by the idea of C. I had previously thought to myself that it would be possible to create a language that simplified the tedious, repetitive tasks in assembler but that didn't add bloat or take away byte-level control like BASIC did.
I was an avid reader of Byte and 80-Micro back then.
For various reasons, I focused on other studies for a few years after that, and didn't immediately learn C until after I went to college. I'm sure if I had learned C in '83, I would have had an entirely different career trajectory.
I was fortunate enough to start learning C, in 1983, using one of the compilers reviewed in that issue. BDS C for CPM. Learning C definitely paid off. Here it is nearly 40 years later, and I'm still using C for embedded development.
I use C for embedded devices too, although I didn't end up learning C until '88 or '89. At that point, it was on IBM PCs and VAXen instead of my trusty TRS-80.
I think the amount of technology in them made those magazines so much more exciting than today's computer magazines, which are almost always about computers as black boxes, marketing nonsense, or pure software with no hardware applications.
Back then, I used to dream about buying an eprom programmer!
It is still around if you look: new single-board computers, dev boards, kits for 'retro' computers. It's still just as big as it ever was; the difference is the mainstream is WAY bigger. Apple and Microsoft sabotaged DIY, left it behind, and make money hiding it behind their convenience gardens. Check out Adafruit or Digi-Key or element14 or Keysight or any of the game jams and hackathons. Makerspaces. PICO-8. Circuit Cellar, Nuts & Volts magazines. It was a HOBBY back then, and the hobby still exists just like there are ham radio clubs, but the industry it spawned has largely overshadowed it. Life is what you make of it; put in the effort if you really miss it, and you'll find good company!
Once you start buying easy-to-program microcontroller gadgets from Crowd Supply and Adafruit Industries, it is hard to stop. Nowadays they all have bluetooth built in, so can be controlled wholly from your phone; no buttons, display, or even USB connector needed.
Page 50: "A compromise between assemblers and high-level languages, C helps programmers avoid the idiosyncrasies of particular machines".
Interesting. I thought C appeared as a (very) high level language back in 1983, when development on microcomputers was still mostly done in assembly. This article was published August 1983 and Turbo Pascal v1.0 was only released in November, so I'm not sure what high level languages were available on microcomputers back then, besides BASIC.
BASIC and (UCSD) Pascal were the mainstream choices on 8-bit micros... with BASIC being far and away the dominant language in the amateur and low-volume professional market mainly due to ease of use and that it came bundled with every 8-bit micro I recall using. On 16-bit micros that was around the time more powerful high-level languages started to become available (for example, XLISP was released in 1983. AmigaBASIC released in 1985 was quite powerful for its time). So you are correct that options were limited in '83 mainly because 8-bit micros were very, very storage (RAM and disk) constrained.
It was commercial, student, and hard-core amateur developers who developed in assembly in the '80s. C was only ever 'high level' when compared to assembly/machine code. Manual memory management was an indicator that it wasn't high level at all. That said, much commercial software was still written in assembly back then, as that was the only way to wring the performance out of 8-bit and even early 16-bit micros. It was the transition to 16-bit, when all that 6502/6800 code became obsolete, that C really started to take over.
In 1983 I programmed in both assembly and in C - and usually flipped back and forth between the two by using the assembly output of the C compiler. My experience was that C was a low-level language in that you could fairly easily see how the C was transformed into assembly. It was a great way, actually, to learn assembly.
I bet you would have killed for Compiler Explorer (godbolt.org) in the '80s. Actually, I wish I could integrate it directly into my editor today. (Well, preferably something running locally rather than a web service.)
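The low-tech version of that workflow still works today: ask the compiler for its assembly and read it alongside the C. A minimal sketch, assuming a gcc/clang-style driver:

    /* swap.c - small enough that the generated assembly is easy to follow.
       Compile with:  cc -O2 -S swap.c   and read the resulting swap.s,
       which is roughly the flip-back-and-forth workflow described above. */
    void swap(int *a, int *b)
    {
        int t = *a;
        *a = *b;
        *b = t;
    }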
> Apple Logo for the Apple II Plus and Apple Logo Writer for the Apple IIe, developed by Logo Computer Systems, Inc. (LCSI), were the most broadly used and prevalent early implementations of Logo that peaked in the early to mid-1980s.
> Aquarius LOGO was released in 1982 on cartridge by Mattel for the Aquarius home computer.
> Atari Logo was released on cartridge by Atari for the Atari 8-bit family.
> Color Logo was released in 1983 on cartridge (26-2722) and disk (26-2721) by Tandy for the TRS-80 Color Computer.
> Commodore Logo was released, with the subtitle "A Language for Learning", by Commodore Electronics. It was based on MIT Logo and enhanced by Terrapin, Inc. The Commodore 64 version (C64105) was released on diskette in 1983; the Plus/4 version (T263001) was released on cartridge in 1984.[9][10]
Are you Walter Bright, the creator of D? If yes, I was thinking of writing a proposal for WG14 some time in the future regarding slices/fat pointers. Would it be ok if I modelled it after the extension found in the betterC compiler, at least when it comes to syntax?
Feel free to model your work on this and/or on D as you see fit. It'd be great if you made it into an official proposal. That one thing will greatly benefit C programs, much more than any of the other improvements in the C Standard I've seen over the years.
Note that we are already looking into this. I had some proposals for C23 on how to improve arrays in C, but I could not finish this in time. But help is welcome.
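For readers unfamiliar with the idea, here is a hedged sketch of what a slice/fat pointer can look like when spelled as a plain C struct - loosely in the spirit of D's T[], but not the D/betterC syntax and not any actual WG14 proposal:

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical fat pointer: the pointer and the length travel together,
       so callees can bounds-check without a separate length argument. */
    struct int_slice {
        int    *ptr;
        size_t  len;
    };

    int sum(struct int_slice s)
    {
        int total = 0;
        for (size_t i = 0; i < s.len; i++)
            total += s.ptr[i];
        return total;
    }

    int main(void)
    {
        int xs[4] = { 1, 2, 3, 4 };
        struct int_slice s = { xs, sizeof xs / sizeof xs[0] };
        printf("%d\n", sum(s));   /* 10 */
        return 0;
    }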
Arguably the most important commercial applications on the IBM PC were written in assembly: Lotus 1-2-3 and WordPerfect.
It was a competitive advantage early in the 1980s, then turned into a handicap by the end of the decade when the performance and memory tricks didn’t matter as much as graphics and GUI.
True: people rolled their eyes at anybody trying to field a commercially successful product not coded in assembly. Languages were for proofs-of-concept, and for toys. And yes, that changed as 1990 approached.
Many ads at the time still used a strategy of "reasoning with the reader": explaining the technical benefits of the product, and how using it would make your life better in some way.
Sometime around the 90s, most advertising gradually shifted to emotional manipulation, which is empirically more effective at scale. The famous iPod ads, for example, said nothing at all about the iPod's technical merits, or how you'd benefit by using an iPod instead of other MP3 players. They just showed some vaguely cool-looking person listening to an iPod.
The "I'm a Mac, I'm a PC" ads depicted PCs as old, uncool dorks; while the Mac is fun and interesting and young. Not a word about any actual features or benefits of the Mac. Purely associative emotional branding.
This actually does work in terms of "selling more iPods at scale," though it is dissatisfying to that small segment of the population that cares about making informed and rational decisions. Most HN readers fall into this category, though there aren't enough of us in the world to carry mass advertising strategies.
> Sometime around the 90s, most advertising gradually shifted to emotional manipulation...
Yup.
> ... which is empirically more effective at scale.
I don't think that's the issue. Back in the 80s, stuff came out regularly that was significantly better than the predecessor (if there was one). Emotional manipulation took over when new versions no longer had significant technical advantages over existing competitors.
Right, iPhones became a fashion accessory. I guess a partial reason is that when everybody uses a smartphone you have to appeal to the masses, and pure reason doesn't reach them.
The same goes for high fashion in clothing. It doesn't matter if the clothes are the best to wear and the most protective and most resilient against wear and tear. We're in the era of tech-fashion, including wearable computing.
> Not a word about any actual features or benefits of the Mac.
From what I recall almost every "I'm a Mac, I'm a PC" ad's premise was a feature or task that the Mac did easier or better, leaving the PC deflated or envious.
Back then people often bought magazines for the ads.
I still buy magazines for the ads. For example, I buy Mopar Action for the ads that are targeted towards me for things I might want or need for my Dodge. When I open the mag, I want to look at the ads.
This is fundamentally different from guessing what I want to see based on my browsing history. If I open a site on cooking, I don't want to see ads for car parts or kitchen faucets, regardless of my history. I would want to see ads for cooking supplies.
> If I open a site on cooking, I don't want to see ads for car parts or kitchen faucets, regardless of my history. I would want to see ads for cooking supplies.
Is that not the way it works for you? I mean, I pull up allrecipes.com and see a bunch of food-related stuff like my local supermarket (and one less relevant ad for Iceland Air, no idea). Closer to the subject at hand, the modern "Byte Magazine" might be something like tomshardware.com, where I see lots of tech products being hawked (phones, a tablet, Xfinity service), and ads for the retailers that sell them (lots of Best Buy on the pages I saw).
I mean, sure, there are going to be exceptions. But in general ads on the internet seem at least reasonably relevant.
It really seems sometimes like sites like HN are turning into information bubbles, where concepts like "advertising in the modern world is a dystopian disaster" are... just taken as faith? The experience of regular people doesn't really agree, and it seems like we're becoming more detached as the years go by.
Another problem is the same C++ training ad would be served on every one of my site's pages. Magazines don't run the same ad on every page.
Instead, I now run affiliate ads for quality programming books from a list I curated. Ads I would like to see myself when browsing those pages. Ads that probably add value to the page, rather than subtract. No more Batman or C++ training ads.
> Another problem is the same C++ training ad would be served on every one of my site's pages. Magazines don't run the same ad on every page.
No, but they ran the same full-page BASF floppy disk ad or whatever at the end of the contents page every month for two years. Repetition in advertising has been a thing since the field was introduced. I can't believe you're only seeing it for the first time now. Even today, go pick up a Motor Trend and compare it to a Car & Driver (or Vogue and Elle, whatever floats your boat) and take a look. They're all running the same ads!
Now, sure, it's true that online ads afford the opportunity to saturate that print doesn't. But it's not any different at all. And it works! Which is why the advertisers (who, let's be clear, know their business a lot better than you do) do it.
With all respect, that sounds like a criticism of internet advertising c. 2008 or so. I mean, sure, weird stuff like that happens and there are always going to be anecdotes. But no, for a long time now advertising on targeted/niche/interest-based sites has followed that niche, for the obvious reason that that's where the best ROI on the advertising is.
I mean, sure, there will always be funny hiccups, and on the edges there are genuine issues of privacy and justice and market fairness to be discussed.
But the idea that we're in some kind of advertising dystopia is simply not the experience of regular users. It's a meme[1] being perpetuated in the tech community. Regular products purchased by regular people are being advertised very effectively, and on the whole with near-universal approval of the customers.
[1] And as mentioned, an increasingly detached and frankly slightly deranged one. Real concerns about privacy are now being short-circuited with nonsense about "But Their Ads", and that's hurting the discourse we actually need.
Have you considered that your experience may not be representative of everyone else's experience? You can't know the objective facts of what ads Walter sees. You also can't know how people experience ads subjectively.
For myself, I have a really hard time focusing on anything in the presence of visual or audio distraction, with ads just being one example. I wear earmuffs all day while working just so I can focus. The equivalent on the internet is adblock. Without adblock and uBlock origin's element zapper, I simply cannot function on the internet today.
Are you implying that my sensitivity to distracting noise in every aspect of life is somehow influenced by a meme about internet ads?
I don't see how you could read what I wrote and take it to mean that it's not possible that anyone ever, anywhere, saw a poorly targeted ad on a web page somewhere. In fact I see my point as sort of the converse: there's a deeply annoying undercurrent in HN discourse that seeks to use the word "advertising" as a shorthand for all sorts of ethical problems that are complicated and nuanced.
When, no, advertising is doing what it always has. Internet advertising, broadly, is well targeted. It just is. (For really obvious reasons! Of all the people who want ad targeting to work, the advertisers and the ad brokers are at the top of the list!)
Today's marketing is much more targeted and dark pattern driven.
There was an earnestness to many ads back then. Either "native" style ads, or price lists -- given researching anything was much more difficult without internet, magazines and print brochures were all you had. Ads were critical.
My perception of ads is they are more automated and naive than they should be.
I get an ad for something I want. I get it. Then I don't need it anymore, but I get the same ad over and over again as if it went from a single thing I wanted to a hoarding obsession.
Then by chance I get an ad for something I want, but I want it later in the year, not today. I click the ad but don't get it.
When I want it, after few months, it is never shown again, and I get dozens of generic unrelated ads instead.
I bought hundreds of Computer Shoppers, just because of the ads. I think there was an article or two in there sometimes. Funnily enough, that's where I learned the XOR trick. That thing was a monster, at least a couple of inches thick of nothing but computer ads.
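Presumably that's the classic XOR swap - for anyone who missed that issue, a minimal C version:

    #include <stdio.h>

    int main(void)
    {
        int a = 3, b = 5;

        /* Swap a and b without a temporary. (Breaks if both names refer to
           the same object, and a plain temp-variable swap is just as fast on
           modern compilers, so nowadays it's mostly a party trick.) */
        a ^= b;
        b ^= a;
        a ^= b;

        printf("a=%d b=%d\n", a, b);   /* a=5 b=3 */
        return 0;
    }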
As I recall they included just enough editorial material to qualify for the "literature" postage rate rather than the "advertising" postage rate, which was higher.
(those might not be the exact terms that were/are used)
Because it was a time of magic. Every month you could find things that were significantly better, significantly cheaper, or completely brand new. (And by "completely", I do not mean "a new product that does the same old thing". I mean something I'd look at and think "I never even thought of doing that with a computer".)
There was just a newness and excitement, and the ads show it.
Yea, I mean I was born in the 90s, and I have to say the ads are really really enjoyable. I love the amount of depth and detail, and the longer-form article style that they use.
And the beautiful serif typefaces (including in the titles!) and wild colors too! These days everything is Helvetica and friends, and bland shades of gray.
Imagine how I feel! I was heading into my sophomore year in high school in 1983, the same month this article came out. I wasn't a subscriber to BYTE then, but the memories of us learning Assembler and C at the time are almost as vivid now as they were back then.
I didn't realize it was more than 50% ads, more than most websites we consider terrible nowadays.
I think one reason is that back then, it was the way of keeping in touch with the commercial offering. Now, there are millions of review websites and user groups for that. A quick search can lead you to the most niche products easily. As a result, ads just try to sell you stuff you are already aware of, instead of informing you about a new product and its capabilities, making them a lot less interesting.
And even ads about shady products were kind of interesting.
Funny how the prices haven't changed much (seeing a "new computer" for $1,995), given the change in computing power and purchasing power (of the dollar)
I worked in a bookstore at the start of the 1980s. Its magazine section had many hundreds of titles (I stocked them). All of them were more or less that way: mostly ads by page area and page count, with the ratio of ads to content rising as you got toward the back of the magazine (but with most magazines maintaining the last couple of pages for distinctive editorial content).
Magazines are generally much slimmer nowadays. Magazine racks are generally smaller and less ubiquitous. I presume the ad dollars have mostly moved onto the web, and the great majority of magazines have shrunk—many disappearing entirely.
Love retro magazines. Byte had some great covers. I recently flipped through an old Compute! magazine looking at the type-in programs for different computer platforms at the time (IBM PCjr, Commodore, Apple, Atari).
I’ve been unable to convince Dall-e to render anything “in the style of artist Robert Tinney”. I love his work - and have several prints of his byte covers.
It was the September 1983 issue of COMPUTE! that opened the gates to my life as a programmer. They had programs that did interesting things but that also came with clear explanations. A while later they had a type-in word processor named SpeedScript that I used for years in the early part of my technical writing career.
Memory of being at 3Com, prior to 1991, and talking to a bunch of engineers about machine portability and the lack thereof. I said, "You probably could define a subset of C such that really portable code could be written."
One said, dismissively, "Yeah. Dream on."
Not a real bright guy, as it turned out.
1991: joined Oracle, where they already had a whole style book listing things you could and couldn't do in C, including naming conventions so that your names would port to every one of the 90+ platforms they supported. (spoiler: it was 6-character names, later expanded to 8.)
Nobody wanted to write a new linker for whatever machine they targeted. Finally, GNU made a portable linker and saved the world. It might have been the first to support 32-character symbols, just about enough to link C++ programs without misery.
Despite the historical value of the ads I quite dislike the ratio to content. I wanted to quickly scan the pages to get the gist what was the merit of C compared to its contemporaries and I spent half the time finding the actual content.
I remember being much better at only seeing the content when I was reading magazines like this in the past but I still wouldn't like to return to those times.
Also I think if it was possible to effectively ban all of the modern marketing techniques as many people want now the economic logic would result in paid magazines with content to ads ratio of 30 to 70.
At the time people bought computer magazines mainly for the ads. The articles were filler. Newspapers, too: editors in newsrooms actually called the reportage "filler".
BYTE differed in its filler being of typically better quality, but Computer Shopper was much, much bigger, and much more popular despite its execrable filler because it had more and better ads.
This is so true. I remember back in the day reading computer magazines for the ads, simply because you learned a lot about what new things were coming out. It allowed my young mind to dream about the possibilities. Sometimes the articles somewhat matched the ads, in terms of being modern, but most of the articles were not on the bleeding edge. PCMag probably had a 100:1 ratio of ads to content, if I remember right.
I don't know about "mainly for the ads," but I did enjoy them in the early 90s. Moore's Law was going full steam, and hard drives, RAM, and processors were all just barely fast enough to run the latest cool games. Every month you'd see ads for computers, components, and peripherals that were better in ways that really mattered. Computer Shopper was a phone-book-sized candy catalog.
I don't play leading-edge games anymore, so personal computers have been plenty fast for me for years. But I wouldn't mind reading Byte with 50% tasteful, non-creepy ads.
The ads were important yes, very much so. But the content was as well, even in the CS they had a few decent regular columns and features as well. I'd say people bought Byte at least as much for the content, CS more so the ads.
But I do miss those magazines and times, certainly was a lot more "fun" and interesting than today.
From personal memory in publishing in the early nineties, the USPO rated publications carrying less than a third of paid advertising pages as first class postage. An impossible cost. This had the useful effect of ensuring that unsold pages went at steep discounts, increasing as your imposition deadline approached. Imposition is the DTP term for layup, the arrangement of pages over a web offset press to paginate correctly after folding and cutting. With inventory so many options expiring P=1 worthless, I ended up involved in a kind of early computational advertising business. What was very different, and impossible to find a web publishing equivalent for, was the far better observability and discovery of the pre-long-tail in print advertising trading. Lots else looked much like it superficially does today for online.
Incidentally I think that multi month circulation delays were almost always caused by the International Postal Union rules for Direct Injection of bulk mail at wholesale rates. Very small countries you'd need a minimum of 5,000 items per lot. IPU rules effectively created a hysteresis inflection around global readership acquisition and acquisition costs that pumped advertising price cycles.
This is why I gave up on magazines in general (even before it was cool). It seems like their entire goal was to make actual content hard to find. The cover would have a list of headlines, then you'd look at the table of contents to see where that was, but it'd be under a different headline there. Then you'd finally get the page number, but around that spot, none of the pages had page numbers, and when you finally find what you're looking for it has an even different title than the cover and TOC. Then, once you start reading, you get to "continued on page ...". And again, none of those pages have page numbers near them, and there'd be yet a different title on the continued part. Not to mention most of the article would be fluff anyway.
In 1983 I was 6 and already starting to write code daily, using BASIC. But because the lack of something like Internet I was so culturally detached from real world IT that I learned C only in 1997: 14 years later.
I was in a similar situation; though I'm roughly a decade younger. I grew up in the BBS era, and experienced the internet first as gopher and ftp access (archie!) through the local schoolsnet. I bought "Using Linux" by Sams in 1995 solely to get access to GCC, because a C compiler was extremely difficult to come by.
And yet, within a year they became commonly available, for DOS and Windows even, thanks to DJGPP. The local schoolsnet added SLIP support, and we could access Delorie's website. While Microsoft and Borland were still charging an arm and a leg, and GNU couldn't be bothered to support non-free systems, it was Delorie who created that bridge to common users and opened the world of C programming to us.
Similarly, I only learned about C via the Microsoft QuickC manual, included with the Gateway 2000 ‘486 “Programmer Pack” option (also: Visual Basic 1.0!) in 1992.
How I miss bookstores.. I realize that they still exist, but for me as a (mostly average) comp sci student, the bookstore was the place that I could find the code snippets and algorithm solutions for my classes. I spent hours just reading programming books, magazine articles (like Byte)..taking notes..remembering techniques.. couldn't wait to try them out. Pretty nerd-y now that I look back on it. I actually found it kind of fun.. although this is the first time I've admitted it (taking the first step is the hardest). Now it is stack overflow and google, as we all know. good times.
I've never really embraced 'C'.. I thought case sensitivity was an anti-feature, along with null terminated strings.
However, with my adoption of an old forth dialect - mstoical and a desire to play with kamby, it's time for me to add a new SSD, and install Ubuntu 22.04 LTS, and get to work knowing this thing I've avoided forever.
Perhaps, eventually, I can port a sane strings library to C, like the one in Free Pascal.
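A hedged sketch of what the core of such a library might look like - a length-counted string in the spirit of Free Pascal's strings rather than NUL termination. The names are made up, and allocation checks and frees are omitted for brevity:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical length-counted string: the length travels with the data,
       so no scanning for a terminator and no silent truncation. */
    typedef struct {
        size_t len;
        char  *data;
    } Str;

    Str str_from(const char *cstr)
    {
        Str s;
        s.len  = strlen(cstr);
        s.data = malloc(s.len);
        memcpy(s.data, cstr, s.len);
        return s;
    }

    Str str_concat(Str a, Str b)
    {
        Str s;
        s.len  = a.len + b.len;
        s.data = malloc(s.len);
        memcpy(s.data, a.data, a.len);
        memcpy(s.data + a.len, b.data, b.len);
        return s;
    }

    int main(void)
    {
        Str hello = str_concat(str_from("hello, "), str_from("world"));
        printf("%.*s\n", (int)hello.len, hello.data);
        return 0;
    }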
Was 14, and on a Friday my boss gave me some money to go find a book on C. He would take care of the expense form. I read the book over the weekend, and he had me writing code on Monday. (The book had a diamond on the cover and didn't turn out to have the popularity of K&R.) He told me to find example code and copy and paste it. I wrote a terminal emulator and a manufacturing quotation database (he provided me with Greenleaf libraries). They both ran fast. All that accomplished, and I didn't know pointers well enough to be able to teach them. This was in '86. I also had a full collection of Byte Magazine, which was motivating. It is so much less nerve-wracking writing code today, knowing what to do if there are bugs in the language or libraries.
It's hard to get my head around what it would be like seeing C-like patterns for the first time if I was someone that already had a background in COBOL and Pascal. At my university, CS101 was taught with both COBOL and Pascal, but I had already had some C and 6502 assembly in the mid-80's. COBOL seemed like assembly with words instead of opcodes and cryptic operands. Pascal seemed like a more user-friendly C.
I can see why C is preferable to COBOL as the world moved to more commodified OSes and something better than assembly was needed for drivers & kernels, but it would be interesting to know what Pascal "idiosyncrasies" turned people to "portable" C. Any old timers here care to weigh in?
I was experienced with PDP-11 assembler when I was given a copy of K+R. Since many of C's semantics are based on the peculiarities of the -11 instructions, I understood it immediately. It was a breath of fresh air compared to Fortran or Pascal. I never wrote another line in either of those languages after reading K+R.
Dennis Ritchie reports that the increment operators actually came from B on the PDP-7. It didn't have auto-increment addressing modes, but it did have memory locations which increment when read.
"This feature probably suggested such operators to Thompson; the generalization to make them both prefix and postfix was his own. Indeed, the auto-increment cells were not used directly in implementation of the operators, and a stronger motivation for the innovation was probably his observation that the translation of ++x was smaller than that of x=x+1."
C had pointer arithmetic, which let you do systems work that you could not do in Pascal. By the later 1980s Turbo Pascal had extensions to fill the gap, and I liked it better, but school had me using Unix workstations that didn't have Turbo Pascal, so I switched to C.
In the early 80s, they didn't. Pascal was more or less unusable on the PC. Pascal remained unusable until it got a boatload of extensions. The trouble was, every Pascal added different extensions, making Pascal unportable.
The PC and clones were very slow writing to the screen if you used the BIOS or DOS, so it was widespread practice for application programs to write directly to the memory-mapped screen, do I/O-space operations, register interrupt handlers, etc.
Thus application code often looked like device driver code: maybe you (the application developer) wrote code that wrote to the screen directly, or you used libraries that did. By the late 1980s there were text-mode UI frameworks that supported resizable windows, the mouse, and widget sets like you'd use in a GUI application today.
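To make that concrete, here's a minimal sketch of the kind of direct screen write that was common then, written in the style of a real-mode DOS compiler (Turbo/Borland C, with far pointers and the MK_FP macro). It won't build with a modern compiler; it's only meant to show what "application code that looked like device driver code" meant.

    #include <dos.h>   /* MK_FP on Turbo C / Borland C */

    /* Color text mode video memory lives at segment 0xB800; each cell is a
       character byte followed by an attribute (color) byte, 80 cells per row. */
    void put_char_at(int row, int col, char ch, unsigned char attr)
    {
        unsigned char far *screen = (unsigned char far *) MK_FP(0xB800, 0);
        unsigned offset = (row * 80 + col) * 2;

        screen[offset]     = ch;    /* the character itself */
        screen[offset + 1] = attr;  /* e.g. 0x1F = bright white on blue */
    }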
Even the bad early C compilers were far more usable than Pascal. I know this, because where I worked at Data I/O we tried a whole bunch of them - Pascal, Fortran, and C.
Kernighan (and P. J. Plauger) had written a book called "Software Tools". It was supposed to give several reasonably lengthy examples of software that did actual useful work. It was written in RATFOR, which is a pre-processor for FORTRAN. Some time later, they re-wrote the book to use Pascal, calling it (predictably enough) "Software Tools in Pascal". After writing it, Kernighan wrote this paper, basically because he was thinking, "That should have been way easier than writing the same stuff in RATFOR. Why was that so hard?"
I used Pascal for two years professionally, and many of the issues in the paper I ran into. Pascal was just clumsy to use. It was a good teaching language, but not good for professional programmers in many cases. (C, on the other hand, was written by people trying to write an operating system, and turned out to be a decent language for writing operating systems in.)
Note well: All of this is true of the original Pascal. Things like Turbo Pascal improved it and made it actually a usable language. But even that wasn't portable - there wasn't a Turbo Pascal for anything other than the IBM PC, so far as I recall. And every other "improved" version was different from Turbo Pascal, so there was no portability between extensions either.
> there wasn't a Turbo Pascal for anything other than the IBM PC, so far as I recall
There was a Z80 version of Turbo Pascal that ran on CP/M machines (incidentally, one thing that's striking about the first several years of BYTE is how many huge Cromemco ads there are) as well as the Apple II with a Z80 card. That, along with x86 support, covered a lot of ground.
Let's just ignore the C dialects outside UNIX like Small-C and RatC, or that we had to wait until 1990 for a proper standard, and that not even K&R C was a given outside UNIX.
At one point in the 1980s I counted 30 C compilers available for the IBM PC. Programming on the PC dominated programming in the 80s; hardly anyone had access to Unix machines. Probably 90% of C programming was done on the PC.
The 1980s C++ compilers on the PC also dominated the C++ compiler use. C++ on the PC vaulted the language from obscurity into the major language it is today.
It probably depends on what time in the 80s as well; TFA is from 1983.
A while back I chatted with someone who worked on both C and Pascal compilers around that time period and got the impression that the majority of their customers were people running on 68k based Unix workstations. May have just been their niche I suppose.
I didn't start programming until closer to 1990, and started with Mix software's C compiler on a 286, because that's what I could afford.
I also used Mix C. I think it only sold for about $20 (plus $20 for the debugger?). It also came with an electronic tutorial ($10 more?) called "Master C" that I found very useful.
Not in Europe it didn't; over here it was all about QuickBasic, Turbo Pascal, and TASM.
And if we go to the Amiga, it was mostly about assembly and AMOS.
On Apple it was Object Pascal, assembly, and HyperCard; MPW with C++ came into the picture later.
One thing I do agree on: by the time Windows and OS/2 were taking off, C++ on the PC was everywhere, and only masochists would insist on using C instead of C++ frameworks like OWL, MFC, or CSet++.
Europe has many countries; I can assure you that I only saw Zortech in magazines after it was acquired by Symantec and shipped MFC alongside it.
Sadly I never saw it on sale anywhere; the graphical debugging of C++ data structures was quite cool to read about.
The delay in publishing a "proper" standard was due to the incredible success and usefulness of the de facto K&R standard. But as you point out, that was hard to find outside of Unix. I suspect this was mostly due to the effort required to implement the full standard library and/or resource limitations on many systems.
For example, there was a Small-C compiler available for the Atari 800 in 1982:
"... based on the Small C compiler published in Dr. Dobb's Journal"
If you look at the beginning of the manual, it has a list of what is and is not supported. They claim it is sufficient to compile C/65 itself, but lots of things we take for granted are missing.
So this revisionism about how great C's "portability" was is kind of ironic, when in reality C was full of dialects outside UNIX, just like the competition.
Kernighan had the experience then of writing "Software Tools in Pascal", updated from "... in RATFOR". RATFOR was a pre-processor for FORTRAN that made it look sort of Pascal-ish. People could use it to code and run unixy utilities on their machines that only had a FORTRAN compiler. Good times! Getting those translated to Pascal was a big enough nuisance to motivate the essay.
Modula-2 had already lost that race. It was another extended, incompatible Pascal among myriad others mostly still called Pascal: UCSD Pascal, Apollo Pascal, Clascal, Turbo Pascal, what have you.
Code written for any C subset that lacked features would still build on a more complete implementation.
This is fundamentally different from the Pascal case, where the extensions absolutely necessary for a compiler to be useful at all differed radically from one implementation to another.
If you know assembly, Pascal took away a lot and gave little in return, and it was verbose. I worked on an 80x25 monitor, so the verbosity was annoying. Only functions could return values, but they were not guaranteed to be pure. No early returns.
I also remember there were library functions that took a variable number of arguments, but you could not write variadic functions yourself. Really hated that.
Not all of these were fair criticisms, but they were enough to make me switch.
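For anyone who hasn't bumped into those limits, here's a small C sketch (function names made up for illustration) of the two things missed above in standard Pascal: an early return and a user-written variadic function.

    #include <stdarg.h>
    #include <stdio.h>

    /* Early return: bail out as soon as the answer is known,
       no flag variable or single-exit contortions required. */
    int find_index(const int *a, int n, int wanted)
    {
        for (int i = 0; i < n; i++)
            if (a[i] == wanted)
                return i;
        return -1;
    }

    /* A user-defined variadic function -- in standard Pascal only
       built-ins like write/writeln could take a variable argument count. */
    int sum_ints(int count, ...)
    {
        va_list ap;
        int total = 0;
        va_start(ap, count);
        for (int i = 0; i < count; i++)
            total += va_arg(ap, int);
        va_end(ap);
        return total;
    }

    int main(void)
    {
        int xs[] = { 3, 1, 4, 1, 5 };
        printf("index of 4: %d\n", find_index(xs, 5, 4));
        printf("sum: %d\n", sum_ints(3, 10, 20, 30));
        return 0;
    }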
There was no internet. Anyone who aspired to have more than a few other people read their work had to find someone willing to publish it. The shortage was not on the writer side, it was on the publication side.
I know it is not directly related to this magazine, but this nostalgia brought back the name of my favorite magazine of those days: Ahoy!
And of course, who can forget the wonderful ads for the games from Infocom (Zork et al.), who would "stick their graphics where the sun don't shine!" They were text-based games, and very successful in those days.
They are still available (some released freely by the publisher, others not), and the paraphernalia (which often contained information required to play the game) can be found at http://infocom.elsewhere.org/gallery/greybox.html.
My attempt to learn to program in the early 90s took me to a small business college, and I was eager to learn C. Our instruction was so execrable that half the class took off after the evening's first break and spent the rest of the night eating calamari at the restaurant across the street. Eventually our complaints gained sympathy in the office; the teacher was fired and we all got full passing credit for two quarters of C.
Unfortunately I didn't learn the language well enough to ever use it.