My first paid programming job was writing a Logo Adventure program for C64 Terrapin Logo. They wanted a simple non-graphical game that showed off Logo's list processing capabilities, and I just used Logo's top level interpreter as the parser! That saved a lot of coding, but it did let you cheat (aka "learn to understand and program and debug Logo").
Here is the source code to LLogo in MACLISP, which I stashed from the MIT-AI ITS system. It's a fascinating historical document, 12,480 lines of beautiful practical lisp code, defining where the rubber meets the road, with drivers for hardware like pots, plotters, robotic turtles, TV turtles, graphical displays, XGP laser printers, music devices, and lots of other interesting code and comments.
BIBOP is the dynamically expandable version of MACLISP, the SAIL standard MACLISP. Essentially, the main advantage of BIBOP is that whenever one of the expandable spaces runs out of space, BIBOP requests a larger core allocation from the monitor and the delinquent space grows in the allocated memory.
December 1973; updated March 1974
The (in)famous "Bibop" (pronounced "bee-bop") LISP scheme has been available for some time now and seems to be more or less reliable. Bibop means "BIg Bag Of Pages", a reference to the method of memory management used to take advantage of the memory paging features of ITS. The average LISP user should not be greatly affected in converting to this new LISP (which very eventually will become the standard LISP, say in a few months).
WEDNESDAY FEB 13,1974 FM+7D.3H.28M.38S. LISP 746 - GLS -
AS OF VERSION 746, LISP SYSTEMS OF THE SAME VERSION WILL SHARE
PAGES AMONG THEMSELVES; I.E. A LISP, MACSYMA, AND CONNIVER CAN
ALL SHARE PAGES COMMON TO THE LISP SYSTEM.
BIBOP LISP HAS BEEN GREATLY RESTRUCTURED; I WILL RE-EDIT THE BIBOP
DOCUMENT AS SOON AS I CAN (SAY WITHIN THE NEXT WEEK). -- GLS
Franz Lisp used BiBOP too, but not in any way that worked with the paging system of the host machine; it was just for determining the type of an object.
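Franz's use of the idea is easy to sketch: under BiBOP, an object's type is a property of the page it lives in, so type dispatch is just an address calculation. Here's a toy illustration (the page size and type tags are invented for the example):

```python
# Toy illustration of the BiBOP ("BIg Bag Of Pages") idea: an object's
# type is not stored in the object itself, but determined by which page
# its address falls in.  Page size and type tags here are made up.
PAGE_SIZE = 512           # bytes per page (illustrative)
page_type = {}            # page number -> type tag

def allocate_page(page_number, type_tag):
    """Dedicate a whole page to objects of one type."""
    page_type[page_number] = type_tag

def type_of(address):
    """Recover an object's type from its address alone."""
    return page_type[address // PAGE_SIZE]

allocate_page(0, "cons")
allocate_page(1, "fixnum")
type_of(700)   # address 700 falls in page 1 -> "fixnum"
```

The payoff is that no per-object type field is needed, and (in the MACLISP case) a type's space can grow simply by dedicating more pages to it.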
I looked into this a bit and zeroed in on the implementation of sine/cosine computation, which is pretty critical for turtle graphics.
I didn't follow every last instruction, but it looks like it used a 91 (or maybe 92)-element LUT of 4-byte floats, split into 2 halves. This allows direct look-up of the SIN of every angle in the first quadrant, that is 0 to 90 degrees, plus the value for the next 1-degree angle.
It looks like, after looking up the two precalculated values, linear interpolation is performed between them.
(Also, the FP numbers are in a format that must have been easy to do software arithmetic on. Leading byte is exponent, with bias of 126; next 3 bytes are sign and mantissa, with no implied 1-bit. This has just a hair less precision than a modern 32-bit 'float'. 7A 47 7C 2D is the entry for SIN(1), and is equal to 0.017452407... while double precision SIN(1) is 0.0174524064372835...)
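Here's one plausible decoding of that format as a Python sketch; the exact mantissa layout is my guess, chosen to be consistent with the sample value quoted above:

```python
def decode_logo_float(b):
    """Decode a 4-byte float in the format described above.

    Assumed layout (a guess that reproduces the sample value):
    byte 0 = exponent with bias 126; bytes 1-3 = sign bit followed by a
    23-bit mantissa, with no implied leading 1-bit.
    """
    exponent = b[0]
    sign = -1.0 if b[1] & 0x80 else 1.0
    mantissa = ((b[1] & 0x7F) << 16) | (b[2] << 8) | b[3]
    # Treat the mantissa as a pure fraction (no hidden bit), scaled by
    # 2^-24, then apply the biased exponent.
    return sign * mantissa * 2.0 ** (exponent - 126 - 24)

# The table entry for SIN(1 degree) quoted above:
decode_logo_float([0x7A, 0x47, 0x7C, 0x2D])  # ~0.0174524
```

With this reading, 7A gives 2^(122 - 150) = 2^-28, and the 23-bit mantissa 0x477C2D = 4684845 scaled by that comes out to 0.01745240..., matching the quoted value.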
Next, I bodged together a version of what I assume the linear interpolation step does. In both double precision and simulated LOGO precision, the maximum relative error is about 0.00005 and the maximum absolute error is about 0.000045, which would be awfully hard to detect when plotting to an entire display with about as many pixels as an application icon on your modern high-DPI telephone. On the other hand, that's only 4 correct digits.
This is an interesting contrast to the Commodore BASIC implementation of sin/cos that I'm familiar with. It used a fairly high degree polynomial, which might have gotten a few more accurate digits (5 digits for sin(1.5 degrees)!) but is sure to be quite a bit slower than 2 table lookups, a multiply, and a few add/subtracts. (plus range reduction)
0.0261759515 Estimated Apple LOGO SIN(1.5 degrees)
^
0.026176948307873153 Modern Double Precision SIN(1.5 degrees)
0.0261769483 Commodore 64 BASIC SIN(1.5 degrees)
^
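The lookup-plus-interpolation scheme, as I understand the description above, can be sketched like this (the 92-entry table and the rounding to ~7 significant digits are assumptions standing in for the real table and float format):

```python
import math

# A sketch of the assumed scheme: a table of sin(0)..sin(91) degrees
# (one extra entry so the interpolation can always look one degree
# ahead), with linear interpolation between adjacent entries.
TABLE = [round(math.sin(math.radians(d)), 7) for d in range(92)]

def logo_sin(degrees):
    """Approximate sine for 0..90 degrees: two lookups and a lerp."""
    whole = int(degrees)         # whole-degree index into the table
    frac = degrees - whole       # fractional part for interpolation
    lo, hi = TABLE[whole], TABLE[whole + 1]
    return lo + (hi - lo) * frac

logo_sin(1.5)   # ~0.02617595, matching the estimate above
```

For SIN(1.5 degrees) this averages the table entries for 1 and 2 degrees, which is exactly where the small error in the comparison above comes from: the chord under the sine curve sits slightly below the curve itself.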
The only time anyone would notice slight errors in turtle position was when trying to do animation by drawing over a previous drawing with an XOR or ERASE or background-color pen. It might be necessary to explicitly reset the turtle position to the original start. I do remember having to be careful with the anti-aliasing so that, when pointing in any random direction, FD 100 RT 180 FD 100 would draw the same pixels in either direction.
If you don't mind me asking, did the book do any explaining of why the magic computer words were in a foreign language (English)? Had you been exposed to much English already when you picked up the book?
As a native English speaker I feel fairly ignorant of how programming is taught in other (human) languages. I recall a quote [1] by Eric S. Raymond commenting (or perhaps speculating?) that Linus commented his kernel code in English because it never occurred to him to do otherwise. But I'm curious for your perspective as I imagine the experience of a child learning from a book is quite different from that of an adult who is immersed in the English-dominated "hacker culture" on the internet.
I'm not the GP, but here's my perspective learning programming in a 3-year technical high school program in Brazil (I had a little bit of exposure before, with Windows batch files and shell scripts).
We started with a language called Portugol or "structured Portuguese" which is something like a formalized pseudo-code language and was used to teach the basic concepts of structured programming: variables, control flow, subroutines.
We spent about three months working with Portugol exclusively, never actually running it on a computer, just using it to understand programming logic. After that we moved to C, and eventually "C with classes" C++. Comments were still written in Portuguese.
The transition to C was fairly easy in my experience, because the actual English vocabulary you need to understand to program in C is very small, and because the concepts themselves were taught in Portuguese with Portugol, so you only had to do dictionary translation, really.
Seems a shame that there wasn't a Portugol interpreter/compiler handy, to at least see things running on a machine before moving to C. It probably would have been a good class assignment for more advanced students to build something like that.
How awesome would it have been to be a high-performing student, writing programs that a lower-performing student would have to execute manually on your simulated computer system. So much good stuff here. Do you want to lay out the NAND gate or BE the NAND gate? Or be the one who decides which problems the NAND gate solves?
"Portugol" is a great pun on "Algol"! Are there also programming languages called "Portutran" for scientific computing and "Portubol" for business data processing?
We never compiled or ran Portugol (IIRC, it was so vaguely defined that you'd be hard pressed to implement it), so practical matters like how applicable it was to scientific computing or data processing were never a concern.
I always wondered how non-English-speaking developers from other countries who were early adopters of HTTP (or any new tech) reacted to words like "GET" and "Content-Type" and other things that were clearly invented by Westerners. What would we think if we had to implement something new and use words we didn't understand?
The way I learned programming as a kid, while also simultaneously learning English, involved looking at the new weird term, grabbing a dictionary, and trying to figure out why it's named the way it is. Sometimes it was obvious, sometimes it wasn't, but generally it wasn't difficult - after all, this is just vocabulary. Single words. Pretty much the easiest part of learning a language.
I can tell you a similar anecdote. My family used to live in an Eastern European country that was communist at the time. Computers were hard to come by, and the only place you could get good hardware was in Germany.
My dad went to Germany and bought an Atari (I forget which one exactly), and realized it was all in German. Since he didn't know any German, he learned enough of it to navigate the UI.
In other words, the time cost of learning German was less than the cost of the computer + the trip to Germany. Basically necessity.
I wonder how often it is an advantage to not know English. Then you might have less baggage from outside the computer and you might map the token to what it actually does in the system (rather than what the token was aspiring to be). Assuming you can get to that point, of course.
I think you underestimate the number of people who can speak English. My family is Afrikaans, and it is our first language. My father taught himself to program with op codes on rudimentary computers back in the 70s. Then he learned assembly and later C. The English words never bothered him or his brother when they learned to code, because they could already speak English.
It was the same for me. I started learning in VB.net and then Java/Python. The English syntax never bothered me because I started learning English when I was 7 years old.
Trying to code in another language would just be a nightmare. So it makes sense to keep the syntax in one language (English in this case) for consistency.
I think I had the same book as the GP -- or a follow-up book at least: "Superlogo voor kinderen" (Superlogo for kids) from A.W. Bruna publishing. All commands were translated to Dutch. At the time, I had no idea I was using a LISP-like language, but I recall being incredibly excited when the diskette arrived in the mail. I drew so many spirals with that little turtle. It had built-in sprite and sound support as well, making it a truly early, kid-oriented, and quite feature-complete language.
My native language is Dutch (Flemish, technically). As a Belgian, it is not that weird to me. We don't dub things on TV, so we grow up fairly familiar with English.
So when learning programming it did not strike me as 'weird' that the language was English.
Nowadays, because of online gaming, it is probably even less of a problem for kids.
IIRC there was a Dutch-language version of Logo. That said, I personally learned English by trying to parse the manual for Game Maker and asking my parents lots of questions. Of course, I found out many years later that a Dutch-language version of the documentation was available from a third party.
It never occurred to Eric Raymond to do otherwise than to speculate and pontificate about what Linus Torvalds thinks (or how intelligent black people are, for that matter), instead of actually asking. (See "Linus's Law", which should actually be called "Raymond's Wishful but Invalid Speculative Fallacy". But he cleverly named it after Linus, so he wouldn't get blamed when it turned out to be wrong. Heartbleed!)
I'd hazard a guess that Linus deserves credit for thoughtfully writing Linux in English on purpose, rather than it simply "never occurring to him to do otherwise".
On the other hand, Yuri Takhteyev wrote a book called "Coding Places: Software Practice in a South American City" (MIT Press, 2012), about the social side of software and the culture surrounding the Lua programming language, developed at the Pontifical Catholic University of Rio de Janeiro in Brazil. And he actually spent three years interviewing the developers to find out what the facts were, instead of speculating.
The name of the language Lua is Portuguese for "Moon", and the developers' native language is Portuguese, but the Lua language, code, documentation, mailing lists, and papers themselves are all in English.
He interviewed the developers and members of the global community, and examined why it had become so successful (modulo [or in spite of] the fact that it was extremely well designed and implemented, of course). His conclusion was that the developers made a conscious decision to write the code, comments, and documentation in English instead of Portuguese, because it was the de facto language to use if you intended to foster an international global community around the language, which they did.
In “Coding Places: Software Practice in a South American City” Yuri Takhteyev depicts a group of developers from Rio de Janeiro working on software projects with global aspirations. His ethnography, conducted in the span of three years, provides rich detail and insight into the practice of creating a programming language, Lua, and struggling to form local and global communities. In his narrative, Takhteyev sets off with a task that is particularly akin to anthropological studies of globalization: to specify socioeconomic and political forces shaping localities and creating instances of production and circulation of transnational scope. We asked him a few questions related to the book and his research on the topics of globalization, computing expertise, and politics of information technology. Enjoy!
Coding Places. Software Practice in a South American City. By Yuri Takhteyev
An examination of software practice in Brazil that reveals both the globalization and the localization of software development.
Software development would seem to be a quintessential example of today's Internet-enabled “knowledge work”—a global profession not bound by the constraints of geography. In Coding Places, Yuri Takhteyev looks at the work of software developers who inhabit two contexts: a geographical area—in this case, greater Rio de Janeiro—and a “world of practice,” a global system of activities linked by shared meanings and joint practice. The work of the Brazilian developers, Takhteyev discovers, reveals a paradox of the world of software: it is both diffuse and sharply centralized. The world of software revolves around a handful of places—in particular, the San Francisco Bay area—that exercise substantial control over both the material and cultural elements of software production. Takhteyev shows how in this context Brazilian software developers work to find their place in the world of software and to bring its benefits to their city.
Takhteyev's study closely examines Lua, an open source programming language developed in Rio but used in such internationally popular products as World of Warcraft and Angry Birds. He shows that Lua had to be separated from its local origins on the periphery in order to achieve success abroad. The developers, Portuguese speakers, used English in much of their work on Lua. By bringing to light the work that peripheral practitioners must do to give software its seeming universality, Takhteyev offers a revealing perspective on the not-so-flat world of globalization.
Mine wasn't exactly the same, but yes! I think the first program I ever wrote was in LOGO for the TRS-80. I wound up playing more with its built-in BASIC console than my LOGO cartridge though.
Note there were two versions of Logo for the Apple II. The other was developed by Logo Computer Systems Inc. (I was one of the staff). Our version was written first in Lisp (on Lisp Machines) then manually compiled to assembler. The Lisp development environment included a full 6502 emulator, allowing the developers to execute Logo either at the Lisp level (fast) or to drop down into the instruction level (slow). The emulator also emulated the Apple II's graphics.
There is a gray area there. But we are talking about translating an existing model program and algorithms, not looking at the program as a black box and recreating its behavior.
As the person who translated the Pascal Logo model to TI9900 assembler it was definitely the case that at the beginning of the project I translated every line of Pascal to a few instructions of assembler. By the end I was just looking at the gist of what a routine would do (its “contract” as they say) and writing it in assembler as you would new code.
It’s much like translating a literary work. There’s a famous quotation (which I’m not comfortable relating in full) about translations being either faithful or beautiful but not both.
Essentially maybe I became a fluent native speaker of 9900 assembler.
Wow, what a blast from the past. My introduction to programming was on an old Apple II with BASIC and LOGO. I was in the 6th grade at the time, and never did anything as complex as looping to draw spirals - I basically used LOGO like an etch-a-sketch.
Me too.
I really liked Logo. I worked as a "child volunteer" at an art museum's "Festival of the Future", making Logo spirographs for interested visitors. It was the early 80s, and the Apple ][+'s 7 colors (2 of them white and 2 black..) were pretty bad compared to today, but we didn't care because not much was better.
BASIC was built in, so I ended up going back to that. I learned Logo, Pascal and BASIC, but BASIC being built in, and not having access to Pascal at home, made my choice for me.
I did return to Logo in high school on Macs for some neat fractal and equation-graphing programs.
Hey! Hey! Stop bad-mouthing the Apple II's graphics capabilities. I'll have you know the computer has not 7 but 8 high-resolution colors (of which two are white and two are black, but who cares about being a little wasteful when you have so many colors to choose from?).
Brings back old memories of the Beagle Bros and Winfall (Windfall?) magazines and things.
I recall seeing LOGO and turtles in some of the magazines but never got the chance to try 'em out. Back then I used to just type out snippets of what, in hindsight, was pseudo-code from all sorts of places, hoping they'd work. Then I attempted to enter all the machine code for an assembler that came in a big red binder (it didn't work, despite the checksums, but you also had to write the 'editor' yourself, so I suspect I stuffed that up).
I started on the Apple II with BASIC. Then, some computer teacher convinced my parents that I should be learning LOGO instead and gave me lessons. After spending some time on it, I found it sufficiently frustrating that I actually gave up and went back to BASIC.
I don't recall exactly what it was that I didn't like about LOGO back then. One thing probably was that I needed to first boot into a "special environment" to run my code, so it didn't feel like I was writing a real program for the machine. Another might have been the emphasis on moving a turtle around, versus actually doing anything with text.
Alan Kay was quite inspired by a groundbreaking game series called "Thinkin' Things," which had a visual blocks programming language for controlling and drawing colorful patterns with marching bands, football players, and cheerleaders: "Let's build a halftime show"!
>Alan Kay on "Etoys, Alice and tile programming": "This particular strand starting with one of the projects I saw in the CDROM "Thinking Things" (I think it was the 3rd in the set). This project was basically about being able to march around a football field and the multiple marchers were controlled by a very simple tile based programming system. Also, a grad student from a number of years ago, Mike Travers, did a really excellent thesis at MIT about enduser programming of autonomous agents -- the system was called AGAR -- and many of these ideas were used in the Vivarium project at Apple 15 years ago. The thesis version of AGAR used DnD tiles to make programs in Mike's very powerful system."
I did the same in grade 7 on an Apple that was shared between two classrooms, and almost never used. I probably used it for an hour that year doing different diagrams in LOGO. Never used any of the conditionals or list processing features of the language though. For someone into computers at that age who had tinkered with BASIC, it was unforgettable.
The Apple in my classroom was also never used for the most part. And with no hard drive and no spare floppy disks, I couldn't save any of my work anyway which made it hard to learn past a very basic level. But it planted the seed, and allowed me to hit the ground running with QBasic :)
Later on assemblers like Merlin and ORCA/M added the ability to assemble source files as separate objects which were then linked.
Earlier assemblers (and the later ones!) also typically supported a capability for one source file to reference another, which lets a large assembly chain itself along without needing the whole thing in memory at once.
At runtime, you could use overlays to load program modules in and out of RAM dynamically, so you'd be surprised at the (relative) sophistication of some software packages back then!
FWIW, VisiCalc - the first spreadsheet program for personal computers - was developed in 1979 on the MIT Multics system, which was a Honeywell machine, and Electric Pencil - the first word processor for personal computers - was (as far as I can tell) developed in 1976 on the Altair.
A PDP-11/45 or 11/70 would have much more memory, but being a 16 bit machine would still have been unable to handle this file in a conventional editor. Honestly I don't know if there were paging editors available for common environments. Certainly nothing on Unix would have worked.
Per the link, the file was edited and built on a PDP-10 running ITS, which was a 36-bit machine with an 18-bit address space.
Interestingly, almost everything else I've seen from ITS uses the 6 bit upper-case-only encoding. This is pretty clearly ASCII.
I watched an interview with a Sinclair Spectrum programmer who used a TRS-80 computer and a custom hardware interface to send code assembled on the TRS-80 directly into the Spectrum's memory, allowing quicker development and the use of the TRS-80's floppy and hard disks.
Using a serial bus to remote build/debug an application device is a venerable tradition. I know I still do that with USB and Android/iOS devices connected to a PC.
Summer 1981, 11 years old, learning logo on a ][. It sure beat the hell out of trying to draw a straight line in HIRES mode in assembly: goddamn skipping every six horizontal lines made no sense to my pre-teen brain that couldn't grok modulus.
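For anyone else whose pre-teen brain fought with that layout: the Apple II HIRES screen interleaves its scanlines in a famously non-linear way, and the well-known formula for each row's base address looks like this (sketched in Python rather than 6502 assembly):

```python
# The Apple II HIRES screen's non-sequential row layout: this standard
# formula computes the base address of each of the 192 scanlines in
# HIRES page 1 ($2000-$3FFF).
def hires_row_address(y):
    """Base address of scanline y (0-191) in HIRES page 1."""
    return (0x2000
            + 0x400 * (y % 8)          # rows within a group of 8 are $400 apart
            + 0x80 * ((y // 8) % 8)    # groups of 8 rows are $80 apart
            + 0x28 * (y // 64))        # thirds of the screen are $28 apart

for y in range(4):
    print(f"row {y:3}: ${hires_row_address(y):04X}")
# rows 0-3 land $400 bytes apart: $2000, $2400, $2800, $2C00
```

Consecutive rows being $400 apart (not $28) is exactly why a naive "next line" calculation draws stripes instead of a solid block.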
6502 assembly is so nice, I wrote some apps in the early 80's. You can learn the language in a few minutes. But all sorts of interesting clever things were possible.
6502 assembler on C64 is basically what taught me how to program. It was just as important to learn the memory map so that you could access low-level features by reading and writing at special memory addresses. The Compute! books were especially useful.
It's so cool to look at that 6502 assembly. I had an Atari 400 computer, and I had this cool book that taught assembler for the 6502, and I hand-compiled my programs into bytes and put them in DATA statements so I could call them from BASIC. I had this great book called "De Re Atari". Those were fun times. I was in high school and the future was unlimited!
Hot diggity dog! Like lots of folks, LOGO on the Apple II was my first exposure to computer -- thanks to one Mr. Milligan in Westbrook, ME -- some of the other kids in my class did some pretty amazing graphics with it -- the one that is particularly clear in my mind was a well-proportioned and '3D-type' rendering of the brandmark of the band 'Foreigner' -- I had no idea at the time how to even pronounce that word, but the fact he'd done it with the turtle sure made an impression on me
Logo was my first programming language! I recently rewrote my own version of it so that I could draw a time-proportional train map. I wrote "fd 5" for 5 minutes, for example.
The version I wrote can read values from a spreadsheet, so could be used in other applications too. I think this kind of thing could be a great way for a young person to go from a homework project to the front of Hacker News. If any of you have kids, I'd be happy to help gather the data for some other subway networks!
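A scaled-forward Logo interpreter like that can be sketched in a few lines; the command set and scaling parameter below are illustrative, not the actual project:

```python
import math

# A minimal sketch of a Logo-style turtle interpreter, where "fd 5"
# advances in proportion to some quantity (minutes, in the train-map
# example above).  Command names follow Logo convention.
def run(program, scale=1.0):
    """Interpret 'fd N' / 'rt N' / 'lt N' commands, returning the path."""
    x, y, heading = 0.0, 0.0, 0.0      # start at origin, facing "north"
    path = [(x, y)]
    for line in program.strip().splitlines():
        cmd, arg = line.split()
        n = float(arg)
        if cmd == "fd":                # forward, scaled (e.g. by minutes)
            x += scale * n * math.sin(math.radians(heading))
            y += scale * n * math.cos(math.radians(heading))
            path.append((x, y))
        elif cmd == "rt":              # turn right n degrees
            heading += n
        elif cmd == "lt":              # turn left n degrees
            heading -= n
    return path

run("fd 5\nrt 90\nfd 3")   # [(0, 0), (0, 5), (3, 5)], up to float rounding
```

Feeding the returned path to any plotting library gets you the map; the `scale` knob is what turns "minutes" into distance on screen.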
Also, where should I contribute old Apple II software? I recently used ADTPro to copy a lot of old programs onto my MacBook Pro as DSK images.
I've also got a larger selection of DD floppies, HD floppies, JAZ disks, SCSI disks, IDE disks, MacFormat and other CDs. Please tell me if you want the links!
I remember (quite clearly) the day I was introduced to the Apple II+ and shown Logo. It was way over my (middle school) head -- functions? recursion? -- so I went with BASIC and later machine language, but the ideas stuck and percolated and later helped me understand Emacs LISP and functional programming. I thought the integration with the turtle, so that you could make graphics, really was slick (compared to direct explicit mode in BASIC).
I'd love to get my hands on an actual physical Logo turtle.
Apple II did use ASCII. But ASCII defines 128 code-points. The other 128 code-points are specific to the character encoding or "codepage".
They don't like Apple's "idiot" character encoding (which is custom and Apple II specific), so they throw away the top bit with the AND, leaving only ASCII characters.
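In other words (a sketch, assuming the usual Apple II convention of storing normal text with the high bit set):

```python
# Stripping the Apple II's high bit to recover plain ASCII, as described
# above: clearing bit 7 maps "high ASCII" down to the standard range.
def to_ascii(byte):
    """Clear bit 7 of a character byte."""
    return byte & 0x7F

# 'A' stored with the high bit set ($C1) comes back as plain ASCII $41:
chr(to_ascii(0xC1))  # 'A'
```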
Logo was critical in my understanding of subroutines and got me away from the dreaded GOTO command in BASIC. I can't say I did much more than use it as an expensive Spirograph, but it is certainly responsible for my ability to understand function calls and libraries and how to use them in modern object-oriented languages. That's just instinct for new programmers nowadays, but in the 80s and early 90s it was a difficult concept for many to grasp.
Could this be a trick to use the top of the stack like another register, and use all of the one-byte stack instructions on it? If it is using the top of the stack as a register, where then would the real stack be?
Terrapin Logo for the Apple ][ and C64 came with a 6502 assembler written in Logo by Leigh Klotz, that they used to write Logo primitives (for controlling the turtle, etc).
It would be ambitious to make a self hosting 6502 Logo meta assembler, by porting the entire 6502 assembly language Logo source code to the 6502 Logo Assembler!
No, this version was never released. People with fond memories of Logo on an Apple II would probably have used a later version licensed to Terrapin. Or the Krell version. Or the completely separately implemented LCSI Logo mentioned elsewhere in this thread.
Good old 6 character identifier names. That really required some creativity, both to keep some descriptive value as well as to not generate identical identifiers by accident.
I found it in backups from ITS. ITS is the MIT operating system for the PDP-10, which was the workhorse around the AI Lab, Logo Lab, Lab for Computer Science, Mathlab, Dynamic Modeling group, etc.
Hmmm, I’m vaguely remembering uploading the TI-Logo source to the MIT ITS PDP-10 for backup purposes too. If so it might be there also. You’d want to be careful about the copyright on that. Though I’d love to see it again
That's why I try to run most of the finds by the original authors. I sent you an email with some more details; or at least I tried and it didn't bounce.
https://medium.com/@donhopkins/logo-adventure-for-c64-terrap...
http://donhopkins.com/home/archive/lisp/llogo.lisp