> In her “diagram of development,” Lovelace gives the fourth operation as v5 / v4. But the correct ordering here is v4 / v5. This may well have been a typesetting error and not an error in the program that Lovelace devised. All the same, this must be the oldest bug in computing. I marveled that, for ten minutes or so, unknowingly, I had wrestled with this first ever bug.
The real mark of a non-trivial program is that it doesn't work on the first try.
It's incredible how Babbage, frustrated that the mass-production precision machining technology necessary to make his simple engine work didn't exist yet, decided that the best way forward was to design a new system an order of magnitude more complex and then go to Italy in the hope of finding more advanced manufacturing.
He'd want to do something, and hit a roadblock, so he'd design his own tool (he once wrote his own font because he didn't like the way the built-in ones worked at teeny point sizes).
Best damn engineer I ever knew, but I had to keep an eye out for rabbitholing.
Obviously yak shaving is a hazard in that it can result in the original project getting abandoned or deadlines being missed, but often the tools you develop along the way are more (economically) valuable than the original project. They're often more widely applicable and narrower in scope, so they're more likely to get done and more likely to find an audience.
An example that comes to my mind is the Rust library mio. The Metal database for which it was the I/O component never materialized. But mio is a core component in the Rust ecosystem.
Similarly, many applications could benefit from a font that's legible at tiny sizes, not just the one that it was developed for. (Though obviously in most work cultures, this would be considered inappropriate, and for good reasons. My remarks apply mostly to greenfield research/personal projects where deadlines are loose.)
We had a developer write his own System.out.println in Java. But it wasn't because he didn't like the built-in System.out.println, it was because he didn't know it existed. He had a PhD and was supposedly a senior developer. He didn't last very long.
He must have had a very interesting educational path to write functional Java code without knowing about println. I marvel at that. I almost admire it. Because I think pretty much every tutorial in the world for every language starts with printing, and every project has it someplace. He must have learned via some esoteric means.
Same here, and I have worked alongside engineers from both ends of that spectrum. The only thing the degreed individuals seemed to bring to the table is that they typically did not know what they did not know, and thus were much more subject to Dunning-Kruger.
I once worked with a dev that implemented his own forEach in JavaScript. And literally used an ECMAScript version 6 feature to do so! (native forEach in JS showed up in ECMAScript 5) Meanwhile, that project already had jQuery and Underscore... which also had their own versions of forEach! It was a bizarre choice.
To be fair that was quite a late addition to the language all things considered. It could be he learned Java before it existed, or had most of his experience in 1.4 (which survived quite long in some environments because 1.5 was such a huge overhaul). But still, a good dev will keep up to date with language features.
My thinking exactly. He could've created a market for adding machines and, building on commercial success, used the revenues to build more ambitious machines.
And he may well have done, if Ada Lovelace hadn't got cancer: she had managed to convince him to let her run his business, leaving him to deal only with R&D.
I didn't just create my own font, I created a font editor too because nothing else was just right (context: MSDOS 5/VGA era). For clarity, I was not that employee :)
Up until quite recently I used to render a fixed-width TrueType to bitmaps so I could fix some rendering and typographical foibles, in order to use it with rxvt.
I got the idea from another cursor I'd seen back then, and the basic tail was based on it. I think I did the animated ones, the blue dot was my own addition, and the fox head is entirely my own creation.
It doesn't actually flicker in and out like that, it seems to be some quirk from recording it.
+1 for the Babbage/Lovelace history. However, IMO, although the two facts are separately true:
(1) he was let down by precision machining not existing (Tim Robinson https://www.meccano.us/difference_engines/rde_1/ says that "I have no doubt that if the Meccano of the 1920's had existed 100 years earlier, Babbage would have been entirely successful in his quest"), and
(2) he designed a more complex system, tried Italy, etc,
I don't think it's fair to say that he decided that (2) was the best way forward from (1); it's rather that both were consequences of his ideas outpacing what was realistically feasible: he was a software guy thrust into hardware, coming up with ideas that seemed straightforward and discovering that manufacturing was impossible. Apart from his lack of business/project-planning sense (scope it down; don't aim for 10 digits etc), I think other complicating factors that went into the tragedy of Babbage were:
(1) He kept coming up with new/better ideas and pursued them (basically rabbitholing as mentioned),
(2) He had won a bunch of awards at a young age simply for proposing the Difference Engine (everyone could see it was a good idea and also seem to have expected it to be straightforward to build: a fait accompli) — so in the intervening decades he must have felt like he couldn't give up,
(3) He got entangled with the government. IMO the tragedy here is that he was just middle-class enough to have a romantic idea of government: while the nobles distrusted government/politics as they sort of looked down on it, and the lower classes distrusted government as it had never done anything much for them, he was of just the right class (his father came from humble origins and had made money in banking) to have patriotic notions of government and all that — he wanted to offer his invention to "the nation" (government), and conversely thought the government "ought to" reward him for it, rather than understanding the practical problems of government officials in funding his project. (The government offered to give his invention back to him, but he refused.)
(4) Possibly as a result of these awards, he seems to have been attached to the idea of being a "smart" person (many examples, e.g. the anecdote quoted in one of the appendices in Sydney Padua's wonderful book, where he refused to judge an award along with Faraday — he thought he "deserved" to be the sole judge) — this also probably got in the way of doing practical things rather than pie-in-the-sky "genius-type" ideas.
I think the government entanglement is probably a big part of the story (he asked them basically "I haven't completed the Difference Engine but I have a much better Analytical Engine that I could implement with more money, what should I do?" and they sat indecisively for twenty years!), and it's interesting to read his accounts (in his memoirs) vs others', e.g. Lord Playfair's account from the same appendix:
> "He was in chronic war with the Government because it refused to furnish supplies for his new machine, the ground of refusal being that he never completed the first. […] Babbage always considered himself a badly treated man, and this feeling at last produced an egotism which restricted the numbers of his friends. […] Babbage, who was delighted with the suggestion, but made it a condition that he alone should be appointed, as a reparation for all the neglect of the Government towards his inventions. Even the association of such a distinguished man as Faraday would take away from the recognition which was due to him."
Anyway Padua's book (The Thrilling Adventures of Lovelace and Babbage) seems very well-researched (I admit I haven't read much of it but read all the appendices in detail; would strongly recommend anyway).
Not -necessarily- true. I don't typically expect my C to compile on the first try at 100-plus lines. Some languages do seem to be either so forgiving that they "work" without complaint, or so structured that they guide you away from errors but feel less expressive.
> She thought carefully about how operations could be organized into groups that could be repeated, thereby inventing the loop. She realized how important it was to track the state of variables as they changed, introducing a notation to illustrate those changes. As a programmer myself, I’m startled to see how much of what Lovelace was doing resembles the experience of writing software today.
> So let’s take a closer look at Lovelace’s program. She designed it to calculate the Bernoulli numbers. To understand what those are, we have to go back a couple millennia to the genesis of one of mathematics’ oldest problems.
It does a nice job getting into just enough detail to make you appreciate what she did. If she were alive today, you could imagine her down the hall grinding away on some problem in Rust (I have a feeling she'd have a strong preference for statically typed languages).
However much credit Ada deserves for her programming techniques, to me the thing that always stood out is her ability to see the big picture wrt computation:
> Again, it [Analytical Engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine. Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.
Imagine coming up with this idea in 1842, a whole century before the first actual programmable computers would be built, based solely on a description of a prototype of a mechanical computer. This is hacking extraordinaire.
I agree, this is the thing that stood out to me. There's this kind of amazing leap you have to do to understand how computers do what they do. How does a thing that adds and subtracts numbers paint pictures? Once you grasp that you can transform those things into numbers and then operate on them, the whole world of computation opens up. It's amazing Ada was thinking about this 100 years before computers really existed.
I agree she was a visionary, but take note that by the time she was active, people were already building complex mechanical automata that executed stored programs implemented using cams and gears: https://en.wikipedia.org/wiki/Jaquet-Droz_automata (see also https://en.wikipedia.org/wiki/Maillardet%27s_automaton). I think a small number of very intelligent people would see Babbage's work and Jaquet-Droz and conclude "hmm, if we mash these together with some creativity, it seems reasonable the result would be a programmable automaton capable of painting".
Programmable looms (which used a type of punchcard) such as the Jacquard Loom had existed for a little while - if I recall she specifically referenced this as inspiration for some of her ideas. Not trying to diminish how impressive her work was, but I do believe some form of primitive mechanical computation had already been done for a little while.
The Jacquard loom was indeed well known, and one of Babbage's sources of inspiration, but it is still fundamentally a system designed around a specific task - the cards directly encode operations on hooks.
What Ada is saying here is that, once you have a machine that lets you do generic operations on numbers, you can use it to do all kinds of non-math stuff so long as you can come up with ways to encode other things as numbers (= finite sets of symbols). This was not at all obvious to other people who worked on the Engine, including Babbage himself.
> In 1975, Paul Allen flew out to Albuquerque to demonstrate the BASIC interpreter that he and Bill Gates had written for the Altair microcomputer. Because neither of them had a working Altair, Allen and Gates tested their interpreter using an emulator that they wrote and ran on Harvard’s computer system. The emulator was based on nothing more than the published specifications for the Intel 8080 processor. When Allen finally ran their interpreter on a real Altair—in front of the person he and Gates hoped would buy their software—he had no idea if it would work. But it did.
So, the real unsung heroes here are the Intel engineers who wrote a spec that was so exact that software running on an emulator written based just on the spec would also run without a hitch on the actual hardware?
In 1976 my first commercial programming job was converting a 8008 emulator written in Fortran to work on a Data General mini as an 8080 emulator, so another programmer writing 8080 firmware for a plotter could debug his code. The emulator source code originated with Intel as something called INTERP/8 8008, and I believe that's what Allen and Gates also used, as suggested in other online posts.
Sure, but it is spoken about in the abstract. I enjoyed the article, but why not at least include "some" of the actual notes she wrote or at least a screenshot?
The difference is in my post it is one of the featured things. In the article that claims to show what the program actually did it is buried in the text.
I'm reminded of a high school programming class where a project partner named variables with the most crude and lewd words he could imagine. Not that I was prudish, but he unsurprisingly never remembered what "butts" was for and somehow never figured out why he kept getting confused by his own code.
For school once I had to write a program called "Poetry Writer". Basically it would take input text, build a linked list for each word (taking into account the preceding and following words), and output a randomized version of the poem.
I HAD to of course name all of my variables as poets and poems.
So you'd have "Edgar_Allen gets new The_Road_Not_Taken". All was fine during my tests, but for some reason it did not interface well with the code the teacher provided to do the actual input, so I had to take it to the TA for help.
I then learnt why descriptive names, not just comments, are helpful. Although, the TA was impressed by my selections XD
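Something like this minimal Python sketch is what that kind of poetry writer boils down to (a dict of successor lists stands in for the linked list the assignment actually used, and the variable names are just in the spirit of the story):

```python
import random

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = {}
    for current, following in zip(words, words[1:]):
        chain.setdefault(current, []).append(following)
    return chain

def write_poem(chain, start, length=12):
    """Walk the chain, picking a random successor at each step."""
    poem = [start]
    for _ in range(length - 1):
        successors = chain.get(poem[-1])
        if not successors:
            break
        poem.append(random.choice(successors))
    return " ".join(poem)

Edgar_Allen = build_chain("once upon a midnight dreary while I pondered weak and weary")
The_Road_Not_Taken = write_poem(Edgar_Allen, "once")
print(The_Road_Not_Taken)
```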
...or worked with any mathematicians/physicists/engineers who program. As soon as I saw that, I thought "typical quant".
Like my dad (a chemical engineer) learned to program in FORTRAN, which used to insist variable names were 1 letter and up to 2 digits. He later learned Basic, but his code was still spiritually FORTRAN, so the one-letter-two-digits thing stuck. I thought that was just him, but then much later I went to work on Wall St and had to work with quants who were copying code out of "Numerical Recipes", and it was exactly the same, just now in C.
I helped port a physicist's assembly code long ago; variables were named alphabetically in the order encountered in the code, e.g. A, B, ...A1, ..., AA1, etc. up to ZZ23.
Still amazed that the nearly-incomprehensible code (and the port) worked
When I was a kid I used to think that having variables A... without any gaps in the letters meant that I did a good job of thinking out the program in advance.
Not sure which Fortran this refers to. I never used Fortran I, but as I understand it, names were up to 6 characters long, first character alphabetic; names with initial letter A-H and O-Z were REAL, I-N INTEGER (Fortran II added declarations to override the defaults). Dartmouth Basic restricted names to a single letter and an optional digit.
Incidentally, the various Autocode languages of the 1950s in Britain had 1-character variable names.
That’s super interesting. It would have been some mainframe fortran from the 1970s because I remember him bringing me and my brother as children into a university computer lab where he had weaseled some time on a mainframe so he could punch cards. He told me the variable naming thing (and was prone to exaggeration) so it might not even be true - I can’t ask him now as he’s writing pseudo fortran implementations of Newton-Raphson with short variable names in the great computer lab in the sky at the moment.
I guess because mathematical formulas usually use single letters for symbols. It is so common that you end up using several different alphabets, lower/upper case, and even calligraphic variations. Of course it doesn't scale when you need thousands of symbols and your variables don't have well-established meanings like "magnetic field" or "pressure". However, they are used to it, and it's hard to break some mental models after several years of using them every day. For good or bad, some scientific computer languages (like Julia) encourage you to use the Unicode alphabet to align your code with your paper/book.
That naming convention makes perfect sense to the mathematician, so why not? It's why we use `for (int i = 0; i < n; i++)` in for loops; it's the index of the mathematical sigma sum, with the same naming convention.
A loop counter doesn't carry much semantic weight so it gets a short name. Doing that for important things that deserve a descriptive name is the problem. Maybe passable with literate programming, but even Knuth's code is pretty inscrutable due to transclusions everywhere.
The question to me always was: does it make sense in the way that it is intuitively understandable, or does it only make sense if it was drilled into you long enough?
While the Americans first encountered the "DO loops" of FORTRAN (1954), the "for loops" of ALGOL are derived from the earlier European use of "for loops" in programming (actually "für loops", Heinz Rutishauser, 1951), which in turn had been preceded by the use in mathematics of the "for-all" quantifier (Gerhard Gentzen, 1935), which includes an implicit loop. In many more recent programming languages, starting with Alphard in 1974, the preferred syntax for the most frequent loops is essentially identical to the mathematical notation from 1935, i.e. "forall X in A do ...".
This insight was all the more remarkable given that Menabrea saw the Analytical Engine primarily as a tool for automating “long and arid computation,” which would free up the intellectual capacities of brilliant scientists for more advanced thinking.
It's funny how enduring this trope about automation is. The same thing is said now of LLMs.
Just funny how the same thing is said while the goalposts keep moving, buttressed by this vague, unspecified notion of "the real creative work" or "the things only humans are good at".
And with current LLMs, and the spectre of even greater automated intelligence, our sphere of unique ability shrinks.
A bit of an aside I have been wondering about is what people called her in her own time. Her name was Augusta Ada King, and she was the Countess of Lovelace. Was it common back then to shorten the title into a last name, or is it only something we have been doing in more recent time?
For the title holder, in this case the Earl of Lovelace, they are often referred to ("styled") simply by the place name. So after William King-Noel was created Earl of Lovelace he was styled "Lovelace". She would have been styled "Lady Lovelace" in society, and "Countess of Lovelace" in formal contexts.
I rabbitholed with that years ago while playing around with Python, probabilities and infinity. That "thing" was discovered by a religious guy who thought it had something to do with God, as a series that created something from nothing, and harassed a famous Calculus mathematician for years to study it. I found it's related to Thomson's Lamp, and I'm convinced it hides the key to a new kind of Math, beyond quantum computing: supertasks.
The deepest I went into the problem was classifying those supertasks: Grandi's, Thomson's, the sum of all natural numbers, and others; they form patterns.
> The Difference Engine was not a computer, because all it did was add and subtract.
The definition of computer is pretty grey for the pre-digital era, and it wasn't Turing complete, but is it actually controversial whether it was a computer?
The Difference Engine basically implemented one algorithm in hardware, while the Analytical Engine was supposed to run a program. I believe that could make the latter one a computer.
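For the curious, the "one algorithm" was the method of finite differences: once the initial differences are set up, every further table value comes from repeated addition alone. A rough Python sketch of the idea (the setup values are illustrative, not a claim about what the real hardware tabulated):

```python
def tabulate(initial_differences, steps):
    """Tabulate a polynomial by repeated addition, the way the Difference
    Engine did: each column is updated by adding in the column to its
    right, and the leftmost column holds the successive table values."""
    cols = list(initial_differences)   # [f(0), first difference, second difference, ...]
    table = [cols[0]]
    for _ in range(steps):
        for i in range(len(cols) - 1):
            cols[i] += cols[i + 1]     # left to right, so each update uses the old neighbour
        table.append(cols[0])
    return table

# x^2 + x + 41 for x = 0..9: value 41, first difference 2, second difference 2.
print(tabulate([41, 2, 2], 9))
# [41, 43, 47, 53, 61, 71, 83, 97, 113, 131]
```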
The analytical engine wasn't a stored program computer. It most closely follows the Harvard architecture, with instructions read from punch card memory. The analytical engine's claim to fame is that it was the first Turing complete computer to be designed.
A stored program computer refers to the computer architecture where program instructions and data are stored in the same memory. This is also referred to as the Von Neumann architecture.
In contrast, a lot of early computers were built with separate instruction memory like punch cards. This is called the Harvard Architecture. If the instructions were immutable, which they usually were, then things like modifying the program at runtime were not possible.
Concrete examples of this difference are the Harvard Mk 1 and the Manchester Mk 1: the former is a Harvard architecture computer and the latter a stored program computer, i.e. a von Neumann architecture.
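A toy way to see the difference (a made-up three-field instruction format, nothing to do with either Mk 1): in the Harvard-style version the program lives in its own read-only store, while in the stored-program version instructions sit in the same mutable memory as the data and can be overwritten at runtime.

```python
def run_harvard(program, data):
    """Instructions come from a separate, read-only store (a tuple here),
    so the program cannot rewrite itself."""
    for op, dst, src in program:
        if op == "add":
            data[dst] += data[src]
    return data

def run_stored_program(memory, code_start, code_len):
    """Instructions and data share one mutable memory, so an instruction
    can overwrite a later instruction before it runs."""
    pc = code_start
    while pc < code_start + code_len:
        op, dst, src = memory[pc]
        if op == "add":
            memory[dst] += memory[src]
        elif op == "patch":            # self-modification: copy a word over an instruction slot
            memory[dst] = memory[src]
        pc += 1
    return memory

print(run_harvard((("add", 0, 1),), [2, 3]))      # [5, 3]

memory = [2, 3, ("add", 0, 1), ("patch", 4, 5), ("add", 0, 1), ("add", 0, 0)]
# The "patch" rewrites the following "add 0, 1" into "add 0, 0" before it executes,
# so the result is 10 rather than the 8 you'd get without self-modification.
print(run_stored_program(memory, code_start=2, code_len=3)[:2])   # [10, 3]
```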
"Babbage architecture" would have been much more accurate than "Harvard architecture", because Howard H. Aiken, the designer of Harvard Mark I, has been explicitly inspired by the work of Babbage into making his automatic computer at Harvard, which was intended as a modern implementation of what Babbage had failed to build.
The "Harvard architecture" had nothing to do with Harvard and it was not a novel thing. Having separate memories for programs and for data has been the standard structure for all programmable computers that have been made before the end of WWII, in all countries, and the methods for storing computer programs had been derived from those used in programmable looms and in the much earlier music boxes, which are the earliest programmable sequencers. Like the computer keyboards have a history of millennia since their origin in musical instruments (i.e. organs), the computer program memories have also their origin in (automatic) musical instruments, more than a millennium ago.
That the Difference Engine and Analytical Engine belong on the timeline of computing history isn't particularly controversial, but the Difference Engine itself I've never seen anyone try to claim was a computer (it's a mechanical calculator)--the Wikipedia page doesn't even try to link it directly to the history of computers, you have to go to the Analytical Engine to see the Difference Engine's place in the "history of computing" timeline.
Probably not. As stated in TFA, the controversy is because Lovelace was a woman and some people think propping her up is basically a DEI retcon of history; the rest of us don't care. But I don't think it has anything whatsoever to do with actual computers.
> All but one of the programs cited in her notes had been prepared by Babbage from three to seven years earlier. The exception was prepared by Babbage for her, although she did detect a "bug" in it. Not only is there no evidence that Ada ever prepared a program for the Analytical Engine, but her correspondence with Babbage shows that she did not have the knowledge to do so.
> Bruce Collier wrote that Lovelace "made a considerable contribution to publicizing the Analytical Engine, but there is no evidence that she advanced the design or theory of it in any way"
The common claims are that Ada Lovelace was the first person to write a computer program, or that she was actually the primary driver in developing the analytical engine. Both such claims fall into the area "DEI retcon" as you choose to phrase it.
Although on a more pedantic note, Babbage wasn't the first person to program a computer either. Computers that aren't Turing complete are still computers. The Jacquard loom is one such example, and unlike the analytical engine it was actually built and put to practical use.
It's always been strange to me, given that Lovelace's program was a note in some documents that she was preparing under Babbage's directions as a scribe of sorts, that so many people assume it was her work and not Babbage's. Based on other details of her life she was clearly a very intelligent and talented woman, but the obsession with attributing the first ever computer program to her seems entirely ideologically motivated.
> Lovelace's program was a note in some documents that she was preparing under Babbage's directions as a scribe of sorts
It was not the case. She was translating someone else’s article, and it does not seem she did it under direction or supervision.
> so many people assume it was her work and not Babbage's.
What she did was quite common. She had ideas about the thing she was translating and thus added them as notes. All fairly straightforward.
> the obsession with attributing the first ever computer program to her seems entirely ideologically motivated.
To me the obsession that some people (not you, but some definitely do and use the same arguments) have with bringing her down is entirely ideologically motivated. She was recognised for a long time, and while there are discussions about exactly who was first and such (as there always are when discussing History), her role was mostly uncontroversial. Also bear in mind that calculator and then programmer were women’s jobs until some point in the 2nd half of the 20th century. Having a woman write code was not controversial before the establishment of the bro culture.
> To me the obsession that some people (not you, but some definitely do and use the same arguments) have with bringing her down is entirely ideologically motivated
There really aren't more of those than there are people trying to give more credit to those women than there is evidence for. In the end there is foul play from both sides, but currently one side is dominating academia, so there is much more need to argue against that side than the other.
If you believe that all arguments must be evenly matched, to the point that you have an obligation to bolster the weaker side, you’re signing up for supporting some despicable ideas.
I understand and support steel-manning arguments in order to test one’s own convictions. But applied in actual debates with actual consequences, at some point you end up as the kneejerk contrarian that nobody takes seriously, and that undermines the truth seeking aspect of discussion.
Seriously. As the article states, while everyone else was like "Wow cool we will make a machine that makes calculating things easier"
Meanwhile Ada over here going "Oh shit this can do literally anything that can be done by steps of math. Someday machines following that formula will make music"
Ada is not the first programmer. Ada is the first computer scientist. She understood the ramifications of what we would eventually call "Turing complete" systems, and understood the value of "general purpose" in a general purpose computer, and seemingly understood that more than just numbers could be represented and calculated in a computer.
Funny indeed. Ada Lovelace has been persistently recognised for a very long time, but has never been held up as a suffragette-type martyr; by all accounts, she enjoyed herself out on the bleeding edge, and is still making people uncomfortable 150 years after not fitting into any of the stereotypes of her time.
It's clear from the footnotes that whatever crowd was around Babbage and Lovelace grasped the possibilities.
Also interesting is that during the Apollo moon missions, the memory modules for the guidance computers were crafted by hand by some of the last lace makers to survive the introduction of the Jacquard looms and their punch cards.
The parent asks about the Difference Engine. Lovelace wrote about the (more powerful) Analytical Engine. Nobody is denying the Analytical engine was a computer.
I'm not sure I have a direct answer, but I agree something shouldn't be called a computer if it just does a one-shot, fixed-length calculation before requiring further human intervention. To be a "computer", and be associated with that general conceptspace, it should be Turing-complete and thus capable of running arbitrarily long (up to the limits of memory and hardware rot).
Separate comment to address a subtlety that comes up a lot:
Often you'll hear about fully homomorphic encryption (FHE) being Turing-complete. But you can't actually have a Turing complete system with variable-run-time loops that's homomorphically encrypted, because that leaks information about the inputs.
When they say FHE is Turing-complete, what they mean is that you can take an arbitrary program requiring Turing completeness, then time-bound it, unroll it into a fixed-length circuit, and run that homomorphically. Since you can keep upping the time bound, you can compute any function. So the system that translates your programs into those circuits, with no limit on the bound you set, could then be called Turing-complete -- but you couldn't say that about any of those circuits individually.
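A toy illustration of that unrolling in plain Python (nothing here is encrypted; it only shows the control-flow shape): the data-dependent while loop becomes a fixed number of steps, each of which executes unconditionally and uses an arithmetic select instead of a branch, so the iteration count no longer depends on the input.

```python
def collatz_steps_looped(n):
    """Ordinary version: the number of iterations depends on the input."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def collatz_steps_unrolled(n, bound):
    """Time-bounded version: always runs exactly `bound` iterations.
    `done` is a 0/1 value and the branch is replaced by arithmetic
    selection; in real FHE the comparison and the selection would
    themselves be homomorphic circuit gates, not Python expressions."""
    steps = 0
    for _ in range(bound):
        done = 1 if n == 1 else 0
        nxt = n // 2 if n % 2 == 0 else 3 * n + 1
        n = done * n + (1 - done) * nxt       # select(done, n, nxt) without branching
        steps += 1 - done
    return steps

print(collatz_steps_looped(27))           # some fixed answer for this input
print(collatz_steps_unrolled(27, 200))    # same answer, but always does 200 iterations
```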
I don't think there is anything controversial here: the Difference Engine was a calculator that could only do a predefined set of hardwired computations, while the Analytical Engine was a true Turing-complete computer.
Is an early 20th century mechanical desk calculator a computer? There is no consensus on definition but for me, a computer follows a program. Maybe even only one fixed program. But a program. If there is no stepping through a program it is not a computer.
Does the iterative method used by the difference engine constitute a program?
Regarding the dispute of whether Ada wrote the first program.
On a related topic, I believe that credit for the invention of hashing should go to certain scholars in China.
The invention of a data structure and algorithm must not be confused with its first implementation in electronic computers.
For at least a century before the computing revolution, hashing was used in China to find characters in dictionaries.
First, a character is examined, and reduced to a numeric code according to steps that constitute a hashing function. For instance the Four Corner Code.
The code is then used to find a page or section of the dictionary by direct access: the mapping between codes and dictionary sections is straightforward. The character is then found within a small bucket of collisions by a linear search.
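In code, the lookup scheme described above is recognisably a bucketed hash table with a linear search inside each bucket. A rough Python sketch, with a stand-in code function (computing a real Four Corner Code is beside the point here):

```python
def toy_code(char, sections):
    """Stand-in for the Four Corner Code: any consistent character-to-number rule works."""
    return ord(char) % sections

class CharDictionary:
    def __init__(self, sections=100):
        self.sections = [[] for _ in range(sections)]

    def add(self, char, definition):
        self.sections[toy_code(char, len(self.sections))].append((char, definition))

    def lookup(self, char):
        bucket = self.sections[toy_code(char, len(self.sections))]   # direct access to a section
        for entry_char, definition in bucket:                        # short linear search among collisions
            if entry_char == char:
                return definition
        return None

d = CharDictionary()
d.add("火", "fire")
d.add("水", "water")
print(d.lookup("火"))   # fire
```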
She certainly didn't write the first code, unless you think that Babbage could have worked for years on his machine without ever writing any code. However, the first published program designed to work on what is considered a computer is under her name. As their names suggest, algorithms and code are much older.
The Four Corner Code is not really a hash, just an index. Do you have evidence of the use of unnatural, random-looking "hash" functions for the sake of an even distribution? That's the key insight that makes a hash table what it is.
The 4cc gets a decent distribution in that you don't have to examine very many collisions to find the character you're looking for (or conclude it's not found).
A poor distribution is an obvious bug in hashing; if you don't suffer from that bug, you don't have to do anything. If you have the bug, it's obvious you have to change your hash calculation to avoid it. The developers of 4cc may have struggled with bugs where they had buckets that were too large for efficient searching.
Convoluted hashing functions are not always used for hash tables. When the hash keys are pointers (e.g. object identities themselves are used as keys to associate objects with additional properties), sophisticated hashing functions are not needed; e.g. simply extracting a few bits of the pointer, avoiding the lowest bits (which might all be zero due to alignment!).
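A quick sketch of that in Python, leaning on the fact that CPython's id() happens to be an address-like value (the shift amount and bucket count are arbitrary choices for illustration):

```python
NBUCKETS = 64   # power of two so a mask can replace the modulo

def identity_bucket(obj):
    # Skip the low bits of the address-like id(), which tend to be the same
    # for every object because of allocator alignment, and keep just enough
    # higher bits to pick a bucket. No mixing function is needed when the
    # key is the object's identity itself.
    return (id(obj) >> 4) & (NBUCKETS - 1)

class Tag:
    pass

objects = [Tag() for _ in range(8)]
print([identity_bucket(o) for o in objects])   # eight small bucket indices
```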
I believe that the 4cc dictionaries hit upon the key insights of hashing: calculating a numeric key from an object which then directly identifies a small search bucket.
The Four Corner Code abandons semantics like radicals. Codes are assigned according to certain stroke patterns in the four quadrants of the character, without regard for their semantic role. The inventors hit a key insight there: that any way of calculating a hash code is valid as long as it can be consistently followed, and leads to short searches. The function can look at meaningless fragments of the object (exactly like when we take the middle bits of a pointer). A character's etymology need not play any role in how it is digested. Whereas in the radical methods, you have to know that for instance 火 and 灬 both mean "fire" and are understood as the same radical #86. So in some sense, the predecessor methods like radical indexing may have been almost-hashing. It's hard to argue that 4cc isn't.
> A poor distribution is an obvious bug in hashing; if you don't suffer from that bug, you don't have to do anything.
Right, but if you don't have and solve that problem then what you have made isn't a hash table. Often you don't need a hash table - if you have something that already has a nice distribution, you can use a simpler data structure (like, IDK, a radix tree) and get all the properties you wanted.
> The inventors hit a key insight there: that any way of calculating a hash code is valid as long as it can be consistently followed, and leads to short searches.
If they did, then I would agree you're right. But do we know that they did? Or might they have seen it as just a different way of considering radicals? (E.g. did they ever try indexing anything else that way, not just characters?)
Note that a radix tree and hash table are not mutually exclusive. A radix tree is a way of representing a sparse table. That could be used as a hash table.
There's a trade off there because if the table is very sparse, and we're using hashing, we could just shrink the table so as not to have it so sparse, and then just make it a regular array.
The key aspect of the four corner code is that it mashes together completely unrelated characters. There's no meaningful index to it. It's not easy to look at a four corner code to figure out the list of characters it aliases for.
Great article. Does anyone have a breakdown of the programs Babbage wrote? It always seemed odd that Lovelace was the first programmer, suggesting Babbage created a machine without thinking about how it could be used.
Lovelace seems to be the first to use loops. Babbage clearly created something first, but without any looping his programs were by nature much less complex, and so you could make an argument they don't count as programs ("hello world" would only count as a program because the library print function is likely to have some loop in it, by this argument). Of course not all of Babbage's programs survive; it is entirely possible Babbage did have loops in some earlier programs that Ada knew of when she wrote hers. Ada had much correspondence with Babbage, so it is possible she wrote programs before the ones we know of, but those are lost as well; who knows. Bottom line, Ada and Babbage were working together (though countries apart) and so would have been thinking about what this new thing could do while it was still in the design phase.
You can make whatever argument you want about first programmer. Ada was a smart person who clearly understood what this machine could do and had visions of the future of this machine. Even if you decide she wasn't actually the first programmer she is worth knowing about as an early pioneer.
Ada was clearly incredibly intelligent. I can't source the original texts for Babbage's program, but it does seem that he was aware of loops, and might have implemented them:
"In the absence of other evi-
dence I have had to adopt the minimal default assumption that both the operation and variable cards can only be turned back-
ward as is necessary to implement the loops used in Babbage’s sample programs cited in Ada Lovelace’s notes (originals in L series notations)"
Babbage also wrote more than twenty programs that he never published.[19] So it’s not quite accurate to say that Lovelace wrote or published the first program, though
This is excellent and makes a good case for Ada being the first programmer, many thanks for sharing. I've also stumbled across the science museum Babbage archive: