Was the Internet created to survive a nuclear strike? (2022) (siliconfolklore.com)
224 points by edward 84 days ago | 141 comments



I don't think that the Internet was created to survive a nuclear strike, but I think we can say that it was _designed_ to survive a nuclear strike; that was one of the reasons that packet switching was invented (compared to the traditional, at the time, circuit switching).

The idea of packet switching as a way to make a communication network more robust came to Paul Baran a few years earlier, and while it was not the basis of Arpanet, it probably influenced it. Wikipedia [1] is not the best of sources, but it sums it up nicely:

"After joining the RAND Corporation in 1959, Baran took on the task of designing a "survivable" communications system that could maintain communication between end points in the face of damage from nuclear weapons during the Cold War. Then, most American military communications used high-frequency connections, which could be put out of action for many hours by a nuclear attack.

[...]

After proving survivability, Baran and his team needed to show proof of concept for that design so that it could be built. [...] The result was one of the first store-and-forward data layer switching protocols, a link-state/distance vector routing protocol, and an unproved connection-oriented transport protocol. Explicit detail of the designs can be found in the complete series of reports On Distributed Communications, published by RAND in 1964.

[...]

The Distributed Network that Baran introduced was intended to route around damage.

[...]

In 1969, when the US Advanced Research Projects Agency (ARPA) started developing the idea of an internetworked set of terminals to share computing resources, the reference materials that they considered included Baran and the RAND Corporation's "On Distributed Communications" volumes. The resiliency of a packet-switched network that uses link-state routing protocols, which are used on the Internet, stems in some part from the research to develop a network that could survive a nuclear attack."

[1] https://en.wikipedia.org/wiki/Paul_Baran


I've noticed that the closer I am to historical events, the more wrong the reporting of those events tends to be. I have a neighbor who worked at RAND in the 60s and will ask him about this next time we meet.

Worth noting perhaps that many (all?) technical innovations are the result of some underlying technology maturing to the point that it can be applied to a problem. In this case, I bet that nobody liked the fragility and brittleness of circuit-switched networking, but in order to make a packet-switched network you need small, fast computers that are cheap enough to deploy as network nodes. These appeared: minicomputers. The first ARPANet nodes were minicomputers running routing software. In fact the Internet used regular computers as routers into near-modern history (IBM RISC machines iirc were deployed at the DS3 upgrade). So PSN is the result of a) people sitting around wishing they could have a PSN, and b) the technology to actually realize that becoming practical. There's no eureka moment.


It’s impossible to know everything that’s happening at the same time. So while ARPANET was the 2nd packet-switched network, behind NPL, a journalist wouldn’t have a clue.

The original conception of Message Blocks was for routing around a damaged network, but the term "packet switching" was actually coined for a means of letting multiple users share a single connection. ARPANET included many ideas from NPL but was also its own thing.

So who invented Packet Switching depends on what parts of it you consider critical.


> I've noticed that the closer I am to historical events, the more wrong the reporting of those events tends to be.

That is true not just for historical events, but for anything you are familiar with.

Michael Crichton coined the term "Gell-Mann Amnesia Effect":

https://theportal.wiki/wiki/The_Gell-Mann_Amnesia_Effect


> IBM RISC machines iirc were deployed at the DS3 upgrade

You recall correctly. Info/pictures here[0].

0 - https://www.rcsri.org/collection/nsfnet-t3/


Interesting, thanks for the link. Do you know anything more about the router cards? It says they had Intel i960s that ran AIX, but I wonder if it was just the RS/6000 that ran AIX, and the boards ran some custom firmware.


There isn't much published information about these systems to my knowledge. This[0] report from ANS-Net[1], though, indicates to me that AIX was very much directly involved in the forwarding part of the router architecture:

  AIX Kernel Radix Trie problem

  3+ overlapping routes with different masks

    abc.def/8
    abc.def.0/10
    abc.def.0/12

  Fixed in AIX build coming within two weeks
While it would certainly be possible (and is a common router architecture now) to have a central forwarding database and distributed line cards, it doesn't really make any sense to use a kernel trie to store routes for such an architecture, and if the central RS/6000 was switching every packet, it wouldn't, at the time, have been able to support a single T3, let alone the kind of port density described in the original article.
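For anyone unfamiliar with the bug class those slides describe, here's a toy longest-prefix-match sketch in Python, with invented prefixes standing in for the slide's abc.def ones and a simple linear scan standing in for the kernel's radix trie:

  import ipaddress

  # Overlapping routes with different mask lengths, as in the slide.
  routes = {
      ipaddress.ip_network("10.0.0.0/8"): "via A",
      ipaddress.ip_network("10.64.0.0/10"): "via B",
      ipaddress.ip_network("10.64.0.0/12"): "via C",
  }

  def lookup(addr):
      # Longest-prefix match: the most specific covering route must win.
      addr = ipaddress.ip_address(addr)
      best = None
      for net, nexthop in routes.items():
          if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
              best = (net, nexthop)
      return best[1] if best else "no route"

  print(lookup("10.64.1.1"))   # via C: the /12 is the most specific match
  print(lookup("10.127.0.1"))  # via B: inside the /10 but not the /12
  print(lookup("10.1.2.3"))    # via A: only the /8 covers it

Getting exactly this selection right, for every combination of overlapping masks, is what the AIX kernel trie apparently fumbled.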

So, I guess in summary, I can't point you to definitive proof that those boards were running AIX, but it seems extremely likely to me.

0 - https://www.ietf.org/proceedings/29/ops/netstat.becker.slide...

1 - A network operator that was subsequently purchased by AOL.


The i960 cards ran an independent AIX kernel as well, housed in a POWER RS6k running AIX.


Not really knowing anything about software portability between different computer architectures or the intricacies of high-speed networking, at first I thought that was kind of strange.

But I checked, and it looks like by the time these cards were created, AIX had already been ported to IBM's ROMP, 370, and POWER processors, plus the 386, so AIX would seem to be fairly portable. I guess it would be easier to port it to the i960 than to do something from scratch. From marcus0x62's link, it looks like very few (< 100?) of these boards were made.


AIX was for a time IBM's brand name for UNIX environments, so it covers some rather distinct products. The version that runs on ROMP (one of the earlier commercial microkernels) is very distinct from the S/370 and i386 versions (which were, for a time, similar to each other; a later ESA/390 version was not at all), and both are very far removed from the version that continues today on POWER. Sauer was heavily involved in the stuff before POWER and is authoritative on the early history: https://notes.technologists.com/notes/2017/03/08/lets-start-.... I can extrapolate a bit more if necessary.

I am unsure of what exact kernel strain wound up on the i960 cards, suffice to say any of them could have been called into this service since a well contained UNIX port is something that can be done by relatively few people.

IBM provided the IBM RT with special cards as the T1 implementation for NSFNet, and then the RS/6000s were used later and especially for higher speed T3 and FDDI nodes.

While the original NSFnet nodes and cards would be rare indeed, IBM commercialized this all as the 6611 Network Processor family, which ranged from tiny PowerPC-only machines (based on the 7011) to custom chassis complete with the i960 cards. These sold from the early 1990s to the mid-to-late 1990s and were not uncommon. Vanilla RS/6000s were also common in that era as networking multitools and were plenty good at it.

* https://www.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ss...

* http://ps-2.kev009.com/basil.holloway/ALL%20PDF/sg245000.pdf

* http://www.bitsavers.org/pdf/ibm/6611/


Appreciate the information/links; that's why I like Hacker News. Now, instead of doing something boring, like watching the Olympics, I'm going to read up on some old networking processors.


For those who are interested in that sort of thing, all 11 of Baran's publications from January of 1964, outlining the design goals and principles of packet-switched networks, are available from RAND as downloadable PDFs:

<https://www.rand.org/about/history/baran.html>

The first document introduces the basic problem:

Let us consider the synthesis of a communication network which will allow several hundred major communications stations to talk with one another after an enemy attack. As a criterion of survivability we elect to use the percentage of stations both surviving the physical attack and remaining in electrical connection with the largest single group of surviving stations. This criterion is chosen as a conservative measure of the ability of the surviving stations to operate together as a coherent entity after the attack. This means that small groups of stations isolated from the single largest group are considered to be ineffective.

<https://www.rand.org/pubs/research_memoranda/RM3420.html>

"Attack" isn't defined, but it's clear that resilience against broad assaults was a key consideration from the very beginning.


> "Attack" isn't defined, but it's clear that resilience against broad assaults was a key consideration from the very beginning.

It was a consideration for Baran's work, but not for ARPAnet:

From chapter two of Wizards:

> Roberts also learned from Scantlebury, for the first time, of the work that had been done by Paul Baran at RAND a few years earlier. When Roberts returned to Washington, he found the RAND reports, which had actually been collecting dust in the Information Processing Techniques Office for months, and studied them. Roberts was designing this experimental network not with survivable communications as his main—or even secondary—concern. Nuclear war scenarios, and command and control issues, weren’t high on Roberts’s agenda. But Baran’s insights into data communications intrigued him nonetheless, and in early 1968 he met with Baran. After that, Baran became something of an informal consultant to the group Roberts assembled to design the network. […]

* https://www.goodreads.com/book/show/281818.Where_Wizards_Sta...

* https://en.wikipedia.org/wiki/Larry_Roberts_(computer_scient...


Influences can be manifold and subtle. What's clear is that in 1964, as packet-switched networks were first being considered, resilience against "attack", however defined, was a key consideration for one significant set of innovators. I suspect some of that logic was incorporated into ARPANET's eventual design, whether the nominal head designer was aware of this or not.

One underappreciated aspect of complex projects is how different goals and intentions may exist simultaneously, and how much active participants, even at a high level, may not be aware of this. A friend had a uni professor who'd been part of the Glomar Explorer scientific mission which served as a cover for Project Azorian, the clandestine recovery of a Soviet nuclear submarine. The professor was unaware of the actual mission until well after it had concluded, when he read about it in the newspaper.

<https://en.wikipedia.org/wiki/Project_Azorian>

I tend to strongly discount personal-experience denials of covert or secondary mission roles by people otherwise connected with an activity (e.g., company employees, contractors, government workers, etc.). It's not that these are never, or even mostly, incorrect. It's simply that for a large enough project, some secondary goal might well exist without the conscious awareness of many of those involved.

And again, in the case of Baran, we have the receipts.


> but I think we can say that it was _designed_ to survive a nuclear strike

There is a lot more to the design of 'the internet' than the selection of a packet-switched protocol. The early boxes were not hardened in any way and were intended to support computer timesharing amongst academic researchers. The DoD's C2 providers themselves rejected the concept of a decentralised packet-switched network because 'it would never work', which is why the relevant theoretical papers were sitting on the shelf and available when ARPANET was designed.


> but I think we can say that it was _designed_ to survive a nuclear strike

On what basis? What is the distinction between being "created" to survive a nuclear strike, and being "designed" to do so?

> that was one of the reasons that packet switching was invented (compared to the traditional, at the time, circuit switching).

Yes, but I don't think it's a relevant one. Baran's papers kinda-sorta-maybe had some influence on ARPANET, but ARPANET mostly got packet-switching (and certainly the term "packet") from Donald Davies. If you look at the actual layout of ARPANET it wasn't very survivable (not much redundancy in the links) [0], compared to Baran's proposal [1]. Internetworking and "the Internet" as we know it came much later and was way beyond the point where Baran had any influence.

[0]: https://commons.wikimedia.org/wiki/Category:ARPANET_maps

[1]: https://personalpages.manchester.ac.uk/staff/m.dodge/cyberge...


> I don't think that the Internet was created to survive a nuclear strike, but I think we can say that it was _designed_ to survive a nuclear strike; that was one of the reasons that packet switching was invented (compared to the traditional, at the time, circuit switching).

That cannot be said unless you can cite sources saying so. It's been a little while since I read Wizards, but I don't remember any mention of nuclear war until Baran's work at RAND is mentioned, which is a couple of chapters in (IIRC).

For Licklider et al it was all about research, collaboration, and resource sharing.

> The idea of packet switching as a way to make a communication network more robust came to Paul Baran a few years earlier, and while it was not the basis of Arpanet, it probably influenced it.

Baran's work was used in things like queuing theory, but work was already underway on ARPAnet before Baran (and Davies in the UK) was roped in. In fact it was Davies who pointed out Baran's work to the ARPAnet folks:

* https://en.wikipedia.org/wiki/Donald_Davies

This is all covered in Where Wizards Stay Up Late.


1) Robustness in the face of nuclear disaster was one of the drivers of packet switching.

2) Not the only one, though.

3) It was such a good idea that it grew a life of its own, and no one talked about nuclear war anymore.


1) Only perhaps for Baran. Didn't enter into the equation for, e.g., Davies in the UK.

From the Prologue of Wizards:

> Bob Taylor, the director of a corporate research facility in Silicon Valley, had come to the party for old times' sake, but he was also on a personal mission to correct an inaccuracy of long standing. Rumors had persisted for years that the ARPANET had been built to protect national security in the face of a nuclear attack. It was a myth that had gone unchallenged long enough to become widely accepted as fact.

> Taylor had been the young director of the office within the Defense Department’s Advanced Research Projects Agency overseeing computer research, and he was the one who had started the ARPANET. The project had embodied the most peaceful intentions—to link computers at scientific laboratories across the country so that researchers might share computer resources. Taylor knew the ARPANET and its progeny, the Internet, had nothing to do with supporting or surviving war—never did. Yet he felt fairly alone in carrying that knowledge.

> Lately, the mainstream press had picked up the grim myth of a nuclear survival scenario and had presented it as an established truth. When Time magazine committed the error, Taylor wrote a letter to the editor, but the magazine didn’t print it. The effort to set the record straight was like chasing the wind; Taylor was beginning to feel like a crank.

* https://www.goodreads.com/book/show/281818.Where_Wizards_Sta...

I would think that Taylor of all people would know:

* https://en.wikipedia.org/wiki/Robert_Taylor_(computer_scient...

From chapter two of Wizards:

> Roberts also learned from Scantlebury, for the first time, of the work that had been done by Paul Baran at RAND a few years earlier. When Roberts returned to Washington, he found the RAND reports, which had actually been collecting dust in the Information Processing Techniques Office for months, and studied them. Roberts was designing this experimental network not with survivable communications as his main—or even secondary—concern. Nuclear war scenarios, and command and control issues, weren’t high on Roberts’s agenda. But Baran’s insights into data communications intrigued him nonetheless, and in early 1968 he met with Baran. After that, Baran became something of an informal consultant to the group Roberts assembled to design the network. […]

* https://en.wikipedia.org/wiki/Larry_Roberts_(computer_scient...


The Wizards book came along 20+ years after everything happened, so all Hafner and Lyon could do was interview a few people.

I said it was ONE of the motivations; not the only one. And Bob Taylor was only one of the people involved; there were lots of others. Why a bureaucracy decides to support something is pretty much impenetrable and lost to history.


> The Wizards book came along 20+ years after everything happened, so all Hafner and Lyon could do was interview a few people.

Are there any interviews or documentation from ARPA or the early days of ARPAnet that do say it was about nuclear war survival?

Licklider, Taylor, Roberts, Davies in the UK: no one else thinking about things in that way from all the reports and interviews I've seen. The only one that perhaps seems to have had an interest in it seems to have been Baran, and he came in later when the ball was already rolling.

> I said it was ONE of the motivations; not the only one.

Okay. Where are the documents and/or interviews of those involved in the decision-making and/or implementation process stating it was one of the motivations?


I don't know why you're arguing.

Obviously we're talking about a period where documentation is sparse or non-existent, and most of the players are dead. Why is this important to you?


> Obviously we're talking about a period where documentation is sparse or non-existent, and most of the players are dead. Why is this important to you?

Because if you are going to make a claim, it'd be useful if you had a citation for it. And because we're as close to the events that occurred as is possible, and it's only going to get 'worse' as time passes, we should try to get it right now.

You say that "all Hafner and Lyon could do was interview a few people". The phrase "few people" is doing a lot of heavy lifting: two of those "few" were Herzfeld (who ran ARPA at the time) and Taylor (who was in charge of getting ARPAnet going). If they didn't know why ARPA created ARPAnet, then who else would?

If there were/are supporting documents, interview, etc, which show that surviving a nuclear attack was a motivation, they should be shared.

I don't necessarily care what the motivations were, but I'd rather not have folks repeating hand-wavy claims that don't seem to have any supporting documentation. That's what the linked-to web page is about in the first place: trying to trace and dispel an apparent myth.


Absence of evidence is not evidence of absence.

I have no obligation to do a lot of research for you which, I'm sure, would make no impression on you anyway.


> I have no obligation to do a lot of research for you which, I'm sure, would make no impression on you anyway.

"What can be asserted without evidence can also be dismissed without evidence." — Genghis Khan (or https://en.wikipedia.org/wiki/Hitchens%27s_razor )


OK. I'll just dismiss your claims, too, and we're done.


> OK. I'll just dismiss your claims, too, and we're done.

LOL. I'm the one quoting Wikipedia and books and putting links to references in his posts. You're the one that has not put a single citation in anything that he's written in this sub-thread.


You're making a historical claim and he's disputing it. Is your counterargument that it was a long time ago, so we can just make up whatever we want?


No, the "counter argument", as you put it, is that if you only "know" things that come with a link, you have no value over an LLM.


> No, the "counter argument", as you put it, is that if you only "know" things that come with a link, you have no value over an LLM.

As opposed to knowing things without evidence and trying to convince people of a position without providing evidence? Is that how history is supposed to be done, without evidence?

At least with a link you can gauge the quality of the reference and its sources.


I think it's also worth noting that the physical infrastructure of certain portions of the circuit-switched AT&T Long Lines network (which also carried the AUTOVON network for the DoD) was designed, or at least was intended, to survive a nuclear strike. Theoretically, traffic would have been manually re-routed around destroyed areas in a post-strike scenario.

There are certain underground bunker sites on the historical L4 transcontinental coaxial cable routes which were designed with equipment mounted on springs, massively thick concrete walls, decontamination showers, and so on. Other mountaintop and flat-land microwave point-to-point relay sites were designed to take a certain amount of overpressure and possibly would have survived nearby smaller nuclear weapon strikes (though obviously not a direct hit).

Much of this pre-dates 1969 and the concept of the ARPANET, as you'll see if you dig into things like which telecom services/links supported the NIKE missile batteries built around major cities, and which medium-to-long-distance links carried the data feeds from regional radar sites into the SAGE direction centers.

Here's a fairly typical example of the more costly sort of site; you can see photos of it under construction and how much extremely reinforced underground concrete was involved.

https://coldwar-ct.com/Home_Page_S1DO.html


Perhaps corroborating your point, excerpted from Waldrop's 'The Dream Machine':

"Why did ARPA build the network?” Lukasik asks. “There were actually two reasons. One was that the network would be good for computer science.""

...

"But there was also another side to the story, which was that ARPA was a Defense Department agency. And after Eb [Rechtin] came in, defense relevance became the dominant notion. Everybody was writing relevance statements. "

...

"So in that environment, I would have been hard pressed to plow a lot of money into the network just to improve the productivity of the researchers. The rationale just wouldn’t have been strong enough. What was strong enough was this idea that packet switching would be more survivable, more robust under damage to the network. If a plane got shot down, or an artillery barrage hit the command center, the messages would still get through. And in a strategic situation—meaning a nuclear attack—the president could still communicate to the missile fields. So I can assure you, to the extent that I was signing the checks, which I was from nineteen sixty-seven on, I was signing them because that was the need I was convinced of.”

Waldrop, M. Mitchell. The Dream Machine (p. 273). Stripe Press. Kindle Edition.


Yes, quite. This is a very silly article which ignores a lot of the real history in favour of a cutesy top-up of early-1990s nostalgia, which came a good 20-30 years after the events that really matter.

Saying "The Internet isn't ARPANET" is ridiculous. Of course it isn't. ARPANET was an academic research project with a mix of defence and open R&D requirements. The Internet is a collection of extra layers of commercial development on top of some of that R&D.

Academic research projects are rarely hardened because the point of the project is to investigate possibilities, not to spend hundreds of billions building a physically bomb-proof network that's useless because the core tech doesn't work.

When the Berlin Wall came down the goals changed, but the core concept of distributed scalable robustness is still very much there today. Of course now we have too many choke points, so it's not as robust as it could be. But if someone cuts a cable packets will still find a longer, slower way around as long as the bandwidth is there.


In the case of all out nuclear war with the USSR the choke points would be the 3000 or so targets that would be hit by Soviet nukes.


In my opinion, packet switching is an idea that would have been inevitable once the technology to support it was in place. The really important idea of the Internet is the insight that the envelope should be independent of the data it contains.

I say this because I work with specialized equipment that uses RS-232 serial protocols for communication, and despite many decades of available examples of "the right" way to do it (e.g. the OSI model), engineers continue(d) to not understand this lesson and design ad hoc protocols that don't respect this division of concerns, and which suffer for it. Even in IP protocols designed to modernize this by wrapping the RS-232 serial packets in internet packets, they repeat the same mistake(s). That is, in the midst of having to deal with a more complicated serial-protocol-to-IP-protocol conversion (made harder because the original format didn't clearly distinguish envelope from data), they compound the problem by mixing levels in the new protocol.

For example: writing IPv4 into the standard for a proprietary Application- or Session-level protocol, even as our network is banning IPv4 and requiring IPv6, when the protocol is not even defined for IPv6 addresses. It should be agnostic to IP-level concerns entirely. (The engineers said, while designing the system, "well, .1 is going to be this piece of equipment, .2 is going to be the other side, if it's blah blah blah it's .3, etc.")
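To make the envelope-vs-contents point concrete, here's a minimal sketch in Python of an invented, hypothetical frame format whose envelope treats addresses and payload as opaque bytes, so switching from IPv4-sized to IPv6-sized addresses never touches the framing layer:

  import struct

  def frame(src: bytes, dst: bytes, payload: bytes) -> bytes:
      # The envelope records only lengths; it never interprets the
      # addresses or the payload, so the address family can change
      # without touching this layer.
      return struct.pack("!BBH", len(src), len(dst), len(payload)) \
          + src + dst + payload

  def unframe(data: bytes):
      src_len, dst_len, pay_len = struct.unpack("!BBH", data[:4])
      body = data[4:]
      src = body[:src_len]
      dst = body[src_len:src_len + dst_len]
      payload = body[src_len + dst_len:src_len + dst_len + pay_len]
      return src, dst, payload

  # 4-byte (IPv4-sized) or 16-byte (IPv6-sized) addresses both work.
  msg = frame(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", b"reading=42")
  print(unframe(msg))

The protocol I'm complaining about instead baked specific IPv4 addresses into the application layer, so no layer below it can change independently.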


People keep repeating this mistake at all layers of the stack because they fail to see different layers as managing different sets of concerns and want to treat it as a monolith.

A very recent and high level example of this is how the Matrix protocol was architected. By defining all the message formats in terms of JSON-over-HTTP it makes it difficult to only use part of the protocol and not all of it, and makes it difficult to use it over alternative transports since the assumptions of HTTP idioms are baked all the way down.


This is not true. For instance, https://matrix.org/blog/2021/06/10/low-bandwidth-matrix-an-i... shows how you can (very easily) swap http+json for coap+cbor as an alt transport for Matrix.


This is a great example of what I mean. They had to swap in formats that were designed to be equivalent to HTTP and JSON in order not to have to rewrite large parts of the protocol. And that project was just a prototype that was never productionized, to my knowledge.


> The idea of packet switching as a way to make a communication network more robust came to Paul Baran a few years earlier,

I remember back in the days when the internet was first starting to happen, seeing comments about packet switching. The gist of what I recall was that there was lots of dark fiber, and folks like AT&T were trying to prevent it from being used and force slow/expensive packet switching on us, billing us per-packet or something.

It is hard to dig up any articles about this, however.


When was this? I would have thought the internet was well established before fiber optic cabling was widely used.


AUTODIN preceded it, but ARPANET won out over AUTODIN II.



For all intents and purposes, the Internet was created by and for the US military, not for public utility, and one of the main reasons for its creation, if not the sole reason, was to be able to withstand a large-scale nuclear attack. It was not created for remotely feeding the cats with IoT at home while you are away. It's easily a top-ten invention of the 20th century.

It's not only the Internet: the FFT was also created to detect illegal nuclear testing, and it's included in the top ten algorithms of the 20th century [1],[2]. There's no shame in admitting the facts.

If not for military purposes, the Internet would probably never have seen the light of day. The Internet's packet-switched end-to-end network precursor and predecessor in France, proposed and implemented by Louis Pouzin (famously called the 4th man of the Internet, who coined the word "datagram", the very fundamental concept in packet switching), lost out to other communication technology for France's nationwide network implementation [3].

The original inventors of the Internet proposed the idea to AT&T; the US communication behemoth and monopoly at the time laughed at the packet-switching idea, saying that it would not work and did not make any economic sense. The only reason the Internet was created and survived is its military applications, and as with any military application the main objective is to maintain and sustain command and control, which the Internet is more than capable of. I once read that someone simulated 99% infrastructure demolition (as in a nuclear attack) and Internet communication managed to survive intact. I cannot recall the article now; perhaps this is where ChatGPT becomes handy. The fact that the US military maintained a separate military network (similar to the Internet) apart from the public Internet after the Internet went public and grew in popularity provides a hint as to why it's so important to the US military.

[1] Great algorithms are the poetry of computation:

https://www.andrew.cmu.edu/course/15-355/misc/Top%20Ten%20Al...

[2] The Algorithm That Almost Stopped The Development of Nuclear Weapons:

https://www.iflscience.com/the-algorithm-that-almost-stopped...

[3] Louis Pouzin:

https://en.wikipedia.org/wiki/Louis_Pouzin


> For all intents and purposes, the Internet was created by and for the US military, not for public utility, and one of the main reasons for its creation, if not the sole reason, was to be able to withstand a large-scale nuclear attack.

And yet Charles M. Herzfeld (who was ARPA director at the time) and Robert Taylor (who was in charge of getting ARPAnet going) say otherwise: it was to link computing centres across the country so that resources could be shared and collaboration would be easier.

If Herzfeld and Taylor didn't know why ARPA created ARPAnet then who else would?

The entire linked-to article is about tracing the source of the "able to withstand large scale nuclear attack" origin-story myth:

> However, the documentation is voluminous and the people who were in the room have all given a consistent story about how it was to build a network for time-sharing of expensive computers and better collaboration.


Interestingly Charles M. Herzfeld once chaired the Nuclear Weapons Council and the Intelligence R&D Council.

According to Wikipedia, apparently "Taylor had convinced ARPA director Charles M. Herzfeld to fund a network project earlier in February 1966, and Herzfeld transferred a million dollars from a ballistic missile defense program to Taylor's budget". That budget is probably equivalent to tens of millions of USD in today's money. The narrative "to link computing centres across the country so that resources could be shared and collaboration would be easier", funded with a large chunk of budget from ballistic missile systems that could carry nuclear warheads distributed across the country, does not make sense to me.


Packet switching was invented by an American and a Brit independently, IIRC.


Yes, the linked article also talks about Donald Davies, who more directly influenced Arpanet.

But the point about Paul Baran is that for him packet switching was a way to make communications more resilient in the face of nuclear bombing (and other things), and he had at least some influence on the birth of the Internet. So if you ask me "what is the relation between the Internet and a nuclear strike?", my first answer is "Paul Baran".


> But the point about Paul Baran is that for him packet switching was a way to make communications more resilient in the face of nuclear bombing (and other things), and he had at least some influence on the birth of the Internet […]

From chapter two of Wizards:

> Roberts also learned from Scantlebury, for the first time, of the work that had been done by Paul Baran at RAND a few years earlier. When Roberts returned to Washington, he found the RAND reports, which had actually been collecting dust in the Information Processing Techniques Office for months, and studied them. Roberts was designing this experimental network not with survivable communications as his main—or even secondary—concern. Nuclear war scenarios, and command and control issues, weren’t high on Roberts’s agenda. But Baran’s insights into data communications intrigued him nonetheless, and in early 1968 he met with Baran. After that, Baran became something of an informal consultant to the group Roberts assembled to design the network. […]

* https://www.goodreads.com/book/show/281818.Where_Wizards_Sta...

* https://en.wikipedia.org/wiki/Larry_Roberts_(computer_scient...

Baran's influence was perhaps in theory and algorithms, not in intent/use.


For a couple of decades there was a totally independent network stack in the UK (JANET) with its own equivalent of RFCs and so on. History gets written by the victors.


A nuclear attack could be natural selection for the internet's evolution, honestly.


Interesting story, but I have a bit of anecdotal evidence to share. Back when I was a freshman at UIUC in 1989, I was given a campus tour and told that one of the buildings there was designed to collapse outward in order to protect the equipment in the basement. That equipment was a national computer network (not yet called the internet!).

So at the very least, the origin of this story predates 1991 by at least two years.

I don't recall the name of the building but here it is on Google maps.

https://www.google.com/maps/@40.106201,-88.2268272,3a,75y,91...

Edit: It's not clear from my original comment, but the reason for the collapse would presumably be a nuclear strike. I remember this because we grew up at a time of constant fear of a Russian nuclear strike, and I couldn't help but wonder why anyone on earth would want to nuke Champaign.

Edit: Ah, here we go! It is the Foreign Languages Building (FLB), later renamed. I remember having to trudge here at 7am on snowy winter days to listen to Japanese language cassettes.

https://uihistories.library.illinois.edu/virtualtour/maincam...

Edit: And here's a contemporary article about the FLB, which also cited some of the crazy rumors about this building.

https://imgur.com/HXenjnt.png


I was going to share this story but you beat me to it. They're still claiming this in tours ~2017.

The building was called the Foreign Languages Building until very recently and is now called the Literatures, Cultures & Linguistics Building.

Relevant info from the UIHistory site:

"Located on the site of the former Old Entomology Building, ground was broken on the Foreign Language Building (FLB) on December 18, 1968.

A popular myth is that the building's distinctive architecture was a result of its being designed to house a supercomputer on campus called Plato. The building was supposedly designed so that if it was bombed, the building's shell would fall outwards, protecting the supercomputer on the inside. It is also rumored that the building's interior layout was a result of trying to confuse Soviet spies and prevent them from stealing secrets from the supercomputer.

In reality, the building's architecture is not actually all that unique and was a popular style of the day. In fact, just a few blocks to the west, one may find the Speech and Hearing Sciences Building, which a 2-story clone of the building. Plato itself was real, but refered not to a secret government program, but rather to the first "modern" electronic learning system, the forbearer of course software like WebCT and Mallard. The mainframe computer that ran the Plato system was located in north campus, in a building which used to reside on the west side of the Bardeen Quad." [0]

[0] https://uihistories.library.illinois.edu/virtualtour/maincam...

Hilarious that the myth extends to the interior design - the basement really is a maze the first few times you visit.


Plato was in fact real... I used it many times! Looking back, it was pretty impressive technology for its day but was quickly becoming obsolete. I hated having to walk all the way to campus to get some physics units in that I missed.

I vaguely recall that sometime around the Gulf War, I was able to modem in and connect remotely. Shortly after, I stopped getting Plato assignments!


My parents worked at UIUC in the early 1970's.

Plato was an early interactive learning system; the supercomputer was called the ILLIAC IV.

The building was called the "Center for Advanced Computation". I don't know if the computer was in that building, but I don't think they were exactly hiding it from the Soviets.


I wonder how long that equipment would survive being exposed to the elements after the collapse.


My guess is that something that important was protected by reinforced ceilings/floors.


It's not easy to make things stay waterproof after such an event. Water will find a way. The simplest thing will likely be the Achilles heel of such thoughtful engineering.


It doesn't mean that it should stay there, as-is, forever. Like most failover solutions, it gives you a bit of time during an incident to come up with a proper solution for the future.

Having it not fully waterproof (and maybe they also took that into account) was probably seen as an improvement over being totally crushed by rocks and bricks.


The Internet? No. The Arpanet? Also no. But SAGE was: its prototype (the Cape Cod air defense project) was a packet-switching network, and SAGE itself was as well... And staying up on doomsday for a bit of nuclear combat was the essence of SAGE's functional specification. SAGE pioneered most of the concepts and technologies of the Arpanet, whose purpose was absolutely not combat, so it is easy to imagine how the nuclear-strike-resilient Internet urban legend evolved.


Found it!!! The proof that ARPAnet was based on the idea of command and control! Command & Control means a network capable of surviving a nuclear attack.

Licklider, who wrote the Intergalactic Computer Network memo that started it all, and who was heavily involved in ARPAnet and in bringing in all the people involved (including Baran and Davies), specifically mentions ARPA's Command-and-Control interests in his paper!

"It is necessary to bring this opus to a close because I have to go catch an airplane. I had intended to review ARPA’s Command-and-Control interests in improved mancomputer interaction, in time-sharing and in computer networks. I think, however, that you all understnad [sic.] the reasons for ARPA’s basic interest in these matters, and I can, if need be, review them briefly at the meeting. The fact is, as I see it, that the military greatly needs solutions to many or most of the problems that will arise if we tried to make good use of the facilities that are coming into existence."

https://worrydream.com/refs/Licklider_1963_-_Members_and_Aff...


Instead of quoting from the end of that document, perhaps quote from the beginning:

> In the first place, it is evident that we have among us a collection of individual (personal and/or organizational) aspirations, efforts, activities, and projects. These have in common, I think, the characteristics that they are in some way connected with advancement of the art or technology of information processing, the advancement of intellectual capability (man, man-machine, or machine), and the approach to a theory of science.

The word "military" only exists in the last few paragraphs of the document. Most of it is about workflows and resource sharing:

> When the computer operated the programs for me, I suppose that the activity took place in the computer at SDC, which is where we have been assuming I was. However, I would just as soon leave that on the level of inference. With a sophisticated network-control system, I would not decide whether to send the data and have them worked on by programs somewhere else, or bring in programs and have them work on my data. I have no great objection to making that decision, for a while at any rate, but, in principle, it seems better for the computer, or the network, somehow, to do that. At the end of my work, I filed some things away, and tried to do it in such a way that they would be useful to others. That called into play, presumably, some kind of a convention-monitoring system that, in its early stages, must almost surely involve a human criterion as well asmaching [sic.] processing.


Listen, mr throwaway account, if we're starting at the top of the document:

"The ARPA Command & Control Research office has just been assigned a new task that must be activated immediately, and I must devote the whole of the coming week to it."

You know, the line you had to skip a few paragraphs ahead of to find your quotes.


From Wizards (chapter one):

> Licklider was no exception to the rule that people didn’t spend a long time at ARPA. But by the time he left in 1964, he had succeeded in shifting the agency’s emphasis in computing R&D from a command systems laboratory playing out war-game scenarios to advanced research in time-sharing systems, computer graphics, and improved computer languages. The name of the office, Command and Control Research, had changed to reflect that shift, becoming the Information Processing Techniques Office.

[…]

> Taylor told the ARPA director he needed to discuss funding for a networking experiment he had in mind. Herzfeld had talked about networking with Taylor a bit already, so the idea wasn’t new to him. He had also visited Taylor’s office, where he witnessed the annoying exercise of logging on to three different computers. And a few years earlier he had even fallen under the spell of Licklider himself when he attended Lick’s lectures on interactive computing.

> Taylor gave his boss a quick briefing: IPTO contractors, most of whom were at research universities, were beginning to request more and more computer resources. Every principal investigator, it seemed, wanted his own computer. Not only was there an obvious duplication of effort across the research community, but it was getting damned expensive. Computers weren’t small and they weren’t cheap. Why not try tying them all together? By building a system of electronic links between machines, researchers doing similar work in different parts of the country could share resources and results more easily. Instead of spreading a half dozen expensive mainframes across the country devoted to supporting advanced graphics research, ARPA could concentrate resources in one or two places and build a way for everyone to get at them. One university might concentrate on one thing, another research center could be funded to concentrate on something else, but regardless of where you were physically located, you would have access to it all. He suggested that ARPA fund a small test network, starting with, say, four nodes and building up to a dozen or so.

* https://www.goodreads.com/book/show/281818.Where_Wizards_Sta...

Also:

> Licklider described how he had re-envisioned command and control research as research into interactive computing as follows:[5]

* https://en.wikipedia.org/wiki/Information_Processing_Techniq...

From chapter two of Wizards:

> Roberts also learned from Scantlebury, for the first time, of the work that had been done by Paul Baran at RAND a few years earlier. When Roberts returned to Washington, he found the RAND reports, which had actually been collecting dust in the Information Processing Techniques Office for months, and studied them. Roberts was designing this experimental network not with survivable communications as his main—or even secondary—concern. Nuclear war scenarios, and command and control issues, weren’t high on Roberts’s agenda. But Baran’s insights into data communications intrigued him nonetheless, and in early 1968 he met with Baran. After that, Baran became something of an informal consultant to the group Roberts assembled to design the network. […]

* https://en.wikipedia.org/wiki/Larry_Roberts_(computer_scient...


I think it is a bit convenient to toss out all the clear evidence that they were developing survivable networks.


What is clear is that:

* Licklider changed the name from "Command & Control Research" to "Information Processing Techniques Office" before he left and Robert Taylor took over.†

* Taylor convinced Charles M. Herzfeld to build a resource sharing network.

* Taylor recruited Larry Roberts to design/build ARPAnet. Wesley A. Clark was part of the design/build team.

* Roberts didn't have any kind of goal related to nuclear survivability (per interviews with him in published sources/books).

* Roberts met Donald Davies. Davies had no interest in nuclear survivability. Davies introduced Roberts to Paul Baran's work.

Is any of the above in dispute?

Did anyone other than Baran ever express interest in nuclear survivability?

† Given Licklider's efforts in the name change, can anything be gleaned about his intentions‡ from the direction he wanted the office/department to go?

‡ Can anything further be gleaned from the fact that one of the first papers Licklider published was called "Man-Computer Symbiosis"? https://en.wikipedia.org/wiki/Man-Computer_Symbiosis See perhaps §5.1:

> Any present-day large-scale computer is too fast and too costly for real-time cooperative thinking with one man. Clearly, for the sake of efficiency and economy, the computer must divide its time among many users. Timesharing systems are currently under active development. There are even arrangements to keep users from "clobbering" anything but their own personal programs.

> It seems reasonable to envision, for a time 10 or 15 years hence, a "thinking center" that will incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval and the symbiotic functions suggested earlier in this paper. The picture readily enlarges itself into a network of such centers, connected to one another by wide-band communication lines and to individual users by leased-wire services. In such a system, the speed of the computers would be balanced, and the cost of the gigantic memories and the sophisticated programs would be divided by the number of users.

* https://groups.csail.mit.edu/medg/people/psz/Licklider.html


Norway was the first nation outside of the US to get connected to the Arpanet¹ ² ³.

This was done for the US military to have "real-time" access to seismic data from sensors in Norway that would detect nuclear activity of various forms, primarily in the Soviet Union.

I can at least say ARPANET was expanded to Norway for military purposes.

A family member of mine was the first foreigner to "chat" with folks in the US via ARPANET.

And later given access to compute resources in the US, for work that was a bit hush hush.

¹ https://www.nb.no/en/story/da-norge-fikk-internett/#:~:text=....

² https://www.norsar.no/about-us/history/arpanet

³ https://www.ffi.no/en/news/from-the-us-to-ffi-kjeller-was-fi...


There's a bunch of good stuff on YouTube from the folks who actually developed the Internet. Here's a couple from Leonard Kleinrock, whose MIT thesis laid out much of the math behind packet switching and whose lab sent the first Internet message:

https://www.youtube.com/watch?v=vuiBTJZfeo8

https://www.youtube.com/watch?v=rHHpwcZiEW4

And from Bob Kahn, who designed the router of that early Internet (interviewed by Vint Cerf, with whom he co-invented TCP/IP):

https://www.youtube.com/watch?v=EKxNMTVnBzM

https://www.youtube.com/watch?v=hKZ6tJcQpcI

A key innovation of the Internet was packet-switching. Previous networks like AT&T's telephone system were circuit-switched: the configuration of the network, and the route between source and destination, are inherent properties of the network, and once a connection is established it can't be easily reconfigured. Packet-switching makes the source and destination a property of the message, and the network is then responsible for figuring out a route from source to destination. Notably, because all the information needed to specify the destination is included in the message, it can be retried or take a totally different route.
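A toy sketch of that last property in Python (invented four-node topology, with node names borrowed from the first ARPANET sites): because each datagram carries its own destination, every hop can make an independent forwarding decision over whatever links happen to be up.

  import random

  # Hypothetical topology: each node knows only its live neighbors.
  links = {
      "UCLA": ["SRI", "UCSB"],
      "SRI": ["UCLA", "UTAH"],
      "UCSB": ["UCLA", "UTAH"],
      "UTAH": ["SRI", "UCSB"],
  }

  def forward(packet, node, visited=()):
      # No pre-built circuit: the destination rides in the packet, so
      # any node can pick any surviving neighbor to hand it to.
      if node == packet["dst"]:
          return [node]
      for neighbor in random.sample(links[node], len(links[node])):
          if neighbor not in visited:
              path = forward(packet, neighbor, visited + (node,))
              if path:
                  return [node] + path
      return None  # no surviving route from here

  packet = {"src": "UCLA", "dst": "UTAH", "data": "LO"}
  print(forward(packet, packet["src"]))  # e.g. ['UCLA', 'SRI', 'UTAH']

In a circuit-switched network the UCLA-UTAH path would have to be set up, end to end, before the message could move at all.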

Most things have multiple causes, and the Internet definitely had several. Its scalability and distribution properties were certainly among them: a centralized system like the telephone network cannot scale to new uses and many new endpoints the way a distributed system like the Internet can. According to Kleinrock, the need for management to keep an eye on all the research they were funding was apparently another. But given that it was funded by DARPA, the resilience of a packet-switched network in scenarios where individual circuits might go down was probably a major reason for the interest in this technology. It doesn't necessarily have to be a nuclear strike: there were a number of scenarios of interest to RAND and DARPA that could involve a portion of the nation's communication network being disabled and still needing to get messages through.

This is also a good lesson for designers of future networks and computing systems. The end-to-end principle remains as valuable to system designers today as it was in the 1960s.


I'm an enthusiast and writer on the history of military communications technology, particularly during the Cold War. The internet is very much part of this story, and I am asked about this controversy from time to time. The problem is, I find that people who argue for both positions are becoming much too fixated on the idea that there was some single set of influences on a complex project. There isn't really any answer to "was the internet designed to withstand nuclear war" for the same reason that there isn't really any single answer to any question about the historical motivations of complex undertakings. That's just not how history works.

There are some facts which we know to be true:

1) Various components of the defense complex were actively researching survivable C2 communications, particularly beginning in the 1950s although there were earlier precedents. Many of these efforts involved ideas that were similar to those used in modern computer networking, and they sometimes culminated in built systems with meaningful similarities to the internet, like AUTODIN.

2) A diverse cast of academics, contractors, and government entities were involved in these projects. Sometimes the same people worked on multiple projects. Even when they didn't, there were often communications between these entities, but sometimes, due to security concerns, there weren't. Much of this communication was informal, so in retrospect it is hard to tell who knew what. There can be surprises in both directions.

3) Communications technologies often emerge naturally from innovations in other fields, technical advances, etc., so while many similar communications technologies have a shared intellectual heritage, it is also not that unusual for totally independent efforts to arrive at roughly the same point. Radio is a classic example.

I think that, in consideration of these facts, we can reach two conclusions:

A) It seems likely that some of the people involved in ARPANET were familiar with survivable C2 research and applied those ideas to their efforts. After all, lots of people and lots of organizations worked on these programs, and some of the research was widely distributed within the defense-industrial-academic complex during the '50s and '60s.

B) It also seems likely that ARPANET independently arrived at similar endpoints. After all, it had some similar constraints and objectives, and its creators were working with mostly the same underlying technology.

These two do not contradict each other. In fact, I think it is by far most likely that both are true in the cases of different individual people and different individual aspects of the design. That's just how these things are.

Before considering The Internet specifically, let's consider a couple of similar situations in the development of technology:

1) People sometimes do a great deal of hand-wringing over the assignment of labels like "the first programmable computer." I have always been very wary of giving these sorts of titles without a fair number of weasel words. Consider, for example, the ENIAC, usually called the "first programmable computer." And yet, there is a compelling argument that a number of the substantial design elements of ENIAC, including its programmability, are derived from work done for an earlier codebreaking machine called Colossus. This connection remained unknown for many years because of the secrecy surrounding Colossus... a level of secrecy that means that, while a number of people who worked on Colossus and later worked on ENIAC almost certainly carried over ideas, they wouldn't have admitted to having done so as Colossus was officially unknown to the ENIAC project. The particular climate of wartime and military technological development means that ideas often move around in subtle ways, and knowledge of where an idea came from is intentionally obscured. The history of military technology can be a very difficult field for this reason.

2) Almost at the opposite ends of the spectrum, information often flows very freely in academic and industrial laboratory environments, and so ideas spread without clear documentation. I am reminded of a piece I wrote years ago, on the fact that several early internet protocols use a similar set of three-digit status codes in similar ways (HTTP, for example). Oddly enough, these pseudo-standard status codes appear almost simultaneously in RJE and FTP, but neither mentions the other. Over time I have been lucky enough to get in touch with several of the authors of both RFCs, and while none of them can recall the origin of the codes, they agree with my general theory: the two separate groups, both at MIT, had just shared notes during the development of the protocols and one of them informally adopted the status code scheme from the other. People talk to each other, and ideas often move between projects without formal documentation.

So, with those two examples of subtle cross-project influence in mind, can we say anything suggestive about ARPANET? Well, there are certain suggestive details. For example, by the time ARPANET's first IMPs were built, at least one researcher (Howard Frank) was engaged in ARPANET research who had previously consulted on survivable C2 networks. But the ARPANET project had already set certain design details like packet-switching by that point... which raises the question of whether packet-switching is even the important part. Howard Frank wasn't working on packet switching itself; he was working on performance and reliability modeling of topologies for packet switching, an area where military C2 research was probably generally ahead of ARPANET research at that time. So, if we take the face-value assumption that aspects of ARPANET topology research were probably based on survivable C2 research, does that mean that ARPANET was "created to survive a nuclear strike?" Or did it merely end up that way? It ends up coming down to splitting hairs about what "created" means, an exercise that sort of ignores the fact that technological developments always combine established ideas and new ones.

ARPANET was not built for military C2. It was used for military C2 later on, but during the early days of ARPANET the military had more wide-area networking initiatives than you could shake a stick at and ARPANET was not one of the ones contracted for C2 purposes.

Was ARPANET designed for nuclear survivability? The most obvious answer is "no," because ARPANET's early topology lacked the level of redundancy that actual survivable networks of the era had. But this seems to have been more a consequence of funding and resource availability than of intentions, because ARPANET researchers had done plenty of work on performance and reliability, using basically the same methods as were used for survivable networks.

So maybe, at the end of the day, the "best" answer to this is sort of a boring one: meh. Nuclear survivability was obviously not a goal of ARPANET because ARPANET did not build out a survivable network. That said, ARPANET incorporated most of the technical ideas from survivable networks of the era. It is a virtual certainty that ARPANET got some of those ideas from earlier and simultaneous research into survivable networks, but it is also a virtual certainty that ARPANET arrived at some of them independently. If you consider "packet switching" to be the main technical advancement of ARPANET, it's probably not an idea that ARPANET got from survivable C2 research, because the historical record looks pretty confident that multiple people independently arrived at packet switching. That ought not to be surprising to anyone, because packet switching is a fairly direct evolution of practices established in radio and telegraph networks almost fifty years before. But, I also think it's an unnecessarily restrictive view of ARPANET's technical contributions, and other aspects of ARPANET like routing policy were definitely influenced by survivable communications research and, to some extent, directly based on survivability work.

What all this means about why ARPANET was "created" or what it was "designed" for is strictly a matter of how you interpret those words. Yes, articles and books and so on should not repeat the claim that "the Internet was created to survive a nuclear strike," because the truth or falsity of that statement requires a lengthy and nuanced explanation. When we express history as simple facts, we should try to stick to the ones that are, like, 90% true, instead of, like, 50% true. But "facts" about history are rarely 100% true; we're just not that lucky. It all happened a long time ago, there were a lot of people involved, different people were doing different things, and it's a tangled mess of motivations and influences. That's why we study it.

Thanks for coming to my TED talk.

Postscript: Also, packet-switching is not at all intrinsic to survivable networks, although survivability certainly led to a lot of advancements in packet-switching. But there were also a lot of circuit-switched survivable networks, and for a good span of the Cold War, I would say that circuit-switching outnumbered packet-switching for hardened C2. You'll notice that AT&T, the military's #1 choice for hardened communications, was firmly not on the packet-switching side of things. But the military also contracted C2 projects to Western Union, who were basically arriving at modern packet switching by their own route (automated telegraph routing). This schism, between packet-switching and circuit-switching, remains a critical part of the data communications story today.


This is a good amount of info for people who are not familiar with the subject. To illustrate just one part of the pre-ARPANET hardened communications networks, I'll focus on a few typical usage scenarios.

The DOD for the most part did not build their own domestic communications networks spanning multiple states and across the country. A lot of it was carried by the AT&T Long Lines network, which had a mix of barely-hardened, not hardened, and some extremely hardened sites (L4 coaxial, project offices, etc).

If you map, for instance, the locations of all of the SAGE direction centers and all of the telecommunications links that fed the remote radar sites' data to them, you'll also find a ton of AT&T Long Lines mountaintop and hilltop sites mixed in with the ordinary central offices located in mid-sized to large cities.

In other places in the world the US DOD built its own bespoke communications networks for very specific command and control purposes. At one point in time there was a massive troposcatter network spanning nearly the entire Mediterranean Sea, with important radar sites in Italy and Cyprus linked to air bases and to the DOD's backbone communications links from Europe to the USA.

Similar massive troposcatter data links were built through the Aleutian islands.

Most of these were replaced by C-band geostationary satellite by the mid-1970s, as that became a higher-performance and more viable technology than the extremely massive, power-hungry, and expensive-to-maintain troposcatter stuff.


TL; DR: osmosis

Thanks, JB!


Probably not. Most government agencies (e.g. FEMA) aren't prepared to handle the aftermath of one, let alone keep backbone infrastructure running.

In the US, only high ranking government officials and STRATCOM are operating during a nuclear attack. Everyone else is on their own.


My understanding was that it was designed so that traffic could route around the destroyed segments. So if 9 out of the 10 routes between Lawrence Livermore and, say, MIT were destroyed, it would route traffic via the last remaining route. Obviously if Lawrence Livermore is destroyed, this is all moot.
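A toy sketch of that idea (node names and links are invented for illustration, not actual ARPANET topology): breadth-first search finds any surviving path, so destroying a link just shifts the traffic onto another one.

    # Hypothetical mini-network: BFS finds a path, and still finds one
    # after a link is "destroyed". Not ARPANET code, just the idea.
    from collections import deque

    links = {
        "LLNL": {"UTAH", "UCLA"},
        "UTAH": {"LLNL", "MIT"},
        "UCLA": {"LLNL", "MIT"},
        "MIT":  {"UTAH", "UCLA"},
    }

    def route(net, src, dst):
        """Return any surviving path from src to dst, or None."""
        seen, queue = {src}, deque([[src]])
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in net[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(route(links, "LLNL", "MIT"))  # a path via UTAH or UCLA
    links["UTAH"].discard("MIT")        # destroy the UTAH-MIT link
    links["MIT"].discard("UTAH")
    print(route(links, "LLNL", "MIT"))  # traffic reroutes via UCLA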

Also you have to remember back then the nukes were smaller.


Nukes were bigger; we have more, smaller warheads now, since we're interested in glassing surface area instead of setting volume on fire.


> Also you have to remember back then the nukes were smaller.

I always had the impression they actually got smaller, as more precise targeting reduces the need for higher yields.

If you know your bomb will detonate inside a building, you might as well use a conventional warhead and avoid all the diplomatic fallout of nuking someone.


Who needs nukes when you've got the flying ginsu?

<https://apnews.com/article/hellfire-r9x-al-zawahri-d0d25b7ed...>

Yes, more precise targeting / flight controls means smaller warheads, or simply kinetic-kill munitions.

Shaped charges also pack a punch depending on where your delivery is intended.

And of course, there is a nuclear version of same:

<https://en.wikipedia.org/wiki/Nuclear_shaped_charge>


> And of course, there is a nuclear version of same:

Sounds useful for demolition.


The Casaba-Howitzer was the weaponised application, apparently intended as a beam weapon. The mind boggles at what a nuclear shaped charge might accomplish.

The other use envisioned was for Project Orion, the nuclear-bomb-powered spaceship proposal. Directed blast would make for more efficient propulsion, it seems.


Plus, it's always better when most of the energy is communicated as Tungsten plasma.


Tungsten Plasma Beam Communication is my next band name.


I’d buy the tracks.


Nukes were much bigger then.


Irrespective of nuclear war, the design goal was always to create a network of redundant nodes with wide geographical dispersion. That solves two problems:

1. Increases network durability.

2. Allows uninterrupted participation by geographically separated parties.

This is a hallmark of military technology still very much in use today, and it goes as far back as the mid-1860s, when the new US Army Signal Corps began experimenting with the first electronic communication technologies and integrating them into conflict locations.

Geographically dispersed redundancies are still important to military communications. On a tactical level this typically involves things like radio relays on hilltops, tropospheric shots, and various line-of-sight technologies. On the strategic level it involves having multiple routes through the internet over different kinds of physical media moving in different geographic directions.

You don't see this as much in the commercial world, because sending network traffic through less efficient routes is slow and costly, so redundancies are only an emergency fallback. From a military perspective, the network could go down at any moment, so slow and expensive are still preferable to disconnection. As a result the military will eagerly employ many layers of redundant options. Cost is a less significant factor for the military, since they are the only ones who own all layers of their own stack and are thus able to operate in isolation.
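A minimal sketch of that layering logic, with invented transport names and numbers: take the most-preferred link that is still up, and accept a slow, expensive one over disconnection.

    # Hypothetical fallback ladder: prefer fast links, but any link
    # beats no link. Names, availability, and latencies are made up.
    TRANSPORTS = [
        ("fiber backbone",  {"up": False, "latency_ms": 5}),
        ("microwave relay", {"up": False, "latency_ms": 20}),
        ("satellite",       {"up": True,  "latency_ms": 600}),
        ("HF radio",        {"up": True,  "latency_ms": 2000}),
    ]

    def pick_transport(transports):
        """Return the most-preferred transport that is still up."""
        for name, state in transports:
            if state["up"]:
                return name, state["latency_ms"]
        raise RuntimeError("disconnected")  # the one outcome to avoid

    name, latency = pick_transport(TRANSPORTS)
    print(f"using {name} at ~{latency} ms")  # satellite: slow but alive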


I strongly recommend Where Wizards Stay Up Late, which this article cites a lot. Fascinating history of ARPANET and building the infrastructure at BBN (on the team was Will Crowther of Colossal Cave Adventure fame!). Inspiring book in the same genre as Soul of a New Machine.


Nearly all the Internet founders who are still alive are on the internet-history mailing list, which is still active. Vint is, for instance.

The author goes on and on about the 90's, I think because that is easier to document than the 60's. Tracing the evolution of the "nuclear" narrative back then -- who cares? This article is a whole lot of research proving nothing except the vapidity of the media.

The ARPANET did get started in the 60's, and packet-switching to provide "multiple paths in case one is destroyed" was indeed one of the motivations. Then it took on a life of its own and no one devoted any more thought to nuclear war.

mrighele's answer is excellent.


I found this article difficult to read and confusing.

In the title "... a nuclear strike". In the article: "... not kept in bunkers.. but on regular computers".

Well, is the idea to survive nuclear armageddon, or to survive a strike?

A strike can take out a city; if that happens, the network continues to function, minus whatever nodes were just lost.

Having everything inside a hardened bunker is not required.

As for nuclear armageddon, I am not sure that having computers inside bunkers would help keep the internet up, given it requires wiring or wireless infrastructure to communicate.

If the goal is having computer systems still operate, even on their own, then a bunker starts making sense.


> A strike can take out a city; if that happens, the network continues to function, minus whatever nodes were just lost.

That's true up to a limit. Regional consolidation of ISPs has led to centralization. For instance, the 2020 Nashville AT&T building bombing caused a fairly widespread phone and internet outage across five states. It took quite a lot of work to restore service, despite most of the switching equipment not being damaged.

Take out a few critical interconnection/peering points and you could have something of an internet Kessler Syndrome, as a thundering herd of traffic gets routed over overlapping, lower-preference routes. It would take a great deal of work by a relatively small group of people to stabilize, work made more difficult by communication problems and by the fact that many of the network engineers are not local to the equipment they manage.

Not to mention how something like that would surface all kinds of previously unforeseen bidirectional dependencies.
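A back-of-envelope sketch of that thundering-herd effect, with invented capacities and loads: when the preferred interconnect dies, its traffic lands on a lower-preference path that was never sized for it.

    # Hypothetical two-route example: failing the primary dumps its
    # load onto the backup, pushing it far past capacity.
    routes = [
        {"name": "primary IX",     "capacity_gbps": 100, "load_gbps": 90, "up": True},
        {"name": "backup transit", "capacity_gbps": 40,  "load_gbps": 10, "up": True},
    ]

    def fail(routes, name):
        """Kill one route and shift its load to the next surviving route."""
        for i, r in enumerate(routes):
            if r["name"] == name and r["up"]:
                r["up"] = False
                for other in routes[i + 1:]:
                    if other["up"]:
                        other["load_gbps"] += r["load_gbps"]
                        return

    fail(routes, "primary IX")
    for r in routes:
        if r["up"]:
            print(f'{r["name"]}: {r["load_gbps"] / r["capacity_gbps"]:.0%} utilized')
    # backup transit: 250% utilized -- congestion collapse, not routing failure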


Well, how deep can a nuclear assault reach? In Greece there are witnesses saying that the major internet provider's backbone fiber was found buried 10 cm below the road, just under the asphalt, in the gravel.


Google Fiber was infamous for doing it this way, called "microtrenching" or "nanotrenching": they cut a shallow trench into the pavement, lay the cable into it, and fill over with asphalt or other material. When done poorly (read: most of the installs that they've done), it erodes the road surface and the base and causes all sorts of problems. [0] [1] [2] [3]

Also, lots of providers just string up cable on poles, which are routinely snagged by trucks, hit by gunfire, run into by cars, etc., so even more vulnerable than shallow burial. Some of these cables are major links, so damage can cause wide-reaching problems. No match for an out-of-control sedan, let alone a nuclear strike. [4] [5]

[0] https://www.techrepublic.com/article/google-fiber-is-using-a...

[1] https://wpln.org/post/google-fiber-disruptions-have-some-say...

[2] https://www.nashvillescene.com/news/pithinthewind/google-tre...

[3] https://arstechnica.com/information-technology/2019/02/googl...

[4] https://www.wmur.com/article/crash-manchester-comcast-servic...

[5] https://www.cbsnews.com/sacramento/news/crash-vandalism-comc...

...and so on.


> Well, how deep can a nuclear assault reach?

Each hop of the internet uses power, and enough of the power infrastructure is above ground that nukes could knock it out. So even if the internet were entirely underground, and even if it were entirely fiber, it would need an underground-only power feed coming from an underground-only power generation source. Most internet service providers' infrastructure is above ground. Some telco plant is underground, but only useful for old POTS lines and some DS lines. Satellite ground station relays are above ground. Power plants are above ground. Solar panels are above ground.

I could be wrong, so after a nuclear event we should all try updating this thread, assuming M5 Computer Security is EMP-hardened and has backup power and a fuel contract with a fuel company that still exists. Most data centers are not EMP-hardened.


Fortunately there are autonomously-powered protocols:

<https://www.rfc-editor.org/rfc/rfc2549>


Good point. That means you and I can communicate at least.


The first digital computer was created to calculate army firing tables, but the first program it ended up running was for studying, in true WarGames style... global thermonuclear war.

That's right... computers, the internet, GPS, weather satellites, it was all for war. Paid for by the US and other world governments.

So the next time someone says "but Tor came from the government!" you can tell them this.


That's the narrative. Just like Tor's narrative is that it helps America's spies communicate from hostile jurisdictions. The former never got the same scrutiny, because back then we had a monoculture shaped by mainstream media (no other alternatives), and we just ate up whatever we were told.

The latter appears to be under more scrutiny lately, leading us to believe it is just like the lofty idea that VPN encryption keeps you completely anonymous from the big five.

Would spies operating out of China or Russia ever use a VPN or Tor? That would be painting a red target on their backs. So I wonder what the true intention for Tor is, as with the mysterious origin of Bitcoin and so many other things. We won't know.

One thing is for sure, what we believed to be bastions of Western democracy and privacy are no more.


I'm no expert on Tor, but IIRC the story is precisely that spies operating from hostile territory would have a red target painted on them from using encrypted communications...unless a whole lot of people in that hostile territory were also using encrypted communications. This is why Tor was released open source and wide adoption was encouraged.


It's been known that if you connected to Tor in a hotel located in a certain US-allied country (there have been briefings published around this, so you can take a guess), you would immediately become visible and targeted for a drive-by.

Tor just isn't as common as you think, nor is it widely adopted, due to unreliability. And the problem with that cover explanation is that you wouldn't know where Tor is widely used in the first place, so you could never find "safety in numbers".


If there is a global passive adversary, I'd still rather use Tor, so that sites don't see my IP and my ISP doesn't see my domain lookups.


If you use Tor without a VPN, as most do, it won't be your ISP that sees your domain lookups, mate.

If you do use a VPN with Tor, as some do, it won't be your ISP either.

We have an illusion of privacy, because there is no true privacy anymore with digital technology; and without privacy we don't have true freedom, so we live in a democracy in name only.


Well, either way, please vote blue. If neither party offers privacy, at least one offers me some other rights.


What do you mean?


In theory, yes. In practical terms, however, you just need to "bomb" the right places and 90% of communications would be gone for quite a while. Think PIXes and DNS root servers: destroy those, and only minor services would remain available. There are countries with a single PIX, sitting in regular rooms without any kind of security; unplug those and the whole country would be offline (to be fair, intra-ISP traffic would still work). And there's no need to go that far (bombing places): a bad actor who can cut some submarine fiber in the right places could cripple the whole world. Or just someone messing up the BGP config in a big ISP; no need to bomb or destroy anything, since a single bad command can cause major issues worldwide (it has happened before).
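The BGP point is easy to illustrate. Routers forward on the longest matching prefix, so a single mistakenly announced more-specific route captures the traffic. A hedged sketch (the prefixes are documentation ranges and the AS numbers are private-use examples, not real ones):

    # Longest-prefix match: the leaked /25 beats the legitimate /24.
    from ipaddress import ip_address, ip_network

    rib = {ip_network("203.0.113.0/24"): "via AS64500 (legitimate)"}

    def next_hop(rib, addr):
        """Pick the route with the longest matching prefix."""
        matches = [n for n in rib if ip_address(addr) in n]
        return rib[max(matches, key=lambda n: n.prefixlen)] if matches else None

    print(next_hop(rib, "203.0.113.7"))  # via AS64500 (legitimate)
    rib[ip_network("203.0.113.0/25")] = "via AS64511 (fat-fingered leak)"
    print(next_hop(rib, "203.0.113.7"))  # via AS64511 -- traffic hijacked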


The Internet as it is now is structured to maximize the profits of the Internet service providers.

This results in a structure very different from what was conceived originally for the purpose of being resilient to partial destruction.

For the latter purpose, the best structure is a decentralized mesh with mostly equivalent links. That is much less economical than what the Internet uses now, i.e. a hierarchy of links of increasing throughput that concentrates the traffic into a few very high-speed links passing through central high-capacity routers. As a result, the parts of the network where most of the traffic is concentrated are very vulnerable, and their destruction would affect everybody.
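A toy way to see the difference (topologies invented for illustration): remove one node from a five-node full mesh versus the hub of a hub-and-spoke network, and count which node pairs can still reach each other.

    # Full mesh vs. hub-and-spoke: survivability after losing one node.
    from itertools import combinations

    star = {"hub": {"a", "b", "c", "d"},
            "a": {"hub"}, "b": {"hub"}, "c": {"hub"}, "d": {"hub"}}
    mesh = {n: {m for m in "abcde" if m != n} for n in "abcde"}

    def connected_pairs(net, dead):
        """Count node pairs still reachable after removing `dead`."""
        alive = {n: net[n] - {dead} for n in net if n != dead}
        count = 0
        for u, v in combinations(alive, 2):
            stack, seen = [u], {u}
            while stack:
                cur = stack.pop()
                if cur == v:
                    count += 1
                    break
                stack.extend(alive[cur] - seen)
                seen |= alive[cur]
        return count

    print(connected_pairs(star, "hub"))  # 0 of 6 pairs survive
    print(connected_pairs(mesh, "a"))    # 6 of 6 pairs survive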


In theory the internet was designed to be fault tolerant and highly available. No bombs required.


Related. Others?

Was the internet designed to survive a nuclear attack? - https://news.ycombinator.com/item?id=33402256 - Oct 2022 (115 comments)

HN: Was the internet designed to resist nuclear attacks? - https://news.ycombinator.com/item?id=20415376 - July 2019 (3 comments)


How about banking infrastructure and data?

(I know, I know: the answer is that banks are the least of your concern after a nuclear war.)


Banks would actually be of primary concern after a nuclear war. Without them, there would be little economy to speak of.


The original creators of the early version of the Internet wanted it to be open and equal, making sure one entity couldn't control it. The fact that this design would also be helpful after a nuclear attack was used to get the required funding from the US military.


If it is never tested we can safely assume that no, it doesn't work.


Judging from some of the outages I've lived through, the Internet, or at least the web, is not even designed to have a billion people using it.


Does flash photography harm the internet? https://www.youtube.com/watch?v=Vywf48Dhyns


It was designed to survive a nuclear strike, but since the TCP/IP stack offers no protection against a backhoe or a shovel cutting the cables, I'm skeptical about such claims.


It sounds like a mastermind plan, but in reality it's the consequence of global collective intelligence working together on the same project over xx years.

It was certainly not designed with that much foresight; otherwise we would never have had the infamous BGP-spoofing attacks (which were basic and easy at the time), ARP-spoofing, not enough IPv4 addresses, etc.

The Internet has iteratively evolved to be reliable, with each iteration improving on top of the previous one. And this is not due to geniuses; it is due to sys/netadmins who want to sleep more at night, so they are forced to choose and operate reliable tech.


It's a great way to sell it, and it worked. Fortunately, the product (the internet) works even better than the slogan used to sell it. One derivative claim, "the internet treats censorship as damage and routes around it," is not used much these days, but I do remember it actually being close to reality, for a brief moment in time.


These days it's more like an anchor from a ship cutting a sea cable.

Interesting to think about: in the event of something like WW3, would these be some of the first things severed?


Absolutely. They are the spine of the world's communications; cut them and there is no more global internet.


> I was unable to find a way to reach Ed Krol or Bruce Sterling ...

Bruce Sterling was on Twitter when this was written, and I follow him on Mastodon now. I'll contact him there.


Some previous discussion in 2022: https://news.ycombinator.com/item?id=33402256


A solution can have more than one benefit or perceived positive outcome.

We were able to use system time between universities, and (we believed) it could also survive a nuke attack.


The benefits of a research project as described to the funding authority and the benefits of a research project after it is done are often not the same.


Kind of offtopic, but the font on the website seems really bad for me on both my phone and PC. Thankfully reading mode exists.


The more important modern question: was it designed to survive a software update?


I notice the title has changed. HN title is "Was the internet created to survive a nuclear strike?", but the web page's current title is "Was the internet designed to survive a nuclear attack?"

Cars were designed to hold drink cups. They weren't created to hold drink cups, but at this point in the history of the development of cars, they do hold drink cups by design.


Is anyone struggling to read the font?

I have a 34 inch 4k ultrawide if that makes a difference.


Also designed to survive a rogue backhoe (digging up a cable)


fyi - that font is almost illegible on chrome on x11 @ 96dpi


Ok internet, now do boat anchors.



The interesting thing from the first link is the Studies section, which indicates that Betteridge's Law is generally not accurate.

https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines#...


True. But referencing his "Law" is still a humorous way to summarize an article in an HN comment.


Is Betteridge's Law of Headlines Accurate?


Are All Generalisations False?


I mean, basically.


No.


No.

There are massive ISP trunks that have accidentally been destroyed, knocking out internet access for hundreds of people.


Licklider, who wrote the Intergalactic Computer Network memo that started it all, and who was heavily involved in ARPAnet and in bringing in all the people involved, including Baran and Davies, specifically mentions ARPA Command and Control in his paper.

At the beginning he says, "The ARPA Command & Control Research office has just been assigned a new task that must be activated immediately, and I must devote the whole of the coming week to it."

And then he goes on to say:

"It is necessary to bring this opus to a close because I have to go catch an airplane. I had intended to review ARPA’s Command-and-Control interests in improved mancomputer interaction, in time-sharing and in computer networks. I think, however, that you all understnad [sic.] the reasons for ARPA’s basic interest in these matters, and I can, if need be, review them briefly at the meeting. The fact is, as I see it, that the military greatly needs solutions to many or most of the problems that will arise if we tried to make good use of the facilities that are coming into existence."

https://worrydream.com/refs/Licklider_1963_-_Members_and_Aff...

It is strange that the article tries to start by saying it had nothing to do with Baran's research into bomb-resilient switching. But actually they relied heavily on his research, and Davies roped him in when designing the early ARPANET and Internet. Hence, it is based on his bomb-resilient switching, and therefore was based on ideas that were meant to survive a nuclear strike.

"You and I share a common view of what packet switching is all about, since you and I independently came up with the same ingredients. ... and [you were] the first to reduce it to practice." - Paul Baran to Davies

Davies also got his start on the Tube Alloys project in the UK and worked at NPL, in nuclear-weapon-related projects.

I guess you could argue that they didn't really think much about it beyond that. But I somehow doubt that they weren't thinking about it as they continued. Kahn was heavily involved in the project, and he also created the Strategic Computing Initiative, which among its many other goals funded supercomputing for large-scale simulation of atomic bombs. A nuclear war was very much on the mind of everyone involved.

This whole article feels like a misinformation propaganda piece. For what purpose I don't know.

The website is registered in Reykjavik, Iceland and hosted in Oklahoma. Which doesn't really tell me much.


From the article: "Also there’s a conspiracy tendency when it comes to grim folklore. Perhaps people denying the nuclear war connection have a political agenda, they were misinformed or they are too scared to admit it. It has its own defense built in that permits people trying to correct the narrative to be dismissed as trying to push an opinion or occluded political agenda."

Not saying that that disproves it, but it's somewhat amusing that it's lampshaded directly in the source that you feel might be propaganda.


It does not matter all that much whether the Internet, let alone ARPAnet, was created for that purpose. And the reason is that "The Internet" is not built and implemented by a single authority, nor by a fully-harmonized set of independent entities; and many or most of those entities are not committed to the initial Internet design goals. They're trying to promote their own interests - commercial, governmental, etc. - within the framework of the requirements they need to meet to operate recognized autonomous systems and participate in the higher-level routing using BGP:

https://en.wikipedia.org/wiki/Autonomous_system_%28Internet%...

https://en.wikipedia.org/wiki/Border_Gateway_Protocol

Do ASes get set up while maintaining/buttressing nuclear-strike resilience of the inter-AS network overall? I'm rather doubtful.

Do ASes get set up so that there's nuclear-strike resilience within the AS? Absolutely not. I mean, some might, but you're welcome to ask your typical ISP, or commercial corporation whether they plan for that.

----

... and that's before we mention the tendency in recent years for most (?) of the traffic to focus on a relatively small number of large "platform-website" providers, like Google, Facebook, and others, and their counterparts in China, Russia, and elsewhere. While those have their own resilience goals, it is entirely up to them whether they want to plan for continued operations past a nuclear strike.


In terms of what could bring the internet down, unrestrained capitalism is far more of a uniform existential danger and will destroy the internet long before a nuclear strike does.


Weirdly self-assured article? Still, it's good someone is talking about the history, but let's not forget that securing the budget for all of this came out of the literal war chest of the US.

Not an Elon fan, but FWIW and AFAIK Starlink's usage during the current war in Ukraine is the fully realized DARPA plan: the latest private (?) company in a long line providing internet infrastructure since about 1983, when both TCP/IP and Reagan's general privatization efforts slowly started to "take over" and extend the academic internet. All for the purpose of warfare; everything else was extra. Including keeping Licklider and generations of hippies happy, happily designing, building, and maintaining the early waves until businesses could take "the formulas" and go from there.

Vint Cerf:

> The earliest demonstration of the triple network Internet was in July 1977. We had several people involved. In order to link a mobile packet radio in the Bay Area, Jim Mathis was driving a van on the San Francisco Bayshore Freeway with a packet radio system running on an LSI-11. This was connected to a gateway developed by Virginia Strazisar at BBN. Ginny was monitoring the gateway and had artificially adjusted the routing in the system. It went over the Atlantic via a point-to-point satellite link to Norway and down to London, by land line, and then back through the Atlantic Packet Satellite network (SATNET) through a Single Channel Per Carrier (SCPC) system, which had ground stations in Etam, West Virginia, Goonhilly Downs England, and Tanum, Sweden. The German and Italian sites of SATNET hadn't been hooked in yet. Ginny was responsible for gateways from packet radio to ARPANET, and from ARPANET to SATNET. Traffic passed from the mobile unit on the Packet Radio network across the ARPANET over an internal point-to-point satellite link to University College London, and then back through the SATNET into the ARPANET again, and then across the ARPANET to the USC Information Sciences Institute to one of their DEC KA-10 (ISIC) machines.

> So what we were simulating was someone in a mobile battlefield environment going across a continental network, then across an intercontinental satellite network, and then back into a wireline network to a major computing resource in national headquarters. Since the Defense Department was paying for this, we were looking for demonstrations that would translate to militarily interesting scenarios. So the packets were traveling 94,000 miles round trip, as opposed to what would have been an 800-mile round trip directly on the ARPANET. We didn't lose a bit!

https://netvalley.com/archives/mirrors/cerf-how-inet.html


I mean, no horse in this race or anything, but sometimes there are informal reasons why a project gets funded. Getting the five-star to sign off on some of their budget for something new and uncertain may require the backroom, unpublished message: "because nukes make go boom".

I'm all for the founding fathers of the internet asserting that it wasn't a cold war imperative, on citeable paper, but that doesn't out-of-hand invalidate somebody's assertion that the culture motivating grant funding in the halls of defense at the time was continuity of government or C&C during the armageddon that everyone was anticipating.


yes


Ask Al Gore.



