Claude Shannon at Bell Labs (ieee.org)
139 points by woodandsteel on July 15, 2017 | 30 comments



Interesting links off the original article to the generation after Shannon, who implemented a lot of the ideas.

>> Reflecting on this time later, he remembered the flashes of intuition. The work wasn’t linear; ideas came when they came. “One night I remember I woke up in the middle of the night and I had an idea and I stayed up all night working on that.” <<

Do people think that putting Shannon somewhere like the Institute for Advanced Study would actually have sped up his thinking? Or is a level of distracting background activity actually helpful?


I don't think it would have helped, but I suspect the issue is less a distracting background offering serendipity/weak contacts and more the basic expectation of finishing and communicating things.

The IAS has something of a bad reputation for fostering rest rather than revolution (I think both Hamming and Feynman criticized it in rather strong terms). You should also remember that Shannon was given carte blanche at MIT because he was so famous and respected, and that's exactly when his public output went to zero. Not that he didn't keep himself busy (he did a ton of stock trading and other things, which is where Shannon's volatility harvesting comes from, etc.), but without any kind of external constraint or direction... I was shocked to learn that Claude Shannon died in 2001, because from how he drops out of all histories in the '50s-60s, I always sort of assumed he had died around then, relatively young.

(Probably a fair number of HNers could learn from Shannon's bad example: shipping matters!)


Huh? He did more than a lifetime of work, and happened to do it early in life. It sounds like he worked on whatever was interesting to him. If those things weren't interesting to others, so be it.

I find it very odd that you would call this a bad example and relate it to the relatively prosaic idea of "shipping". Shipping is good when you need feedback, but that's not the kind of work he was doing.


He did less than a lifetime of work because he did almost all of his work early in his lifetime. Then he selfishly betrayed his agreement with society and MIT by spending decades of tenure fiddling with his toys, and underachieved his potential. He probably could have done so much more with his talent if he had just had some more structure to his life. That is why he is a cautionary example - he was a victim of his own success. You can have all the talent in the world and casually revolutionize fields as different as genetics and electronics when you bother to finish something, but if you don't put in the work, nothing will happen and you will squander it all.

> Shipping is good when you need feedback, but that's not the kind of work he was doing.

No, Shannon was a genius and didn't need much in the way of feedback. Shipping is good because it requires you to complete the project and take it from 99% finished & only 1% useful to 100% finished & 100% useful, and it makes the work available to the rest of the world, instead of leaving it buried in your estate's papers for a journalist like William Poundstone to uncover decades later, long after it would have been useful had it been completed & published in a timely fashion.


This is one of the odder things I've read on the Internet... I think you're misunderstanding the nature of fundamental research.

Nobody knows in advance which ideas will be groundbreaking and which won't. Most people who have made big discoveries spent huge amounts of time on things that went nowhere, or "toys". I'm pretty sure there's at least an entire chapter of one of Feynman's books devoted to this.

People who make big breakthroughs tend not to be the kind of people who are self-consciously trying to make big breakthroughs. They just do what they want, guided by their own curiosity.

The whole point of tenure is to insulate you from pressure on what to work on. Maybe some people at MIT were critical of Shannon; I have no idea.

But that's the nature of creativity. You can't manage it. If MIT wanted to, they could get rid of the tenure system, but that would be a horrible idea.

If he didn't want to work on things he was "supposed to", so be it. The entire theory of information wasn't something he was "supposed to" work on either.

Your idea of 'shipping' also seems to indicate a misunderstanding of the research process.


I understand fundamental research just fine. I suggest you reread my comments and perhaps also read the bio and earlier materials on Shannon like _Fortune's Formula_. Finishing papers does not blight precious snowflakes like Shannon, nor would it have shattered his delicate psyche. Not finishing drafts of papers was not critical to his genius and creativity. Shannon is far from the first person to procrastinate and be passive-aggressive, and the remedies would have been the same for him as for anyone else if his environment had been less in awe of him and less wedded to Romantic ideas of genius, like those you espouse, in which the sacred solitude of the genius cannot be disturbed.

> Your idea of 'shipping' also seems to indicate a misunderstanding of the research process.

Publishing is pretty darn critical to the research process...


This sounds like a promising book. I read The Idea Factory a few years ago, which is a related and fantastic book about the history of Bell Labs.

Around that time I came across an interesting idea. I don't remember if it was in The Idea Factory or in material I read afterward, but it's related to one of the central ideas from this excerpt:

The sender no longer mattered, the intent no longer mattered, the medium no longer mattered, not even the meaning mattered: A phone conversation, a snatch of Morse telegraphy, a page from a detective novel were all brought under a common code.

The idea I came across is that:

    Shannon's information theory, devised at AT&T, indirectly led to the demise of AT&T's monopoly.
Before Shannon, there was no concept of the information-carrying capacity of a wire. And AT&T's monopoly was largely due to it having the biggest set of wires, which, as you can imagine, were expensive to deploy in the 19th/20th century. I remember that calling California from NYC was a huge achievement, precisely because of the number of physical wires that had to be connected. AT&T was the first to offer that service.

So I think the argument was that it made economic sense for a single organization to own all the wires, so it could maintain them with a set of common specifications and processes. But if you can reduce every wire to a single number -- its information-carrying capacity -- then this argument goes out the window. You can use all sorts of heterogeneous links made by different manufacturers and maintained by different companies.

(I'm not sure if this is historically accurate, but technically it sounds true.)
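(To make that "single number" a bit more concrete: for an analog line it's the Shannon-Hartley capacity, C = B * log2(1 + S/N). Here's a minimal Python sketch; the 3 kHz bandwidth and 30 dB SNR are just illustrative guesses for a voice-grade line, not historical AT&T figures.)

    import math

    def channel_capacity(bandwidth_hz, snr_linear):
        # Shannon-Hartley: C = B * log2(1 + S/N), in bits per second
        return bandwidth_hz * math.log2(1 + snr_linear)

    snr_db = 30                                # assumed ~30 dB SNR on a voice-grade line
    snr_linear = 10 ** (snr_db / 10)           # 30 dB -> a factor of 1000
    print(channel_capacity(3000, snr_linear))  # ~29,900 bits/s from ~3 kHz of bandwidth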

So my thought was that there's an analogous breakthrough waiting to happen with respect to cloud computing. Google and Facebook have information monopolies based on centralized processing of big data in custom-built data centers. Likewise, AWS has a strong network effect, and is hard to compete with even if you have billions of dollars to spend.

So my question is: Is it possible there will be a breakthrough in decentralized distributed computing? And could it make obsolete the centralized cloud computing that Google/Facebook/Amazon practice? Just like AT&T had no reason to be a monopoly after Shannon, maybe a technological breakthrough will be the end of Google/Facebook/Amazon.

Maybe this idea is too cute, and you can poke holes in it on a number of fronts, e.g.:

- Shannon's ideas were profound, but they didn't actually bring down AT&T. AT&T was forcibly broken apart, and there are still network effects today that make re-mergers rational.

- Centralized distributed computing will always be more efficient than decentralized distributed computing(?). I'm not aware of any fundamental theorems here, but it seems within the realm of possibility. (EDIT: On further reflection, the main difference between centralized and decentralized is trust, so maybe they're not comparable. Decentralized algorithms always do more work because they have to deal with security and conflicting intentions.)

But I still like the idea that a mere idea could end an industry :)

Relatedly, I also recall that Paul Graham argued that there will be more startups because of decreasing transaction costs between companies, or something like that. But it still feels like the computer industry is inherently prone to monopolies, and despite what pg said, the big companies still control as much of the industry as Microsoft did back in the day, or maybe more.


> So I think the argument was that it made economic sense for a single organization to own all the wires, so it could maintain them with a set of common specifications and processes. But if you can reduce every wire to a single number -- its information-carrying capacity -- then this argument goes out the window. You can use all sorts of heterogeneous links made by different manufacturers and maintained by different companies.

> (I'm not an electrical engineer, so I have no idea if this is all true, but it sounds plausible.)

You're describing the internet. :)

> Is it possible there will be a breakthrough in decentralized distributed computing?

The hard problem is security. Right now you have to trust Amazon et al with your data, which is not really what you want, but even that is better than having to trust Some Guy running a host out of his garage.

This isn't a real problem for static content. That you can just encrypt, throw it up on IPFS or similar and add federated authentication for access to the decryption keys.
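Roughly, that static-content path looks like the sketch below; Fernet and SHA-256 are stand-ins for illustration, not the actual IPFS API, and the key is simply where the federated authentication would plug in:

    import hashlib
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # access to this key sits behind the federated auth
    ciphertext = Fernet(key).encrypt(b"some static page or file")

    # Address the blob by the hash of its ciphertext: anyone can host or
    # replicate it, but only key holders can read it.
    content_id = hashlib.sha256(ciphertext).hexdigest()
    print(content_id)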

But data processing is something else.

There are people trying to solve that with encryption too, but it's hard, and to a large extent equivalent to making effective DRM, which is not a desirable thing for your problem to be equivalent to.

A different approach is to trust people who you actually trust. We're now at the point where a fast processor consumes a single-digit number of watts. So you can run your own server at home, and so can your friends and family, which allows you to pool capacity for load sharing and higher availability.

Then services become software you install on your trusted pool of servers.

There is no technical reason that can't exist, people just haven't done it yet (or they have but the future is not evenly distributed).


You quoted me before an edit -- what I meant was: Did things actually happen that way historically? That is, did engineers actually accept/design more heterogeneity in the physical network as a result of Shannon's ideas?

I'm talking about networks with heterogeneous physical specifications -- that happened long before the Internet. You have the same problem with just a plain analog circuit-switched phone system. No digital computers involved.

Static content isn't really distributed computing; it's more like networking. The type of breakthrough I'm thinking of is more along the lines of blockchain, homomorphic encryption, differential privacy, zero-knowledge proofs, etc.

In other words, different ways of collaborating over a network.

The thing that jumps out at me is that most of these technologies are fantastically expensive computationally.


You name some fantastic cryptographic technologies! I was an R&D engineer at a European telco, so maybe my understanding could be useful, though my ideas may not be shared by all:

>> did engineers actually accept/design more heterogeneity in the physical network as a result of Shannon's ideas?

I would say engineers have little to say about decisions in a huge company like AT&T. In a company with hundreds of thousands of employees, decisions are made 3 or 4 levels of hierarchy above them.

And those decisions, as usual in human matters, are mostly not rational from the company's point of view. More often than not, a decision is made between CEOs only, or because someone high in the food chain wants to climb higher, gain power over a related organisation, or collect kickbacks by pushing a manufacturer's technology.

That said, I think engineers were never much bothered with Shannon's work. It is quite arid, and when we read it now, more than 50 years later, we may read into it things that were not intentionally written in the original paper. I must confess that when I read Shannon's seminal paper out of curiosity in the 2000s, I was hugely disappointed.


> I would say engineers have little to say about decisions in a huge company like AT&T.

The AT&T of the Bell Labs days was not like a modern telco that just takes bids for everything from Cisco et al. AT&T designed their own phones, phone switches, computers, microprocessors, programming languages, operating systems, etc.

> I must confess that when I read Shannon's seminal paper out of curiosity in the 2000s, I was hugely disappointed.

That's always the way with seminal papers. The ideas are by now so integrated into modern society that we can't even imagine what it was like for that to be a revolutionary concept.


>> The AT&T of the Bell Labs days ... designed their own ...

True, I am old enough to have seen my own employer (France Telecom) doing the same. Computers: the SM80 and SM90 [0], and heavy involvement in other computers like the LCT3202. Operating system: a clone of Unix in Pascal! Phones indeed: the S63 and older phones. Phone switches: the E10 and the software of the 11F.

For me the '80s were the good old days; you describe well what came after that: "just takes bids for everything from Cisco et al".

[0] http://www.feb-patrimoine.com/projet/unix/sps7.htm


> Did things actually happen that way historically? That is, did engineers actually accept/design more heterogeneity in the physical network as a result of Shannon's ideas?

https://en.wikipedia.org/wiki/Time-division_multiplexing

https://en.wikipedia.org/wiki/Packet_switching

Shannon's paper was 1948. TDM was in commercial use in the 1950s. Even ARPANET was in the design phase by the 1960s.

The thing about information theory is that you can use it in practice without understanding all the math. TDM existed in the 19th century. You can even study human language in terms of information theory, but language predates 1948 by a million-odd years. And we still don't understand all the math -- information theory is related to complexity theory and P vs. NP and all of that.

As to whether heterogeneous networks would have dethroned AT&T, we don't know, because the government broke them up just as packet-switched networks were becoming mainstream. Moreover, the telecommunications market even now is not a model for how market forces work. You still can't just go to Home Depot, pick up some fiber optic cable, connect up your whole neighborhood, and then go collectively negotiate for transit.

It's a lot easier, regulation wise, to go into competition with AWS than Comcast. And AWS correspondingly has a lot more real competitors.

> Static content isn't really distributed computing; it's more like networking.

It's distributed data storage, but yes, the solutions there are very similar to the known ones for networking.

> The thing that jumps out at me is that most of these technologies are fantastically expensive computationally.

The solutions we use for networking and data storage trade off between computation (compression, encryption) and something else (network capacity, storage capacity, locality), which are good trades because computation is cheap relative to those things.

It's much cheaper computationally to send plaintext data than encrypted data, by a factor of a hundred or more. But the comparison isn't between encryption and plaintext, it's between encryption and plaintext that still has to be secured somehow, e.g. by physically securing every wire between every endpoint. By comparison the computational cost of encryption is a bargain.

But if you have to use computation to secure computation itself, the economics are not on your side. Processor speed improvements confer no relative advantage and even reduce demand for remote computation as local computation becomes less expensive.

Where those technologies tend to get used is as an end run around an artificial obstacle, where the intrinsic inefficiency of the technology is overcome by some political interference weighing down the natural competitor.

The obvious example is blockchain, which would be insurmountably less efficient than banks if not for the fact that banks are so bureaucratic, heavily regulated and risk averse. But if banks start losing real customers to blockchain they'll respond. Hopefully by addressing their own failings rather than having blockchain regulated out of existence, but either way the result will be for blockchain to fade after they do.


>The obvious example is blockchain, which would be insurmountably less efficient than banks if not for the fact that banks are so bureaucratic, heavily regulated and risk averse. But if banks start losing real customers to blockchain they'll respond. Hopefully by addressing their own failings rather than having blockchain regulated out of existence, but either way the result will be for blockchain to fade after they do.

That's assuming it is technically possible for them to be cheaper and faster and easier to use without blockchain technology. My non-expert understanding is that it's not possible.

Also, from what I understand, a key reason financial institutions are so interested in the blockchain is that it helps greatly with the trust and settlement problems that at present take so much work and expense.


A book well worth reading about this time is The Dream Machine [0] (all about the people leading up to ARPA and then PARC). Shannon's work had a massive influence on everyone in computers at the time.

[0]: https://www.amazon.com/Dream-Machine-Licklider-Revolution-Co...


> So I think the argument was that it made economic sense for a single organization to own all the wires, so it could maintain them with a set of common specifications and processes.

Well, sort of. At least insofar as The Idea Factory describes it, AT&T was able to maintain its monopoly for a large number of reasons: first-mover advantage, vertical integration (through Western Electric), wartime value, political connections, public perception, and so on. It is true, though, that once people realized a monopoly was not necessarily the best state of the industry, the winds started shifting.

> So my question is: Is it possible there will be a breakthrough in decentralized distributed computing?

One could make the argument that the idea of the blockchain and the rise of Bitcoin and Ethereum embody this (:

> Centralized distributed computing will always be more efficient than decentralized distributed computing (?) I'm not aware of any fundamental theorems here but it seems within the realm of possibility.

The speed of light comes to mind, which places a hard minimum on the speed of network communications.


I believe you are thinking of The Idea Factory.


Oops thanks, corrected. The Genius Factory was also a good book, about a crazy effort to make a Nobel Prize sperm bank. :) If I recall correctly, that book included a distant relative of Edward Teller, or possibly it was some other Bell Labs luminary.


> many of Shannon’s colleagues found themselves working six days a week.

Hmm. Was the 5-day work week common by 1940?

I have a notion that we went from only Sunday off, to Sunday plus Saturday afternoon off, then Sunday plus all of Saturday off. Not sure when that happened, or where it started.


What jumped out was the mere mention of the draft (WWII). I even looked it up.

The narrative my WWII relatives give is that everyone enlisted. If you didn't, there was shame that required an explanation.


~60% of US military personnel in the WWII time period were draftees. https://www.nationalww2museum.org/students-teachers/student-...

Doesn't refute the idea that shame might have been involved, but...


My history could be very distorted, but the "war is bad and to be avoided" idea came to life (in my lifetime) during the Vietnam years.

I can't imagine the US population rallying around a new war like they did in the '40s.


Only about 25% of US military personnel in Vietnam were draftees. That suggests there were 2 distinct trains of thought on "war is bad".


We saw similar rallying in the year following 9/11 and the initial invasion of Afghanistan. With the invasion of Iraq in 2003, it became more complicated very quickly.


If something akin to a "Pearl Harbor" struck us today I could totally see us rallying around a new war.


Yes and no. You have to remember that the draft started some time before the US was actually at war. A lot of people did volunteer, before the declaration of war and after; but many of them knew that the draft was waiting for them if they didn't.


The Idea Factory does note that there were Bell Labs engineers who were chided on the street by strangers for not being overseas.


Hmm, ieee.org still has Flash ads

[Edit]

Sorry, maybe not a Flash ad. Upon further inspection it was an iframe whose contents were blocked by my ad blocker, presenting me with that grey box in Chrome. I'm not going to disable it to check what it actually is.


It looks exactly like AdFly at first glance. I thought I was being directed to a Minecraft mod.


That's "professional associations" for you



