In NDN, all data is signed by data producers and verified by the consumers, and the data name provides essential context for security.
Centralizing security in the network's architecture will create an intractable problem: certain parties will still want to impose their ability to eavesdrop on the data. Therefore there cannot be any real security in such a centralized security design.
The in tempore non suspecto in which it was still possible to roll out security jokes such as SSL is over. Nowadays, 95% of the world's population (and their governments) will refuse to adopt any centralized security design, because they do not trust it.
I'm wondering how the proposed security model differs from current practices. What you've quoted doesn't seem different. After all, on the web today, data producers send a certificate which the client "verifies". At least the long list of "CA" names in my browser purports to be "verified" by an "authority".
The part that got my attention was the criminals having fooled users into revealing passwords using certificates purchased from CAs (at a total cost of $150,000).
That seems to mean the current CA system is broken. It's not a big surprise that a centralized security concept is in NDN--Verisign is one of its main supporters.
I think this is the first time I've heard of NDN, and I've only spent a short while reading about it. My first impression is:
1) It involves addressing chunks of data rather than hosts that hold those chunks. Somewhat like using URLs at the network level. So instead of IP/HTTPS, where the network learns of host communications without learning of specific data exchanges, NDN would reveal those specific data exchanges to the network.
2) The mandatory "signature, coupled with data publisher information, enables determination of data provenance..." aspect would cut both ways. For we would be data consumers in some contexts and data producers in other contexts.
I hope I read something to dispel my concern, but I worry about increased metadata exposures and reduced ability to achieve beneficial levels of privacy/anonymity.
As usual, people ranting on HN without bothering to read what they're ranting about.
You're assuming centralization. All that's in the specs are slots for a locator and a signature value. It's left up to the application to define trust semantics, be it PKI, WoT, or whatever.
That is why I'm of the mind that both canonical identity and permission need to be managed in a globally distributed fashion with blockchain. In order to facilitate the level of data transfer that would need to happen between devices to accommodate the distribution of global scale permission data, networking does need to change fundamentally.
I envision my fridge having a permission entry saying that my neighbor can use my car tomorrow, and also that a person I will never meet has purchased a ticket for a flight to Argentina next week. That data is constantly being shared on a mesh network with my car and everybody else's I drive down the road next to, across all of my devices and those of every other participant. No government or corporation should be able to have the whole set of data, and none of it should have a very long half-life.
The only way we can move forward to a truly connected version of the future is with trust, and the only way to have a truly trusted security model is to have it be globally distributed. NDN may or may not be the next version of networking to support it, but I'm rather confident that TCP/IP isn't going to be the way we get there.
As much as I like the idea that all communications should be confidential, your response doesn't refute the original poster's point, which is that any architecture that is impervious to eavesdropping would not be adopted by parties who want to impose their desire to eavesdrop on it, which is (according to the original poster) 95% of the world population (and their governments). (citation needed)
> I hope fighting that strawman you built was fun, but I really don't know where the BS snark came from.
The BS snark which started it was your still-unsupported assertion that “SSL is a security joke”.
> Well written code is always better than the crap that OpenSSL was (crap as admitted by most of security experts and groups working with it).
It's easy to criticize OpenSSL and, well, every other SSL library which has had problems. It's a lot harder to replace it and actual security experts have thus far chosen to overhaul OpenSSL rather than trying to replace it from scratch. I trust the judgment of the OpenBSD and Google security teams over your assertion that it's so easy to replace.
One of the earliest attempts to replace the TCP/IP model (or rather the lower layers of the ISO-OSI model) was Asynchronous Transfer Mode (ATM). Despite being a well-intentioned idea, it failed to see real-world usage because of its complexity.
Along the way many developments happened. People learned to live and work with IPv4. Even IPv6 hasn't picked up, despite solving some important problems. So when it comes to updating the core networking infrastructure, I don't think TCP/IP is replaceable. It just works very well now -- real-time chat, high-throughput data links, time-tested code libraries, and a vast amount of knowledge so you can build apps fast, and all that.
As I understand it, what this 'Named Data Networking' technology proposes is to replace the IP addressing scheme with names. I'm not sure the whole internet backbone infrastructure would change its networking strategy now.
The TCP/IP addressing format is very structured, and that's its strength. IMHO that's actually how communication should take place; not with names that can have high variation in format.
Although mostly of historical interest now, I think the basic ideas are still sound. But the problem (as we learned the hard way) is that deploying a new network architecture is mainly a political problem, not a technical one.
It is not right that ATM failed to see real-world usage.
ATM was the backbone of the German infrastructure, run mainly by German Telekom, for years. It provided very good service, especially for telephony (ISDN). Germany basically had the best telephony network in the world.
But the problem is that IP traffic does not fit well into ATM's fixed 53-byte cells. That is why all the backbones are being replaced with so-called Next Generation Networks (NGN), which are basically pure IP traffic; everything will run on top of IP, no longer in parallel to IP. That basically means moving to VoIP in the backbone and consequently also at the consumer end.
ATM failed to see the usage predicted by its proponents in the '90s. The reason is that most such statements about ATM as the new unified network for everything were marketing bullshit disconnected from technical reality.
ATM is essentially a circuit-switched technology sitting somewhere between L1 and L2. It allows efficient QoS for different services sharing the same wire, at the expense of ludicrous framing overhead, and lets you build networks of such channels with reasonably simple and fast switches. The QoS part is mostly irrelevant today, as faster interfaces have made the problem significantly easier to solve, and so is the simple and fast switching. ATM's orientation towards end-to-end circuit-switched channels is what allows fast switches, but it also requires an external control plane that builds and tears down the virtual channels, which I think is the major reason why ATM (and OSI in general) didn't catch on: with IP you just send a packet with a destination address down the wire, whereas with ATM you have to establish a connection first (using something essentially out-of-band and centralized).
In the end, ATM is widely used today, but mostly as a pre-existing way of handling QoS and framing on top of some unrelated but relatively slow L1 technology (ATM is the first higher-level layer of both UMTS and xDSL).
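To put a rough number on that "ludicrous framing overhead", here's a quick back-of-envelope calculation (just the fixed cell tax plus AAL5 for a full-size IP packet):

    import math

    # Every 53-byte ATM cell is 48 bytes of payload plus a 5-byte header.
    CELL_PAYLOAD, CELL_HEADER = 48, 5
    cell_tax = CELL_HEADER / (CELL_PAYLOAD + CELL_HEADER)
    print(f"fixed ATM cell tax: {cell_tax:.1%}")  # ~9.4%, before AAL5 overhead

    # A 1500-byte IP packet over AAL5 (8-byte trailer, padded to whole cells):
    ip_packet, aal5_trailer = 1500, 8
    cells = math.ceil((ip_packet + aal5_trailer) / CELL_PAYLOAD)
    wire_bytes = cells * (CELL_PAYLOAD + CELL_HEADER)
    print(f"{ip_packet}-byte IP packet -> {cells} cells -> {wire_bytes} bytes on the wire")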
Not just German Telekom; several other operators around the globe deployed ATM in the '90s for ISDN services. However, after just a few years, most companies started dropping ATM in favour of IP-based networks. By "it failed to see real-world usage", I meant it failed to sustain itself in the real world.
Note that 9% of internet users in the US are IPv6-enabled. Germany is over 11%. Belgium is almost 30% (of course, due to its smaller population, that's fewer in absolute host count).
How many IPv6 users that is in millions is left as an exercise for the reader.
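For the lazy reader, a rough back-of-envelope sketch; the percentages are the ones quoted above, while the internet-user counts are ballpark assumptions for illustration only:

    # Assumed internet-user counts in millions (rough, illustrative figures only).
    internet_users_m = {"US": 280, "Germany": 70, "Belgium": 8}
    ipv6_share = {"US": 0.09, "Germany": 0.11, "Belgium": 0.30}

    for country, users in internet_users_m.items():
        print(f"{country}: roughly {users * ipv6_share[country]:.0f}M IPv6-enabled users")
    # US ~25M, Germany ~8M, Belgium ~2M -- Belgium's higher percentage is still
    # fewer users in absolute terms, as noted above.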
Things are moving very, very fast - lots of large SPs have bumped their numbers within this year from low-to-mid single digits to nontrivial double digits, and lots more are in the pipeline.
All major CDNs support it, helping IPv6-enable thousands of sites that don't run IPv6 on the server itself. I'm saddened by the fact that the HN site, being a Cloudflare customer, has not flipped the switch - there's really zero excuse today. (http://blog.cloudflare.com/eliminating-the-last-reasons-to-n...)
Here's another data point, from my home gateway (I'm in the remaining 70% of folks in Belgium who don't yet have IPv6, so I am using a Hurricane Electric tunnel - and Vlan50 is the IPv4-only internet connection, so that counter shows IPv4 user traffic + IPv4 tunnel traffic - you can count it as "aggregate").
Yes but <20% adoption 18 years after the protocol was designed seems a bit slow, especially given the pace of overall technological change in this century.
Figure 1 in http://www.census.gov/prod/2013pubs/p20-569.pdf shows that the first time they recorded household Internet adoption was in 1997, at 18%. That's 16 years (actually a bit more, because my understanding is that rfc791 documents already-running code), and still under 20% - so by the same metric, the Internet is a failure!
Of course, then we see the rest of the S-curve: it doubled in the next three years, then slowed down, and the last measurement is 71% of households in 2011. So almost 30% of US households had no internet at all in 2011.
0.05% on 7 September 2008
0.09% on 31 August 2009
0.15% on 30 August 2010
0.34% on 1 September 2011
0.74% on 30 August 2012
1.84% on 1 September 2013
4.42% on 31 August 2014
Yes, but I don't think comparing Internet adoption with IPv6 adoption is terribly valid.
The first was a radically new technology and it took years for people to figure out how to best make use of it.
IPv6 was supposed to be a purely technical improvement to deal with some deficiencies of IPv4, notably address space limitations. It should mostly concern only network and systems administrators and systems software developers and be largely transparent to end users.
It's interesting that the two seem to have similar growth curves, but given the very different audiences involved I'm not sure what to make of that observation.
Certainly if you asked knowledgeable people in 1996 how long it would take to achieve near 100% IPv6 adoption I doubt many would have predicted 20 years.
On the other hand in 1981 I suspect few would have predicted that a technology developed by DARPA would be used by people in 2000 to buy books and manage their bank accounts.
> It should mostly concern only network and systems administrators and systems software developers and be largely transparent to end users.
I'm no network engineer, but as I understand it, to support IPv6, companies need to replace their switches. I think it's fair to say that there are literally millions of switches that need replacing. We are talking billions of dollars in total investment. I really don't see how it's surprising that this will take a while. Billions of dollars don't grow on trees; companies need to earn the money before they can spend it.
At the same time, because IPv6 is used less frequently, it is more expensive. The price of electronics is determined by volume: the more you produce, the cheaper it gets. This means IPv6 has a price disadvantage relative to IPv4, which is especially noticeable in the early years (of ~0.1% adoption). A device produced at only 0.1% of the volume of the most popular devices will be considerably more expensive.
This is, in part, why we see an exponential adoption curve: the more people buy IPv6 equipment, the cheaper it gets, and the cheaper it gets, the more people buy it. This feedback loop helps drive the exponential adoption rate.
I'm not saying everyone will end up using IPv6, although I think it is likely, but I'm saying it should be no surprise that replacing billions of dollars worth of network equipment takes time.
Actually, most switches are just fine and don't need replacing.
IPv6 is a layer-3 protocol; most "normal" switches operate on layer 2 (the Ethernet level, which stays the same and, in the best case, neither knows nor cares what goes on in the layers above).
These can stay, and most wouldn't even need to be reconfigured.
As for layer-3 switches (the ones that do some amount of routing too), most "brand-name" models purchased in the last 10 years should support IPv6.
Most of the hardship, in my experience, comes from the apps, especially the home-grown ones.
Let me back this up with an anecdote from my experience dual-stacking the websites at my employer (a curious reader might notice that cisco.com, download.cisco.com, software.cisco.com, tools.cisco.com, and cisco-apps.cisco.com are all dual-stack; the last one is interesting because it hosts the ordering portal, with IPv6 being a transport for a non-trivial portion of the hardware orders).
While the main cisco.com has been dual-stack since the v6 launch, the rest of the properties required more work, because there are a bazillion different apps there, so they were launched only about a year ago.
And yet, despite all the testing, once we went live we realized that one bug had slipped through. The name of the error was especially ironic, and the bug, while in a somewhat infrequently used portion of the site, was very visible to IPv6-enabled users.
Back then the percentage of IPv6 users hitting the erroring function was low enough that we did not roll back the entire set of changes; we just had the fix developed and deployed, and the whole scenario was relatively painless. (Aside from some semi-friendly beating-up during the IPv6 working group at a RIPE meeting, where the error showed up vividly since we had an IPv6-only pilot WiFi SSID alongside the usual dual-stack one.)
If the same story were to happen at 50% IPv6 adoption? That would hurt way, way more.
The moral:
If you're a big shop - start auditing your apps now, even if you don't think you'll need it until 3 years from now. If you're not sure - there are a bazillion resources and people available to help, both for free and for money.
If you're a small shop and don't have any apps - RTFM, assess, and JustDoIt(tm), in a staged manner of course, all disclaimers apply, etc. The sooner you get a (small) chance to make your mistakes while taking your first steps with IPv6, the cheaper those mistakes will be. Of course it's best to avoid them, but still.
Ok, I'm officially off my "IPv6 soapbox" on this thread, hopefully these were useful to some folks. ;-)
> Actually, most switches are just fine and don't need replacing. IPv6 is a layer-3 protocol; most "normal" switches operate on layer 2 (the Ethernet level, which stays the same and, in the best case, neither knows nor cares what goes on in the layers above).
But these can't do routing, I assume. I think I may have misspoken, and said "switches" when I should have said "routers".
If routers from the last 10 years all support IPv6, that's probably part of the reason that IPv6 access to Google from the US is at 10%.
Wait. Germany's over 11% - I'm probably counted as one of them.
My anecdote: I'm on a DS-Lite connection here, which is forced on new customers of this (big) ISP. For months the (mandatory) hardware froze whenever the prefix changed or was reannounced. Basically a (silent) dead connection every 2-3 days, for a looong time. Known problem, nothing that could be done about it. But... that's the past and it's solved.
Currently? I cannot reach IPv6 addresses. Read that again: DS-Lite, and I cannot reach any IPv6 addresses, while my IPv4 traffic (which... is tunneled) works fine. I tested with quite a few sites, mostly ipv6.google.com.
Customer support says "they're out of capacity" and wants to give me a normal/default IPv4 connection again; they claim they won't even look into this problem at this point. "Won't Fix", basically.
So I do wonder what that 11% means and whether I'm really just an outlier - or whether more people like me exist and maybe don't even KNOW that they are supposed to be able to use IPv6 and cannot?
EDIT: just realized we did talk some 249 days ago. Did you stay on the same ISP and DS-lite, and have an IPv6 prefix that does not allow you to ping6 towards ipv6.google.com ?
Time permitting I'll be happy to help debug this. Let's coordinate over email if you are interested, and the above assumption about your network setup is correct.
Yes, but considering that the intent of IPv6 is to replace IPv4, not just to work alongside it, I would still say that IPv6 hasn't reached critical mass. Things have started moving fast in the last 2 years, but IPv4 still carries 96% of the world's data. It's interesting that adoption is 30% in Belgium and 11% in Germany, but in India, the UK, Australia, China, Canada and many more countries it's less than 1% (actually typically below 0.2% in most of these countries): https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...
Thanks for all the good information in your comment!
Also, another factor which will kick in as more and more folks get IPv6 is that maintaining both IPv4 and IPv6 is a bit of a pain.
Considering that you can pack the entire IPv4 internet into a single /96 IPv6 prefix (that's 4 billion times less than the address space available for a single subnet), making your infrastructure IPv6-only and fronting it with a stateless IPv4->IPv6 translator (or SLB64 boxes) becomes a more and more interesting option.
This is the content side. On the eyeball side, there are also wins to be made by simplifying the infrastructure by running IPv6-only, and running IPv4 atop that as a service.
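To make the /96 point concrete, here's a small sketch of the stateless mapping (using the well-known NAT64 prefix 64:ff9b::/96 from RFC 6052 as an example; any /96 of your own works the same way):

    import ipaddress

    NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

    def v4_to_v6(v4: str) -> ipaddress.IPv6Address:
        # Embed the 32-bit IPv4 address in the low 32 bits of the /96 prefix.
        return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(ipaddress.IPv4Address(v4)))

    def v6_to_v4(v6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
        # Recover the original IPv4 address from the low 32 bits -- no state needed.
        return ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)

    addr6 = v4_to_v6("192.0.2.33")
    print(addr6)            # 64:ff9b::c000:221
    print(v6_to_v4(addr6))  # 192.0.2.33

With a 96-bit prefix there are exactly 32 bits left over, one for every possible IPv4 address, which is why the whole IPv4 internet fits.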
IPv6 is about 15 years old. In technology terms that's ancient: IPv6 is roughly as old as AJAX, and much, much older than Ruby on Rails. And IPv6 solves a major, catastrophic problem with IPv4. The fact that it is still seeing only very slow uptake is a testament to its weakness, I'd say, even if it has some otherwise favorable aspects.
I wonder if it's just my naiveté, but it sounds like this is more likely to produce an X.400 than an SMTP.
The vision seems pretty grand: an all-encompassing, wholesale replacement of the entire networking stack, rather than a small, easy-to-implement, iterative approach. It seems that the biggest thing the TCP/IP folks got 'wrong' was the 32-bit address space, and even that small change is taking forever to be deployed.
Yes you could certainly improve TCP/IP but is it going to be 10X better?
If the "CDN" part of it proves to be sufficiently useful, this could be deploy layered on top of IP, or wrapped in UDP or even a TCP connection. Capable clients would then "just" need a means of discovering the nearest capable router that'll let it tunnel. And while IPv6 also can easily be tunnelled, the benefits of doing so are much smaller: IPv6 doesn't give you that much if your host still has an IPv4 address too.
But if this system lets your ISP drop in a new router or two that suddenly can know just by looking at packet headers that it is allowed to returned data from a local cache instead of passing the data on to the server and waiting for a response, then it could have sufficient benefits as soon as a couple of large bandwidth hogs starts supporting it. E.g. if Netflix or Youtube made use of it
That potentially a pretty different proposition.
Then again, the question is whether they need to re-architect the lower level protocols to do this, instead of defining a protocol on top of TCP or UDP that services that are actually likely to benefit can implement.
> It seems that the biggest thing the TCP/IP folks got 'wrong' was the 32 bit address space, and even that small change is taking forever to be deployed.
I guess you are alluding to IPv6 here, and IMHO IPv6 provides quite a large number of changes from vanilla IPv4; it is not just a much larger address space...
That is absolutely true, but the main driver behind the replacement is the increased address space. None of the other changes seem to have been a driver at all.
So as far as consumers go, IPv4 is 'good enough', and if and when IPv6 finally takes over, it will remain the de facto worldwide networking protocol powering the internet for a very, very long time.
Cisco attempting to drive a wedge between IPv4 and IPv6 in the midst of this (very, very slow) transition seems like a very strange move to me: it's almost certainly bound to fail, or to end up not replacing IPv4/IPv6 but becoming a transport layer underneath them (killing most of the advantages it would offer in the process).
And that's besides trying to replace TCP, which would require rewriting or adapting virtually every computer program active on the net today.
I don't know that they're trying to drive a wedge between IPv4 and IPv6. I would think that even NDN's supporters see it as a very long-term, post-IPv6 development.
I am surprised however to see Cisco supporting this. It's one thing to have some academic networking specialists writing papers about NDN, but for a major corporation to devote resources to a 10+ year development project with an unproven architectural basis strikes me as odd.
Cisco has been involved with it since at least 2012. They actually wrote software using the protocol as well; it was for video conferencing, if I recall correctly.
The "forever to be deployed" part is a crucial observation. Perhaps research into how to get the Internet community to adopt new protocols is more relevant than the protocols themselves. In other words: how can we speed up the adoption of IPv6?
Maybe it's just my nature to be guarded about grand visions, but does this idea really have a good chance of succeeding? Will it displace TCP/IP given the extent of IP deployment around the world?
No doubt there are people here who are network experts who can give a more learned review than I can after quickly reading the overview on the website.
Though I'm hardly a networking expert I did some contract work implementing NDN simulations so I probably know a bit more about it than most people.
I'm highly skeptical that data entities provide a better (or even adequate) base abstraction for networking than network addresses associated with physical machines (ie. hosts, routers, etc.).
It seems to me that the problems NDN wants to solve, primarily content caching, would be better addressed at a higher abstraction level, as I think CDNs already do. This is my opinion, and I'm certainly open to being proved wrong by more qualified viewpoints.
As for the likelihood of NDN ever (a word I would almost never use, especially in regards to technology) replacing TCP/IP, it seems hard to believe given the extreme slowness with which IPv6, a comparatively minor change, is being adopted.
I am no network expert, but I guess their idea is to optimize the traffic in-between endpoints. The last segments (you <-> ISP) would still be using TCP over IP.
NDN was designed around today's most common Internet use cases. If you think about it, most of the time we are requesting content from a specific place, but we don't really care where the server is located, what address it has, etc.; all we care about is the content and whether it comes from the intended (trusted) source.
Assuming that the same name always references the same data gives you an edge, because routers are now aware of the data, so they can cache content locally, and when someone else requests the same thing they can just return what they have without having to ask upstream.
This gives an edge in certain use cases; probably the biggest ones would be YouTube, Netflix, etc. On a TCP/IP network a lot of effort goes into providing a great user experience through CDNs, anycast routing, and other tricks; with NDN the network itself is already friendly to this and makes CDNs unnecessary, as long as you design your protocol to exploit the network's properties. Another nice advantage shows up on lossy networks such as wireless ones: when you request content that travels many hops and the response is dropped, caching means it can be resent from the point where it was dropped, without going all the way back to the source. This also helps when the consumer is on the move.
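As a toy illustration of that caching behaviour (a deliberately simplified sketch, not the real NDN forwarder; the names and the upstream-fetch callback are made up for the example):

    from typing import Callable, Dict

    class ToyNdnNode:
        """A drastically simplified NDN-style forwarder: a content store keyed by name."""

        def __init__(self, fetch_upstream: Callable[[str], bytes]):
            self.content_store: Dict[str, bytes] = {}  # cached Data, keyed by name
            self.fetch_upstream = fetch_upstream        # ask the next hop / producer

        def on_interest(self, name: str) -> bytes:
            # Satisfy the Interest from the local cache if we've seen this name...
            if name in self.content_store:
                return self.content_store[name]
            # ...otherwise fetch it once upstream and cache it for later consumers.
            data = self.fetch_upstream(name)
            self.content_store[name] = data
            return data

    producer = lambda name: f"payload for {name}".encode()
    node = ToyNdnNode(producer)
    node.on_interest("/videos/cats/episode1/segment0")  # fetched upstream, then cached
    node.on_interest("/videos/cats/episode1/segment0")  # served from the content store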
NDN also has some nice properties: if, for example, a certain name is set up in such a way that it can be shared by multiple parties, then it is possible to implement chat without needing any server, which is quite cool.
Despite these benefits, NDN is a double-edged sword: while it makes publishing content to many people simple, it makes certain tasks harder. For example, implementing something like ssh over it might be a bit difficult. In fact, anything that benefits from pushing data or requests (a simple example from one of the projects: controlling lighting infrastructure) will be complex. It is still possible to implement, but it is harder to do than over TCP/IP.
As for adoption, it is hard to say. It definitely won't be easy. The protocol is not a drop-in replacement for TCP/IP; everything needs to be reinvented. You could possibly convert existing applications to work with it, and in fact it should be possible to carry TCP/IP over NDN, but then you lose all of the nice properties of the protocol. Some things would work better, for example stripping out TCP/IP and implementing HTTP directly on top of NDN; some people have already created an NDN<->HTTP gateway.
On the other hand, it could be extremely beneficial for specific use cases we are struggling with, like multicast video. One strong point is that the protocol can be implemented on top of TCP/IP, and in fact that's how the NDN testbed is (or at least was when I was there) implemented. The adoption plan is to have a network built on top of TCP/IP and, as it grows big enough, eventually let the TCP/IP layer below collapse so that NDN takes its place. That of course assumes NDN will handle all of our needs well enough to make TCP/IP unnecessary; otherwise it'll just be an overlay network.
They are also trying to avoid other mistakes of IPv6 by concentrating on making it attractive not just technically but also from a business perspective. That's why they are also partnering with vendors.
Source: I was actually involved in NDN between 2010 and 2012, and I know people mentioned in the article in person. One of my projects was video streaming over NDN.
Most of the time we do care about (not!) exposing information to third parties. This even applies to generally lower-importance scenarios such as watching YouTube and Netflix videos.
ISPs are of special importance, because the exposures can be concentrated there. We make use of end-to-end encryption with specific servers in order to reduce the information that ISPs (and others) acquire. How will we hide interest data from ISPs in an NDN world?
You simply encrypt the data; the protocol even has support for marking encrypted data [1].
That said, NDN does not impose how you do it; it is left to the application.
Now, if the data is only end-to-end, you would probably do something similar to TLS. If the data is supposed to be accessible by multiple users, then you encrypt it with a generated key, and then encrypt that key using the public keys of the intended recipients.
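A rough sketch of that multi-recipient pattern using generic primitives (AES-GCM for the content, RSA-OAEP to wrap the key; these are stand-ins for illustration, nothing NDN-specific):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.hazmat.primitives import hashes

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Hypothetical recipients, each with their own RSA key pair.
    recipients = [rsa.generate_private_key(public_exponent=65537, key_size=2048) for _ in range(2)]

    # 1. Encrypt the content once with a freshly generated symmetric key.
    content_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(content_key).encrypt(nonce, b"named data payload", None)

    # 2. Wrap that key separately for each intended recipient's public key.
    wrapped_keys = [r.public_key().encrypt(content_key, oaep) for r in recipients]

    # A recipient unwraps the key with their private key, then decrypts the data.
    recovered = recipients[0].decrypt(wrapped_keys[0], oaep)
    plaintext = AESGCM(recovered).decrypt(nonce, ciphertext, None)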
I believe what they're proposing is largely the same as, if not identical to, Content-Centric Networking from Xerox PARC.
The central idea is:
Instead of asking one particular server for some content, just ask for the content by name.
Since the content may come from any handy server, it is up to the receiver to validate that it really is the content they requested. Nothing about this implies the evil "centralized security model" people are going on about. Sure, some bad actor could weasel it in later, but it's not there now.
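A rough sketch of what that receiver-side check could look like (Ed25519 is just a stand-in signature scheme for illustration; how the consumer comes to trust the producer's public key is whatever trust model the application picked):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Producer: sign the name together with the content, so the signature binds
    # "this name" to "this payload" no matter which cache later serves it.
    producer_key = Ed25519PrivateKey.generate()
    name, content = b"/videos/cats/episode1/segment0", b"...segment bytes..."
    signature = producer_key.sign(name + content)

    # Consumer: fetch (name, content, signature) from *any* handy node and verify
    # against the producer's public key before trusting the bytes.
    public_key = producer_key.public_key()
    try:
        public_key.verify(signature, name + content)
        print("content is authentic for this name")
    except InvalidSignature:
        print("reject: content does not match the requested name/producer")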
Whatever Cisco plans to do, I won't trust it not to have a back door. After all, Cisco is the author of the IETF protocol for "lawful intercept" in routers, and if I'm not mistaken they also have a pretty highly placed co-chair at the IETF.
Yes, and the contents can be cached by the routers, so to get a piece of a video you don't need a connection all the way to the source of that video but only to the nearest router that caches it.
That may make sense for content that has few sources and many users, like video (although I think CDNs mostly already solve this problem).
I don't think it makes much sense for interactive data and hence I don't think it's a good basis for implementing all networking protocols.
This is the old Content Distribution Network.
It does work -- provided you can easily identify a resource in the network.
URIs are hierarchical, but they do not follow the network connection hierarchy.
Also, now every router needs to be able to track all the streams that go through it.
In short, everything explodes when you try to scale the thing.
I note that patents/"Intellectual Property" wasn't mentioned in the article at all. I suspect, based on the participants mostly being corporations, that the whole thing will be covered by patents.
I think TCP/IP, being non-patented, slipped by the major corporations. A protocol anyone can implement, and where the "client" and "server" are pretty hard to tell apart, is disadvantageous to market incumbents and to surveillance agencies. For instance, nobody can charge fees for implementing TCP/IP. Nobody can license content servers. Nobody can accurately attribute a packet to a legally responsible entity ("one neck to wring").
The protocol to replace TCP/IP will be patent-encumbered, it will make a complete distinction between "client" and "server", it will be centrally routed, it will be subject to surveillance, and servers will be licensed and costly. If NDN doesn't do some or all of these things, it's already dead.
Someone who knows more than me; does this intend to complement TCP, or replace TCP? If the latter, how would one use NDN to implement a system that naturally fits the "conversation" model of TCP, e.g. an MMORPG?
I don't think you would use TCP for an MMORPG; UDP is more common in games because a dropped frame here and there doesn't matter to most games, and it's worth the lower overhead.
What could possibly go wrong? It's not like the whole internet as we know it depends at some level on TCP/IP and there are (probably) billions of lines of code depending on it.
Umm... how about we NOT replace TCP/IP with anything, because it may be the only well-designed thing on the Internet that actually works? If you want an impossible super-hero project to work on, try replacing HTTP instead - at least you'd actually be solving a problem.
The people involved in NDN also had a huge part in making the current TCP/IP work.
For example Van Jacobson, who started the idea and made huge contributions, one of them being the implementation of congestion control in TCP/IP. Some people don't know this, but in the late '80s the Internet actually collapsed under its traffic and was practically unusable until his fix.
Lixia Zhang, for example, has been working on TCP/IP since 1981; she was responsible for the Resource ReSerVation Protocol (RSVP), which is implemented by almost every major router vendor today for Internet resource management and traffic control applications.
It's a text transfer protocol; you can even build applications with text-only clients and servers. One only needs echo, bash, and netcat to make a server and a client.
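For instance, here's the whole exchange in a few lines of Python sockets (example.com is used purely as a convenient test host), just to show that the request and response really are plain text:

    import socket

    with socket.create_connection(("example.com", 80)) as s:
        s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        response = b""
        while chunk := s.recv(4096):
            response += chunk

    # Print just the (plain-text) response headers.
    print(response.decode(errors="replace").split("\r\n\r\n", 1)[0])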
Something can be secure enough that the computational power required to break it is probably not available to various actors up to and perhaps including nation-states (e.g. RSA), yet still well short of the "requires more energy to compute than is available in the visible universe" benchmark (e.g. AES, probably). Yet both could still be regarded as "secure".
Also, protection against different kinds of attacks. For example, we can consider SHA-3 more secure than SHA-2 because it's not vulnerable to length-extension attacks. Likewise, a system which protects against passive attacks is secure against passive attacks, but a system which protects against both passive and active attacks is more secure.
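To make the SHA-2/SHA-3 point concrete, here's a small sketch of the construction that length extension breaks (the tags are merely computed here; actually forging one involves reconstructing SHA-2's internal state from a known tag, which is the standard length-extension trick):

    import hashlib, hmac

    secret, msg = b"server-side-secret", b"user=alice&role=user"

    # Naive MAC: H(secret || message). With SHA-2 this is length-extendable:
    # someone who knows the tag and len(secret) can append data and compute a
    # valid tag for the extended message without ever learning the secret.
    naive_sha2 = hashlib.sha256(secret + msg).hexdigest()

    # SHA-3's sponge construction doesn't expose its internal state in the
    # digest, so the same naive construction isn't length-extendable.
    naive_sha3 = hashlib.sha3_256(secret + msg).hexdigest()

    # Either way, HMAC is the standard fix if you're stuck with SHA-2.
    safe_tag = hmac.new(secret, msg, hashlib.sha256).hexdigest()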
Nothing except efficiency prevents us from using names as parts of the network/subnet hierarchy instead of numbers, e.g. steve.home.town.country instead of 192.168.5.6 (or the equivalent in IPv6), and even the efficiency could be improved by smart use of hashing... BUT! The major problem I see here is that there are simply more numbers than words.
In practice, especially at large companies, it will without a doubt degrade into workstation001, workstation002... workstation999, and then we're in effect back where we started: using numbers.
This looks like a solution in search of a problem.
NDN assigns names (or addresses) to data contents, not physical machines/interfaces like IP does. So it's conceptually quite different from the way IP routing works.
The issue you mention is already solved by DNS.
This looks like a solution in search of a problem.
NDN attempts to make content distribution more efficient through caching. Whether solving that problem justifies rewriting the entire network stack is highly questionable.
While content distribution is a significant motivation, a fair amount of the current research is looking at benefits beyond caching: i.e., what do you get with web-style semantics at the lower layers, per-packet crypto, name-based rather than host-based addressing, etc.
Centralizing security in the network's architecture will create an intractable problem: certain parties will still want to impose their ability to eavesdrop on the data. Therefore there cannot be any real security in such a centralized security design.
The in tempore non suspecto in which it was still possible to roll out security jokes such as SSL is over. Nowadays, 95% of the world's population (and their governments) will refuse to adopt any centralized security design, because they do not trust it.
In my impression, the project is dead on arrival.