Was the query for a domain name or just hacker+news? How do we know?
Does <title> make any difference?
I don't think of this site as "Hacker News". I think of it as ycombinator, and the subdomain, news.
Should users of hackernews.com think of that site as something else, e.g. whatever is between the title tags?
A searchable list of domain names, ranked by popularity. Or even a searchable list of main page titles. Is that how some users are using Google? If so, Google does not need a full, current cached copy of the crawlable web to provide that.
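To make that concrete, here's a toy sketch of what such a lookup could be: an inverted index over domain-name tokens, with results ordered by a popularity score. All the domains and scores below are invented for illustration; a real index would of course be built from crawl or traffic data.

    from collections import defaultdict

    # Hypothetical popularity scores (e.g. from link counts or traffic).
    DOMAINS = {
        "news.ycombinator.com": 95,
        "hackernews.com": 10,
        "example.com": 50,
    }

    # Inverted index: token -> set of domains containing that token.
    index = defaultdict(set)
    for domain in DOMAINS:
        for token in domain.replace(".", " ").split():
            index[token].add(domain)

    def search(query):
        """Return domains matching any query token, most popular first."""
        hits = set()
        for token in query.lower().split():
            hits |= index.get(token, set())
        return sorted(hits, key=lambda d: DOMAINS[d], reverse=True)

    print(search("ycombinator"))  # ['news.ycombinator.com']

No cached page content needed; just names and a ranking signal.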
Unsure. On several occasions I got the distinct impression that a lot of members aren't actually interested in "hacking" (for any of its definitions); instead you get people asking about the value of jailbreaking a phone: http://news.ycombinator.com/item?id=3169000
I suppose it is because the top hits below HN are perhaps even less about "hacker news"?
Well, this is why NFL Films and the old programs they produced in the 70's and 80's are so cool. NFL Films had it all. Every angle, every sound plus the all-22. They could do the full analysis. And their choice of music was, in retrospect, brilliant. I can watch those old programs year after year. Somehow I never get tired of them.
While ESPN has some problems, I'm not going to complain about them, as the only way I can watch my college team play while in Canada is through their espnplayer.com service. My wife thinks I'm crazy for paying for it, but hell, it's an addiction. If I can't be in the stadium I have to see every play somehow. (I'm sure I'm not the only one watching the end of the game when losing by 50 points, right?)
Perhaps it's worth considering that if a new network is extremely slow compared to the present "expensive name brand hardware"-powered internet then it might not attract as much attention from the "streaming premium content" crowd. They might just leave it alone.
Who knows, by building such a network you might in the process prove that there's more value to an internet than simply as a new channel over which to sell branded entertainment and disseminate advertising. You might also show just how much can be done over a 56k link with a little creativity.
It takes balls to say anything bad about Javascript, CSS and "web apps", especially to an audience full of web designers. If I could upvote you I would. But my browser does not support Javascript and I don't think upvoting is REST-friendly.
Please tell me that was sarcasm, man. It's one thing to support someone having the balls to come out and courageously support an unpopular but just position. Unfortunately his comment was just flame bait and not even close to courageous.
Great example of an idea that was initially widely rejected only to be later widely accepted. How many times does that happen in science?
Seems to me that the biological reality is that bacteria rule the Earth. They always have and always will, before we were here and after we're gone. And it seems at present there is _relatively_ little biotech we can do without the help of bacteria and their amazing junk-free genome. But I could be wrong. Corrections from the experts are welcome.
This is reminiscent of a classic trope used to introduce students to some of the more advanced concepts in evolution.
Which is more evolved: a bacterium or a human?
To answer, you have to consider what it means to be "more evolved". A human has been subject to more branchings on the tree of life, but why does the tree of life branch? Usually because there is a new niche to occupy. And what is evolution? It's the process of exploring new niches and becoming "best fit" for a niche. So, humans have explored more niches in biological evolution, but all the while bacteria have instead continually adapted to become better and better fit for their niche.
Ultimately, the consequence of this is that it would be much easier for humans to be evicted from their niche, or for that niche to move just far enough for humans to no longer be able to cope. So, yes, bacteria will be here long, long after we are gone...
But if you really want to talk about ruling the Earth, well, viruses pretty much have that one in the bag. Go down to the sea with a teaspoon and scoop up some water. You'll have around 1 million bacteriophages (viruses that infect bacteria) in that one spoon.
Aren't bacteria more evolved in a sense? Vertebrates are almost all the same. (Or even multicelled organisms.) Bacteria come with much more diversity in genetics and biochemistry.
In the world of bacteria, there is no need for the SEC, lawyers or fear of liability. It's the fact that their needs are so few - they can seemingly live almost anywhere under almost any conditions - that makes them appear so resilient.
Is it adding features? (to use the software lingo)
Or is it subtracting ones that have no real benefit, striving toward greater efficiency?
Like terse lines of code in a concise, well-written computer program, every bacterial gene seems to exist for one or more reasons. There is no bloat.
I'm biased, but to me bacteria, chloroplasts and mitochondria are life's most amazing machines and the tasks they perform are ultimately life's most important ones.
I read that Xerox did this between two of their buildings many years ago. Apparently it ran across some portion of a motorway and they had to turn it off occasionally. I've forgotten where I read this. Maybe someone else knows the full story.
If the goal is to connect peer to peer with a small group of people you know in person and can trust, I see great potential. People congregate in small groups. Facebook friends, Skype contacts, etc. The advantage here is that third parties like Facebook, Microsoft and a gazillion advertisers are not involved. If it's small like that, it's doable as an overlay without using wireless as long as at least one person has a reachable IP and can act as the keeper of everyone else's address info.
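As a rough sketch of what that "keeper of everyone else's address info" could be, assuming nothing fancier than UDP and a made-up wire format (a real overlay would want authentication and encryption on top):

    import socket

    REGISTRY = {}  # peer name -> (ip, port), as last seen

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9999))  # the one friend with a reachable IP runs this

    while True:
        data, addr = sock.recvfrom(1024)
        cmd, _, name = data.decode().partition(" ")
        if cmd == "REGISTER":
            # Remember the sender's public address under the name they chose.
            REGISTRY[name] = addr
            sock.sendto(b"OK", addr)
        elif cmd == "LOOKUP":
            peer = REGISTRY.get(name)
            reply = "%s:%d" % peer if peer else "UNKNOWN"
            sock.sendto(reply.encode(), addr)

Everyone else sends "REGISTER alice" on startup and "LOOKUP bob" when they want to connect; after that, traffic flows peer to peer.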
If the goal is to create some sort of www replacement that must scale to global internet sizes, where any stranger can connect, and where kids are allowed to do all the things they're not allowed to do legally on the www, I see big problems.
> and where kids are allowed to do all the things they're not allowed to do legally on the www, I see big problems.
How in the seven hells is this a problem of the network protocol? Or any other technology, really? I seriously don't want to take this off-topic, but what you are describing is a problem for the parents/guardians first and foremost, and has absolutely nada to do with the technology we are discussing.
Appeals to emotion like this and the sad truth that they work so well are the exact reason we need decentralized, censorship-resistant networks in the first place. To put it polemically: No, I don't want to "think of the children" because that's the job of their goddamn parents.
Not protocols, but usage. Not emotion, but common sense.
If Skype, a peer to peer network that uses a proprietary protocol and third party servers (neither of which is a prerequisite for a peer to peer network), were used primarily for file sharing over encrypted links, it would have some "big problems", as in "heavily funded lobbyists and plaintiffs", to deal with. These are the same "big problems" that are the driving force behind SOPA, and consequently the same ones that have injected some steam into this reddit "think tank". SOPA has some interesting language where it refers to "or any successor protocol". Perhaps the next revision, or the next bill of this nature, will include language that refers to "any internetwork", present or future.
How a peer to peer network is used and who uses it does make a difference in terms of its acceptance and survival, even if in theory it shouldn't.
Here is a distinction I've been trying to make clear over the past couple of days.
While our "vision" of the absolute end goal sounds slightly more like the second, our actual goals are to produce the first. This is a much more realistic plan than our vision, and is what the project really aims to do. The vision just aims to bring everyone together about a set of issues that have been very much discussed in recent times.
How are we blocking Googlebot? If we're using robots.txt, then they can simply ignore it. Googlebot can begin to identify itself differently. There are a million ways to get around a Googlebot ban, and I wouldn't want to be the guy who thinks he's smarter than Google. You'll lose that one. They'll find a way.
But anyway, this isn't really relevant. Can you tie it in for us?
Blocking is not done via robots.txt. It would more likely be IP-based.
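For what IP-based blocking might look like, here's a minimal sketch using Python's ipaddress module. The ranges are RFC 5737 documentation addresses standing in for whatever crawler ranges you'd actually maintain, and of course a determined crawler can always come back from addresses you haven't listed:

    from ipaddress import ip_address, ip_network

    # Stand-in ranges; a real list would be looked up and kept current.
    BLOCKED_RANGES = [
        ip_network("192.0.2.0/24"),
        ip_network("198.51.100.0/24"),
    ]

    def is_blocked(client_ip):
        """True if the client falls inside any blocked range."""
        addr = ip_address(client_ip)
        return any(addr in net for net in BLOCKED_RANGES)

    print(is_blocked("192.0.2.17"))   # True
    print(is_blocked("203.0.113.5"))  # False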
Impossible to block Google? Probably true. But only because they have been allowed to grow so large as to be effectively unstoppable. And the reason they've been able to do that is because of what the commenter said: websites allowed them to crawl, fast and hard, year after year. This is not true for all bots.
I'm not sure how this is relevant to the article and the specific issue. I don't disagree with what Google is doing in this case. And I understand why, and where Google is headed.
The issue of who is allowed to crawl and who is not is something the commenter raised. It's a huge issue that people take for granted, in my opinion.