You know, this [1] really ought to permanently put to rest the idea I think you're trying to reference, which is that only "event based" systems can be performant. There are plenty of "thread" or "process" based approaches that do quite well, including, I believe, the uppermost tier of every benchmark on that site. The idea that threads or processes are intrinsically slow was sheer unmitigated propaganda; it not only failed to contain a grain of truth, it is actively false. (Some thread implementations were slower than others, but that turns out to have been the implementations rather than the idea.) Event based systems inevitably have a lot of function calls in them, and in the end that overhead will probably make them slower than properly done threads or continuation-based approaches.
People measure different things in different ways and then draw conclusions (or tweak measurement parameters until they support an already preconceived belief).
Event-based systems can be more performant in some cases and slower in others. If there is not much opportunity for the CPU to do any work, then an event-based system will often outperform threads. One example is proxies. I already gave haproxy as an example, so I'll repeat it here as well. It is single-threaded and event-based by default, and it is certainly performant. Why? Because in a simplified model it just shuffles data from one socket to another. Pretty straightforward. Introducing multiple threads and context switches might just thrash caches around and actually make it worse (I have seen that happen).
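To make that "shuffle data from one socket to another" shape concrete, here is a minimal sketch of such a single-threaded event loop in Go, using raw epoll via golang.org/x/sys/unix. Everything here is illustrative rather than haproxy's actual design: the function names and the peers map are made up, and accepting connections, dialing the backend, non-blocking setup, partial writes and teardown are all omitted.

```go
package eventloop

import "golang.org/x/sys/unix"

// registerConn adds a socket to the epoll set, watching for readability.
func registerConn(epfd, fd int) error {
	ev := unix.EpollEvent{Events: unix.EPOLLIN, Fd: int32(fd)}
	return unix.EpollCtl(epfd, unix.EPOLL_CTL_ADD, fd, &ev)
}

// runProxyLoop shuffles bytes between pairs of already-registered sockets.
// peers maps each fd to the fd on the other side of its proxied connection.
func runProxyLoop(epfd int, peers map[int]int) error {
	events := make([]unix.EpollEvent, 128)
	buf := make([]byte, 64*1024)
	for {
		// Block until some sockets are readable; one thread, no locks.
		n, err := unix.EpollWait(epfd, events, -1)
		if err != nil {
			if err == unix.EINTR {
				continue
			}
			return err
		}
		for i := 0; i < n; i++ {
			fd := int(events[i].Fd)
			// The core of the proxy: read from one socket, write to its peer.
			nr, err := unix.Read(fd, buf)
			if err != nil || nr == 0 {
				unix.Close(fd) // closing the peer as well is omitted
				continue
			}
			unix.Write(peers[fd], buf[:nr]) // short writes ignored for brevity
		}
	}
}
```

The point is that everything happens on one thread: readiness comes back from EpollWait, and each ready socket gets a read plus a forwarding write, with no locking and no thread context switch between connections.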
Now add some CPU work in there. Say each connection has to compute something, like serializing some JSON. In those benchmarks, they use a DB driver to get a row, serialize it and return it. OK, now there is some work, and it is more likely that multiple threads will help. But again, one can surely tweak CPU affinities, thread pool sizes, hyper-threading BIOS settings and DB driver types to really change things up. Threads take up memory, and not an insignificant amount. Now, I like green threads, Erlang's processes and Go's goroutines because they are lightweight. (At least Erlang's processes are mapped M:N onto CPUs for parallel execution on the host machine.)
So I guess my point is that you are right: event-based systems are not always and strictly more performant. But I also think that in certain cases they can beat multi-threaded code (thread memory footprint, context switches, cache thrashing). That benchmark, I wouldn't take too seriously, just as I wouldn't take the Language Shootout too seriously.
The whole event-based dogma is that event-based systems are not merely performance-competitive, but performance-dominant. If they merely tie, while also incurring the extra development expense of significantly increased code complexity, they still lose. If event-based systems can't stomp thread-based systems in a benchmark, they're unlikely to do it in the real world either, carrying around the extra baggage of complicated code... it's not as though event-based code scales up gracefully in size as the problem grows while the (modern [1]!) threading approaches explode in complexity -- the truth is the exact opposite of that.
Taking benchmarks too seriously is a problem; dismissing them too cavalierly is a problem, too. Those benchmarks may reflect the truth to seven significant digits... but based on what I see in there, I suspect they reflect the truth to about one and a half digits.
I've got some event-based code I manage at work, because it was the best choice. But it wasn't the best choice because of performance, or code complexity, or any of the other putative advantages of event-based systems, it was the best choice due to the local language-use landscape pushing me into a language in which event-based systems are the only credible choice. You know that comment that "design patterns show a weakness in your language?" I don't 100% agree with that, but it's true here; event-based server loops are a sign of a weakness in your language, not a good idea.
[1]: Here defined to a first approximation as "shared little-to-nothing" threading models, rather than the old-school approaches that produced enormous program-state-space complexity.
I agree with you, hopefully you see that, but hopefully you can also see why, for heavily IO-bound applications, event-based systems (basically code woven around a giant epoll/select/poll/kqueue system call) can be faster.
Modern machines are different from those of 10-15 years ago. Caches and SMP topologies sometimes play serious roles in the outcome of a benchmark. Threads are often heavyweight memory-wise. That is why the C10K problem started to be solved better by event-based systems.
Even looking at your benchmarks link, I would say more of the entries at the top are actually event-based. The "cpoll" ones look event-based, centered around a polling loop. So is openresty -- a set of Lua modules running in nginx, itself an evented server (though it is also paired with a set of worker processes, from what I understand).
And I like what you said about how, even if they tie, the threaded ones are better. Yes. Not only that: for me it is 10x. Even if the threaded ones were 10x slower and that were tolerable, I would still rather pick them. Why? Because the code is clearer and matches the intuitive breakdown of the problem domain better. That is why I like Erlang, Go, Rust and Akka -- actor models just model the world better (a single request is sequential, with clear steps that run one after another to process it, but there is concurrency between requests). An actor models that perfectly, and I like that.
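For what it's worth, here is a rough sketch of that shape in Go (the handler steps and the stubbed lookup are invented for illustration, not taken from any of those benchmarks): each request is written as plain sequential code, and the goroutines supply the concurrency between requests.

```go
package server

import (
	"bufio"
	"encoding/json"
	"log"
	"net"
)

// Serve spawns one lightweight goroutine per connection: concurrency exists
// between requests, but each request reads top-to-bottom as plain steps.
func Serve(ln net.Listener) error {
	for {
		conn, err := ln.Accept()
		if err != nil {
			return err
		}
		go handle(conn)
	}
}

func handle(c net.Conn) {
	defer c.Close()

	key, err := bufio.NewReader(c).ReadString('\n') // step 1: read the request
	if err != nil {
		log.Println("read:", err)
		return
	}
	row := lookup(key) // step 2: fetch a row (stubbed)
	if err := json.NewEncoder(c).Encode(row); err != nil { // step 3: serialize and reply
		log.Println("write:", err)
	}
}

// lookup stands in for the database query; a real handler would simply block
// here while the runtime schedules other goroutines onto the OS threads.
func lookup(key string) map[string]string {
	return map[string]string{"key": key}
}
```

No callbacks, no state machine: the flow of a single request is exactly the intuitive breakdown of the problem.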
I also, like you, dealt with an evented promises/futures-based system for years, and it wasn't fun. It works great for little benchmarks and examples; once it grows, it becomes a set of tangled slinkies whose workings only the original writer (me, in this case) understands.
> The idea that threads or processes are intrinsically slow was sheer unmitigated propaganda; it not only failed to contain a grain of truth, it is actively false.
Threads / processes:
* Run some code from A
* Save state, context switch
* Run some code from B
* Save state, context switch
* Deal with locking, synchronisation, etc
vs
* Run some code.
There are absolutely no instances where [num threads] > [num cores] is as efficient as not using more threads than cores.
Funny, then, you'd think the benchmarks would show that, if it's so obvious, instead of showing the opposite.
The problem is that once you understand what lies behind your glib "run some code", you understand what the problem is. I mean, for one thing, the idea that in a busy server switching to a different event handler which has neither its code nor its data in any processor cache is not itself a "context switch" is a use of the term not necessarily connected to any reality, even if one might pass Computer Science 302 with that answer. Alas, we can not convince our CPUs or RAM to go any faster by arguing at them that they aren't making a "context switch".
But, you know, it's an open benchmark, and the benchmarks themselves aren't all that complicated. Do feel free to submit your event-based handlers that blow the socks off the competition. Bearing in mind that is the standard you've set here. Merely competitive means you've still lost. Nor do I see any "but benchmarks don't mean anything" wiggle room in your statements, because what you're talking about is exactly what is being benchmarked.
The linked article didn't make this clear, but this feature is mainly designed for process-per-core models, not process-per-connection. The problem you run into with most existing process-per-core systems is that you can't ensure an even distribution of load across the processes without introducing extra overhead. SO_REUSEPORT offers some convenience when changing the number of processes, but the real benefit is that in this mode the kernel uses a better load-balancing scheme.
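As a sketch of what that looks like from user space (the package and function names here are illustrative, not from the article): each worker process sets SO_REUSEPORT before binding, all workers bind the same address, and the kernel spreads incoming connections across their sockets.

```go
package reuseport

import (
	"context"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

// Listen opens a TCP listener with SO_REUSEPORT set, so several processes
// (one per core, say) can each bind the same address. Linux-only.
func Listen(addr string) (net.Listener, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			if err := c.Control(func(fd uintptr) {
				// Set SO_REUSEPORT on the socket before bind/listen happen.
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			}); err != nil {
				return err
			}
			return sockErr
		},
	}
	return lc.Listen(context.Background(), "tcp", addr)
}
```

Each worker then runs its own accept loop on its own listener; there is no shared accept lock and no passing of file descriptors between processes.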
To hide I/O latency. You cannot do this effectively without threads unless you implement your own scheduler, or unless your I/O delays are constant and known a priori.
Tech doesn't exist in a vacuum disconnected from the world; it's constantly interacting with society as the two evolve with and by each other. It has the power to fundamentally alter the world -- the most grandiose among programmers and entrepreneurs will even voice that as a goal.
Large scale monitoring of computerized communication infrastructure. The cat and mouse implementation of crypto technologies in response. The ethos of the people on the forefront of computer privacy. The fact that the leaker was a sysadmin. All these dots connect to form a larger story that is just as much about tech in society as it is politics. This article in specific and Snowden's leaks in general may be more on the social side, but is still a component of the larger story that is fundamental to tech.
This is a defining moment in history, one which will shape the digital environment in which we all operate for decades to come. By the time the last echoes have faded, HTTP and SMTP will likely no longer exist, every last bit of every communication will be encrypted, and the general public will be about as paranoid as the most tinfoil-hat type of two years ago.
All it takes for that to be the case is a few more things to happen:
- someone leaks a substantial body of cleartext records on citizens
- ditto on some foreign head of state / politician / judge
- ditto on an American politician
The term 'plaintext' will be as antiquated as 'morse'. Still occasionally in use but not for anything that matters. Intelligence agencies will be reduced to traffic analysis and likely not even that with a vast chunk of the internet simply going dark, either as a mesh network or in some other decentralized fashion where there are no more supernodes such as Mae-East, Mae-West and Front 151.
The other alternative is not so much fun so I won't outline that here. There is a good reason why 'may you live in interesting times' is considered a curse.
The fall-out from this will affect every hacker, every start-up and likely every company operating at the moment with even a peripheral interface with the digital world, which is probably all of them.
It's really not a defining moment, in any sense of the word.
Same old, same old.
In actual fact, though, The Independent has stated that it wasn't leaked anything by the government, so the original post is moot. This is no better than gossip about celebrities.
So governments monitor the internet. Wouldn't it be really bizarre if they didn't monitor the internet?
> Personally, I hope Tesla fail, and think they will.
I think that, without any kind of justification, this is completely unnecessary for HN, and I downmodded you accordingly, even though I agree that many comments here are one-sided and haven't considered what the PR statement might be missing.
> Personally, I hope Tesla fail, and think they will.
Maybe, but you need to realize that the electric car is an obvious and timely technological development, and someone will produce a successful mass-market electric car. Why not Tesla? It's not as though they're making a lot of mistakes.
I don't believe electric cars are technologically better. Lugging a half ton battery around, then waiting for hours for it to charge doesn't seem that clever to me.
Of all the things that are the bane of our lives these days, surely the worst is everything that runs on a battery.
Remember when phones would last a week or so on a charge? Now you're lucky if a smart phone lasts a day on standby. And people call it progress...
So no, I think the idea of having a massive battery in my car is horrible.
> I don't believe electric cars are technologically better. Lugging a half ton battery around, then waiting for hours for it to charge doesn't seem that clever to me.
Early internal combustion engines were also rather embarrassing, but this didn't hinder their adoption -- at the time they were a better choice overall.
Imagine the reverse situation -- imagine that electric cars took hold when they were first introduced in the early 1900s and saw a century of improvements. Then someone comes forward and says, "We have an idea! Instead of charging your battery all the time, you carry a tank of explosive liquid fuel with you wherever you go, and you burn the fuel as you drive."
Present battery technology is pretty terrible -- not very efficient, too heavy, low energy density, short life. But widespread adoption of electric cars will force technological improvements, just as happened with internal combustion engines.
If batteries improve -- greatly -- it will become self-evident that carrying a battery around is a better choice than carrying and burning liquid fuel, both for the environment and in a simple economic sense. At the moment, electric cars aren't an obvious improvement over an internal combustion car, but I think that will change.
In a hypothetical future with more wind and solar energy sources, and possibly fusion power in the long term, electric cars will make more environmental sense as well.
Take an AA battery. Now go back in time 30 years and look at an AA battery.
Identical. Why has battery technology not improved one bit in the last 30 years? Well, obviously there's a massive disincentive - the better the battery, the fewer batteries people buy - but I don't think that's the main limitation.
I don't think conventional batteries can improve all that much more.
> Take an AA battery. Now go back in time 30 years and look at an AA battery. Identical. Why has battery technology not improved one bit in the last 30 years?
That's completely false. I might have said, "Look at a basic mousetrap 30 years ago. Now look at one today. Identical." What's missing is any examination of the alternatives. 30 years ago, there weren't any NiMH batteries, or commercial lithium-ion batteries (the latter were under active development), but they're now the primary power sources for portable devices, and lithium-ion batteries power the Tesla Model S.
> I don't think conventional batteries can improve all that much more.
And I don't think conventional thinking can improve all that much more. But I have high hopes for unconventional thinking.
"When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong." -- Arthur C. Clarke
> lithium-ion batteries (the latter were under active development), but they're now the primary power sources for portable devices, and lithium-ion batteries power the Tesla Model S.
What does that mean for the user? Do batteries today last twice as long as they did 30 years ago? Nope. Are rechargeable batteries any more viable today? Nope.
Even batteries in laptops only last a year or so before they are just dead and need replacing.
If you think battery technology has really massively improved in the last 30 years, please let me know what real world improvements there have been...
> What does that mean for the user? Do batteries today last twice as long as they did 30 years ago? Nope. Are rechargeable batteries any more viable today? Nope.
You are flat wrong on both counts. Modern batteries provide much more energy per size, weight and cost than their rechargeable predecessors. As to "viable", how can you make any kind of claim about batteries that didn't exist 30 years ago?
> Even batteries in laptops only last a year or so before they are just dead and need replacing.
Yes -- compared to no batteries and no laptops, 30 years ago. What kind of comparison do you think you're making?
> If you think battery technology has really massively improved in the last 30 years, please let me know what real world improvements there have been...
Today, batteries exist, and applications exist, that did not exist 30 years ago. Modern battery applications could not be filled by the technology that existed 30 years ago. How difficult is that to decode? Thirty years ago, the Tesla Model S could not exist, period, full stop. The battery technology didn't exist.
My first laptop was an Amstrad PPC512 in the mid 80s - almost 30 years ago. I'm still not convinced batteries have come that far since then.
I'm actually thinking of going back to my Nokia, which has a battery that lasts a week on standby, compared to modern smartphones which last a day.
You're right though - inefficient, bloated, buggy software is becoming the driving force behind demanding more power from batteries - which brings me back to my original point: I don't want software running my car.
The above graph shows that batteries have improved enormously in the past 30 years -- they now have more than twice the energy density they had then. There are few products that have improved so dramatically. And more improvements are in the pipeline.
Too late. Every car since about 1993 is entirely reliant on electronic computers--and many were much earlier. Most of them reprogrammable, though usually cumbersome to do so. Changing software during service is common these days--and the dealer may not even bother to mention it.
Tesla's use of software is pretty normal. Go sit in a BMW 5 series, or a Ford Focus. All soft interface. Tesla's is just nicer, and not afraid of taking advantage of the fact that everything is software already.
Tesla are actually building cars because the world is running out of oil; if you want to drive in the future, the car has to be electric, because the price of oil is too high.
Firstly, I don't believe electric cars are a good idea.
Secondly, I absolutely hate the idea of cars so dependent on electronics and software.
My neighbour has an old dumper truck he lets me use, probably 50 or 60 years old. It starts every time, first time. There is literally nothing to go wrong on it.
Now contrast that with a modern car, where you need special tools to access the electronics and unlock diagnostics. What about when cars get automatic software updates over wifi? What about when the government forces car makers to embed their own tracking software into them to monitor and spy on civilians?
Tesla isn't quite as bad as the idiotic self-driving cars Google is pushing for (so they can drive you to a Google advertiser), but they're in the same bucket of nastiness.
Thanks for answering. I respectfully disagree, but we can still have a civilized conversation.
> My neighbour has an old dumper truck he lets me use, probably 50 or 60 years old. It starts every time, first time. There is literally nothing to go wrong on it.
I guess apart from modern safety features and gas guzzling?
> What about when the government forces car makers to embed their own tracking software into them to monitor and spy on civilians?
Not necessary. People already carry cell phones.
I am actually looking forward to the self driving cars. Human error causes lots of accidents. But then, I don't own a car and probably never will.
I'm not sure there is. I'm not sure one can be truly sure he scanned the Internet before impersonating every host. Can't know anything before trying out the inside of every skin.
After all, what would you know, as a traveller, about simple lives of local people?
I've spent one hour of my life in Germany, when I was 11 years old, in a transit lounge in Frankfurt. I have 'visited Germany', but not in any real sense.
There's more to the internet than just port 80, so to declare that a scan encompassing only a single port on each host is a scan of "the entire internet" is somewhat mistaken.
The more correct title would be, "a scan of the entire World Wide Web."
"We experimentally showed that ZMap is capable of scanning the public IPv4 address space on a single port in under 45 minutes, at 97% of the theoretical maximum speed for gigabit Ethernet and with an estimated 98% coverage of publicly available hosts."
I realise that my comment was not so clear, sorry about that. Yes, to me scanning the whole internet means at least the full TCP port range (and why not UDP too).
My 'rant' is really about the article's sensational title promising to tell you the result of scanning the entire internet really fast... which turns out to be about scanning web services. The data is interesting, however.
Leukemia doesn't affect enough people of reproductive age to cause a population explosion. Basically, no medical advance can cause a population explosion in the rich world: very few people there die of disease at reproductive ages. Making people who would otherwise die at 55 live to 95 does not cause a population explosion, because they won't have any kids anyway.
Advertising on Reddit is not the same as advertising in general.
If you advertise on Reddit, you're advertising to a violently anti-corporate anti-advertising audience, who may love you, but very well may hate you. You could be subject to a witch hunt at the drop of a hat.
I very much doubt advertisers would be lining up to advertise to that crowd. They're hardly big spenders either.
This is an unsubstantiated claim. Some of reddit's communities are like this, but most are not. If you're just viewing the front page, you are viewing the lowest common denominator, which could give you this impression. But reddit is a very heterogeneous community.
For me personally, it ends up being far more annoying. I really wish wikipedia would just put small unobtrusive text adverts on each page rather than the massive intrusive banners begging for money.
There is an issue of who calls the shots -- if you solicit donations from your users, that's who you are beholden to and need to serve to get money. If you are soliciting third party advertisements, that's who you are beholden to (and if you are using a third-party ad placement service, you are beholden to them as well as, perhaps more than, the actual advertisers.)
> I really wish wikipedia would just put small unobtrusive text adverts on each page rather than the massive intrusive banners begging for money.
Hi, welcome to your first day on the Internet. Since you're new, let me tell you how things work around here.
There are probably dozens of web sites similar to Wikipedia. But Wikipedia is on the first page of search engine results for just about anything you search for. Why is that? Because people have learned that they can trust them over the last 12.5 years.
When you go to Wikipedia, you know that when you're looking for information on the Battle of Hastings that you aren't going to see ads for anatomy enlargement pills. You won't see any advertising at all in fact. You know that the community at large does a decent job at removing biased information. You know that a company can't buy their way into hiding negative information or promoting positive information.
This level of trust is what causes people to link to Wikipedia thousands of times per day.
So let's say Wikipedia takes your advice. They put a small unobtrusive text advert on each page. Suddenly you're searching for information on acne and an ad for "Acbegone" pops up that promises to cure your problem for 3 easy payments of $19.95. Acbegone ends up becoming a huge advertiser with Wikipedia - spending $1 million per month on advertising. Suddenly Wikipedia gets The Phone Call. "Hi, this is Acbegone. We'd love to continue advertising on your site but your article on acne mentions 10 other products. Get rid of those and we'll double our ad spend with you. Don't get rid of them and we'll be forced to stop advertising." Wikipedia can't do without the income they've become accustomed to, so they make editorial decisions not to mention any product - but still there's that ad from Acbegone. Suddenly Wikipedia seems like one huge cheesy ad. People stop trusting it. People stop linking to it. It stops coming up in search engine results.
Everyone goes to Google.com to search for things. You know that when you do a search, you're going to see helpful related adverts.
That level of conflict of interest, possible abuse, privacy concerns, etc., means that the entire world uses Google as their search engine.
Oh and they make billions in profit.
Your hypothesis about an advertiser asking wikipedia to alter content surely applies to google search results.
> Your hypothesis about an advertiser asking wikipedia to alter content surely applies to google search results.
Google indexes other people's content. All Google has to say is, "Sorry, we're not in control of the content others make, our automated systems follow an algorithm we're unable to make one-off tweaks to." It could conceivably cost Google $1MM to make a one-off tweak to their algorithm in terms of programming and testing time.
Wikipedia on the other hand is all content. They have no plausible response other than, "Yeah, it would take 5 minutes to update that but we won't do that for you." Hell, all they'd really have to do is let the advertiser update it as they want and then instruct editors to do nothing.
It really is just different for this and a number of other reasons.
The problem is, if wikipedia did alter articles based on advertisers' demands (which seems pretty far-fetched to me), the public would just alter them back, or see the edits wikipedia is making and put 2 and 2 together.
> and then instruct editors to do nothing.
Yeah good luck with getting wikipedia editors to comply with that request!
A site like wikipedia would likely have thousands upon thousands of advertisers. They wouldn't be dependent on a few big advertisers. If an advertiser came to wikipedia and asked them to change a page, wikipedia would just say "no", publish the details to make the advertiser look like a douche (cue internet witch hunt, boycott, naming and shaming, etc.), and not care about the 0.000% temporary drop in revenue.
Look back in time. Read up on other things that had a massive userbase, but were unsustainable.
Look up AllAdvantage - they paid people to surf. They had millions of users, but ultimately failed because their "business model" was idiotic.
Getting millions of users is pretty easy if you pay them to be users. In the end, though, it's only worthwhile if you can build a sustainable business that at least doesn't lose money hand over fist.
If Reddit hadn't been bought and supported by other profitable businesses, I doubt it would have survived.
AllAdvantage was great for free money as a teenager. It only took a few minutes to slap together a VB application to move the mouse a few pixels every minute. I made a few hundred dollars from them while I slept.
That was more of a general statement than a comment specifically on reddit's position.
I can make millions of dollars selling condom wrappers, but just because I have made millions of dollars does not mean that I have done something important. I may catch quite a bit of hate for this, but a large portion of HN's content is on things that make money, but are not truly important.
I think the mantra from the startup community is often:
1. Make money doing whatever it takes. eg come up with some crappy website, sell it to google, then shut it down.
2. The money problem is solved!
3. Spend money solving world hunger, diseases, philanthropy.
If those outdated figures of ~$50k/month are true, I wonder if they could move to their own infrastructure or dedicated servers and turn some of those savings into getting from red to black.
We operate a dozen of our own colos, with a virtual colo on AWS for insta-scalable multi-region redundancy, and an Amazon "colo" costs the same as about eight of our own when spun up and serving at least a gigabit of traffic.
However, the difference is smaller if you're going from zero sys admins to 24/7 sys admins. I'd SWAG the crossover is once your AWS budget exceeds the cost of 4 full-time sys admins willing to do shift work.
/r/all is the internet’s largest right-wing community, on any manner of subjects from race relations in America, to multiculturalism in Europe, to feminism and women’s rights anywhere. Last time I visited was around the Zimmerman verdict, and I couldn’t decide whether the conversation on reddit more closely resembled Free Republic or Stormfront—the major difference being that neither of those other right-wing communities can match redditors in their hatred and fear of women.