You can sign in or click the skip sign in button. From there, you can live chat, email, or have them call you.
I found that within about 10 seconds of clicking around on their site. Admittedly, it required multiple clicks, because it's a bit buried, but it -is- there.
I want to share an experience that I had a few months ago, to see if anyone had something similar.
I have an AWS account that I use to upload stuff to Glacier and leave it there. My monthly fee comes to about $0.20/mo and that's been going for around 5 years (previously it was just on S3). The data I store there is not encrypted (not that they would be looking at it, right?), nor is any of it sensitive/illegal at all. Aside from that, I occasionally spin up a cheap EC2 instance to test some new binary before installing it, things like that.
Anyway, around March I received three emails (they were spaced about 6 hours apart, so I read them all at once) with a subject like "Important Notice regarding your AWS Account, Urgent! Open Now!". The first thing I thought was sh*t, my account was hacked and now I owe a million dollars to AWS.
To my relief that wasn't the case, but they wanted me to send them a bunch of documents that I consider personal, for no apparent reason. I replied something like "Is something wrong?" and they said it was standard procedure, which is weird because I'd never heard of anything like that. Things eventually escalated to "send us a scan of your passport or we will terminate your account"; passport because that's the only ID I told them I had. I eventually told them to piss off; I figured that $0.20/mo and the things I had there are not worth the worry of sending that data to someone hiding behind an email. They never replied, and then... nothing happened. It's been half a year since and everything is business as usual.
It was weird, and I never could make sense of what they really wanted, but anyway, I just thought I'd share that when I read this guy's experience.
Edit: Just checked them, I have them right here. Most of them are from payments-verification@amazon.com, and they wanted ID to verify the card details or something like that. Which I found really weird, because the account has been running for years and the cards haven't expired or been changed.
Wow, the C1 looks awesome, but I would be really hesitant to put my life in one of those things. Thing is, if the car you're in loses power you are pretty much safe all of the time; if your bike loses power you may have some trouble, but you would basically be able to stop and get off without dying. If the C1 loses power, goodbye gyro, and that's the end for you. :(
If they made the same vehicle with some kind of device that solves the stability issue in a passive way I'd be totally in.
Not at all. When you ride a (motor?)bike you still do a lot of work yourself to keep it balanced; thing is, you become so accustomed to it that it's easy to think those things happen "by themselves", but nope.
Try turning at a relatively high speed without leaning correctly and we'll see if that Wikipedia article saves you from the road rash. In these proposed vehicles you are handing control of that to some hardware that "supposedly works" and software that "supposedly works".
To each his own I guess, I wouldn't ride one of those, let alone everyday, let alone on a highway.
I think you may have some misconceptions regarding the physics of motorcycles. You do not have to lean at all to turn, if you don't want to. Obviously the combination of you and your bike must lean, but you yourself can maintain a perfect upright posture if you'd like (and let the bike "do all the leaning"). You won't be able to turn as sharply, of course, but it's NOT the rider leaning their body that causes the bike/rider system to lean, it's purely a reaction to the countersteer you apply to the bars.
When I say "you do not have to lean at all to turn", I was not clear. What I mean is that you can look like the left photo, rather than the right photo: http://i.imgur.com/K4vTdDm.gif
I read your comment to imply that the rider's body lean is what initiates a turn, rather than their handlebar input. Or that the rider must actively lean (their body), rather than allowing the bike to lean them over.
Your other reply to jsprogrammer says that "it's pretty obvious that the passenger needs to be shifting its weight to achieve that." But that's precisely what I'm saying is wrong. No weight shift is required at all to do any street-riding turns. When the handlebars are steered, the bike will force you over. When a seated driver of the C1 turns the bars, the seat will move their body over, they don't have to do any work.
If you're on a racetrack, then yeah, shift your weight and eke out sharper turns, but you'll look silly shifting your weight around when riding on public streets or a highway.
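For anyone who wants the physics spelled out, here's a rough steady-turn sketch (the standard textbook balance, ignoring gyroscopic and tire-width effects, so only an approximation):

$$\tan\theta \approx \frac{v^{2}}{g\,r}$$

where $v$ is speed, $r$ is the turn radius, $g$ is gravitational acceleration, and $\theta$ is how far the combined bike+rider center of mass leans from vertical. The turn itself fixes $\theta$; whether the rider hangs off or sits bolt upright only changes how much the bike leans to put the combined center of mass on that line.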
What? The diagram is not about how a bike turns, but rather how the vehicle moves when the wheels are rolling and the passenger does nothing: it doesn't fall over if the wheels are rotating at sufficient speed.
What? I saw a bike leaning and turning. The passenger is "doing nothing" because it's just a checkered ball without animation, but that's not how bikes normally behave (assuming a "passive passenger", as they say); it's pretty obvious that the passenger needs to be shifting its weight to achieve that.
My bike stays up while moving forward even if I'm not pedaling. Forward momentum works wonders. Once it's slow enough to fall over, it's too slow to kill or even really injure you. I know you're arguing against the "losing power would kill you", but I would go farther and say that "it very well might" is only really applicable if you stall in the middle of a highway and get hit by another car, in which case even a Volvo would struggle to keep you safe.
Well, my main doubt is in your ability to effectively control the vehicle unpowered. This could be dangerous in many instances: on bridges/overpasses, steep roads, roads with steep drop-offs near the shoulder, etc. Stalling at highway speeds with other vehicles would not be good, but you may be able to manage it.
If you do something new and it's popular and shows potential to be a big business, other people will compete with you; that's how markets work. Some of those people will try to innovate and out-pace you, and others will simply copy you. Some will be scrappy startups, and others will have deeper pockets than you; that's inevitable.
When Facebook launches streaming video, should we be angry at them since they have way more money/users than Twitter, and because Twitter did it first?
If Meerkat has built something defensible, then they'll be fine. If not, nobody should expect them to just be deemed the only company that can do streaming video. During Twitter's history, many people tried to pop up and knock them down to take the lead - but Twitter was tenacious and had an awesome product, and they won.
It's a long game and Meerkat can still win it. If one early competitor is enough to crush them, then they probably weren't going to make it anyway.
My experience: because you can probably still do it better.
Google, Facebook, and Twitter are far from omnipotent. Want proof? Just look at Google's failed attempts to get into TV advertising or social networking. Or Facebook's attempts with Messenger. Or Twitter's... everything, really.
Also, consider: it's unlikely any of them would copy an idea until you've already made it a success, at which point network effects can work in your favour (again, Google's repeatedly aborted attempts in social networking are illustrative here).
To summarize: they might try to copy you, but there's no reason to assume they'll succeed.
Here's something @sama said about this in a lecture:
> "The best thing of all worlds is to build a product that a lot of people really love. In practice, you can't usually do that, because if there's an opportunity like that, Google or Facebook will do it. So there's like a limit to the area under the curve, of what you can build. So you can build something that a large number of users like a little bit, or a small number of users love a lot."
If you do it right, and get lucky (timing, network effects, factors beyond your control), you might be in a position to stand alone against them or be acquired by them.
Well, back when I was first starting to understand computing (Pentium II days), the phrase we always kicked around was "Do something great that Microsoft is terrible at, then wait for them to buy you out." That kinda-sorta applies in this scenario.
MSFT/GOOG etc... could always decide to "un-suck" and kill some new hot thing, but it's a deus ex machina and not something you can predict, so it shouldn't really even be a consideration.
...except when it's literally not always the case.
I get where you're coming from, but deep pockets and an army of engineers don't guarantee success. Google+ never got out of the "it sucks" stage into mainstream success. The legacy of Steve Jobs is also littered with fanciful ideas that never got traction. I guess my point is that there's value in innovating and trying, even as a small player, because there really are no guarantees.
I can't recall the exact story right now, but I recently heard that a well-funded flight project (Langley?) failed, and a short time later the shoestring operation of the Wright Brothers succeeded.
I think you are missing my point or I did not make it clear because we are in agreement.
"MSFT/GOOG could just do it and kill your [service]" is a very common response to anyone who is launching anything new or groundbreaking and it's also a misguided response.
So worrying that BIGCO will do that is basically a worthless concern, and saying that something was inevitable because it's obvious in retrospect is not particularly useful.
That said, they still can do it, and on the rare occasion when they do, instead of making an acquisition it usually does kill the newcomer. So to the original point, it seems like it's actually becoming more common, not less, for them to spin something up rather than acquire.
Everything's fine, everything said so far here is fine. But I was around when that happened, which was not that long ago, so I find it weird that you guys don't remember.
Twitter pulled some very dirty stuff on Meerkat mere days before launching Periscope, and again later, in order to favour their own Meerkat clone. That's when everything changed from 'it's a free market' to 'it's our market and we can kick you out for no reason'. And yeah, it actually is Twitter's market; they are allowed to do anything they like with it and, "monopoly regulations" aside, they can kick you out of it for no reason.
But let me tell you what I don't find ethical about all of this: when you are going to develop for them, they do not tell you straight away "we are going to fuck you up if what you do turns out to be valuable"; quite the contrary, they just put on their "we-are-so-cool-and-we-love-developers" face and invite everyone to be part of their platform, blah blah blah.
That started around 5-6 years ago, and you probably (selectively) won't remember this either, but back then the first killer apps Twitter had were third-party Twitter clients that made the whole user experience better. They were a hit because back then, too, the main Twitter UX was a piece of s... and these alternatives were so much better. And guess what happened to those products? Twitter pulled the same shit on them: they bought TweetDeck, which was arguably the one with the most users (or the one that needed the money the most), and banned all the other clients from their API with a week-or-so notice. That time it was even worse, because I would dare to say Twitter was somehow indebted to those developers: a significant part of Twitter's growth was driven by people coming in through all these third-party clients. That's plain treachery, and there's a special circle in Hell for that kind of thing.
Now history repeats itself, and will keep repeating, because the message from Twitter in that sense is very clear: we don't give a shit about our developers.
But now it's good to see that things are finally turning around. Twitter desperately needs a killer app right now, because they've lost the hype, they're running out of money, and investors no longer buy that "we will be profitable in ten years" crap. The only way Twitter could be hip again is, oooooh the irony, if another great product starts driving people back to Twitter. But guess what? No developer will ever take Twitter seriously anymore, and that's just sweet, beautiful karma :)
Btw, it would be honest of you to disclose your relationship with Twitter :^)
Seems to me that Twitter built a truly massive social media platform with hundreds of millions of users interacting with short real-time updates in a semi-public manner, an accomplishment nearly no other company has ever pulled off.
Meerkat had the remarkably obvious and unoriginal idea of having this hyper-successful platform also host some live streaming video. In implementing this obvious extension to the Twitter platform, they ignored the effect of their intrusive updates on the UX for everyone else, in violation of the terms of service.
Twitter decided that the experience of posting media on Twitter is their department and acted accordingly. In the process Meerkat's founders and investors were deprived of the opportunity to become billionaires in exchange for a few months of development and a trip to Austin. My heart breaks at this injustice.
> Meerkat had the remarkably obvious and unoriginal idea
The timing of the Meerkat -> Periscope launches indicates that your analysis of how "obvious" and "unoriginal" it was is probably wrong. It's not like this was a big thing before Meerkat came out; not that they were the first, but Meerkat essentially made the market.
Do you think they have a special deal w/ Amazon? Otherwise I imagine that's a really expensive bill.
Do you think they're doing it because it's cheaper when you account for things at a larger scale? Like reduced liabilities, payroll, CapEx, whatever? Or is it more expensive but still "easier", and they go with it?
I really REALLY would like to hear everything from everyone about this, because right now I'm facing that decision for my company. I'm not as big as Netflix obv, but it is a big deal for me. It is really attractive to have all the scale you need at will and forget about maintenance entirely; even if it is more expensive, I would gladly pay as much as double my operating costs if I can rest easy at night. I would even eat as much as 10%-20% of my profits if I don't have to deal with that.
But the devil is in the details. I think it's relatively easy to go full-cloud, and much harder to later find out it was a mistake and try to migrate back to your own premises. So, what could be the downsides of going full-cloud just as Netflix did? Also, taking into account that I'm not a personal friend of Bezos, nor another Netflix: I would be just another smallish guy, and I'm pretty sure that won't get me all the perks that come with being either of those.
You should talk to people who have done this IRL, not on HN. This is a difficult thing to pull off, and like any complex decision, there's a lot of trade-offs. Without knowing your specific situation, it's fair to say that volume helps out Netflix a lot, and you won't get anything close to the unit costs they have.
The major thing that sways people to one side or the other is the kind of problems they want to build an organization to cope with. Do you want to build an engineering team that can handle a whole availability zone in Amazon going out, or do you want to hire one that ekes performance out of bare metal? Do you want a sales organization that has to go to the mat over security details, or a marketing organization that has to cope with justifying why "you're not a cloud company, you don't use the cloud" in their brand? There are going to be bumps either way.
Source: I used to sell monitoring software, so I got to talk to a lot of people making this decision. We tended to find big opportunities when people made the move.
As someone who works on a far smaller video platform than Netflix commands, I can definitely say that AWS has given us plenty of freedom in places that planning our tiny slice of a DC has not. We dabble with dedicated hardware, but it takes so much more planning and budgeting - mostly of human resources. The budget we stick to is barely monetary: running things (especially video transcoding) on metal eventually gets quite a bit cheaper, but hours and mindspace are far more costly. The things we have started moving to metal are tested on the cushy and forgiving pillow that is "AWS". We build and monitor there long before we consider moving the services to our own hardware, and that conversation includes total cost, including the costs sunk into the massively larger upfront investment (buying / maintaining servers).
That said, we try not to build our system _within_ the world of AWS, but rather "on top" of it. Our backend software is written to run on any recent Ubuntu distro, and moving elsewhere, whether to another cloud platform or dedicated hardware, is more of an operational issue than something requiring significant changes in our soft machinery. For instance, we stick with our own messaging services rather than SQS, as it's more portable, and our databases are RDS (MySQL). I assume this is why AWS sticks to well-used protocols for its larger services: it allows us to use their services with an open mind and not worry as much about vendor lock-in. This gives us room to grow and worry about the expense when the expense becomes a valid issue.
We're allowed to build on their platform without worrying about whether we can move on to another, provided we keep that in mind. As it stands, we're almost completely on AWS, but I run our entire platform on my [ubuntu] desktop without issue. The Ops are different, as I'm installing our software and dependencies manually, but the software itself can live on any server. And even as we consider dedicated servers, we keep in mind that we may want to continue using AWS for scaling during spikes, and so we continue to write our software and manage our systems accordingly - keeping in mind the idea that it should run on any system that meets our minimum requirements, whether it be bare metal, managed, or in the "cloud".
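To make that "on top of AWS, not within it" idea concrete, here's a minimal sketch of the kind of thin queue abstraction I mean (hypothetical names, not our actual code): the application only sees a tiny interface, and the AWS-specific bits live in one small adapter you can swap out.

```python
# A minimal sketch of keeping the app "on top of" AWS rather than tied to it.
# Names are hypothetical: the application depends only on a tiny queue
# interface, and the SQS-specific code lives in one swappable adapter.
from abc import ABC, abstractmethod
from collections import deque
from typing import Optional


class MessageQueue(ABC):
    """The only messaging surface the application is allowed to see."""

    @abstractmethod
    def send(self, body: str) -> None: ...

    @abstractmethod
    def receive(self) -> Optional[str]: ...


class InMemoryQueue(MessageQueue):
    """Portable backend: runs on a laptop, a dedicated box, or any cloud."""

    def __init__(self) -> None:
        self._items = deque()

    def send(self, body: str) -> None:
        self._items.append(body)

    def receive(self) -> Optional[str]:
        return self._items.popleft() if self._items else None


class SqsQueue(MessageQueue):
    """AWS adapter: the one place where SQS details are allowed to leak in."""

    def __init__(self, queue_url: str) -> None:
        import boto3  # imported lazily so non-AWS deployments never need it
        self._sqs = boto3.client("sqs")
        self._url = queue_url

    def send(self, body: str) -> None:
        self._sqs.send_message(QueueUrl=self._url, MessageBody=body)

    def receive(self) -> Optional[str]:
        resp = self._sqs.receive_message(QueueUrl=self._url, MaxNumberOfMessages=1)
        messages = resp.get("Messages", [])
        if not messages:
            return None
        msg = messages[0]
        self._sqs.delete_message(QueueUrl=self._url, ReceiptHandle=msg["ReceiptHandle"])
        return msg["Body"]


if __name__ == "__main__":
    queue: MessageQueue = InMemoryQueue()  # swap for SqsQueue("https://...") on AWS
    queue.send("transcode video 42")
    print(queue.receive())
```

Moving between backends then becomes a deployment decision rather than a rewrite, which is really all "portable" has to mean here.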
This depends on your current infrastructure, your expected growth over the next few years, and obviously your software stack. In my experience, unless you are doing something with extreme latency or scaling requirements, AWS is likely cheaper in TCO, at least in the beginning while your fleet is less than a few hundred machines. Also note that AWS is not just EC2; other services like S3 or Dynamo can be life savers. The downside of AWS is that you can become over-reliant on it.
Last first: getting enough competent people for whatever you have. Just getting people who know the difference between rm -rf / and rm -rf ./, who won't plug both ends of a cable into the same switch/router, ... can be quite hard. If you have a premium product running on a few hundred machines, AWS doesn't actually eat that much margin compared to having your own 24/7 staff and all that entails.
And when the company then decides to open up "somewhere over in that big Asian market", you've got to do it all over again. You can get a lot done with "remote hands" and whatnot, but again - it's something that eats your margin, just as AWS does...
You still need ops guys, just not hardware guys. For a smaller company it would be the same guy anyway, no? A company with 10 or 20 servers doesn't need a dedicated hardware tech.
If you watch any of Cockcroft's presentations you can see that there's nothing easy about it. They developed a significant amount of tooling just to work around quirks in EC2, but apparently the elasticity is worth it.
Assuming Amazon can do it slightly cheaper than Netflix could (i.e. Netflix doesn't have to duplicate Amazon's fixed costs, such as engineering), and that Netflix has enough bargaining power to pay Amazon very little over what it costs them to provide it....seems like an economic decision to me.
It depends on your IT architecture and more importantly on your software architecture. I did this IRL and I did it both ways and I'm doing it again.
If you want some ideas or guidance or want to ask specific questions - feel free. My email is in my profile, "about" section.
Disregarding the fact that the E stands for Elastic...
Yes, Elasticity is THE advantage for Netflix when it comes to using EC2. Consider the vast differences in usage they have in North America during the evening after dinner vs during the middle of the night. I can only imagine their fleet capacity goes up and down drastically every single day. This amounts to huge savings over having the bare metal.
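As a purely hypothetical illustration of encoding that diurnal pattern (made-up group and action names, and nothing like Netflix's actual tooling, which is demand-driven and far more elaborate), even plain EC2 Auto Scaling scheduled actions can shed most of an overnight fleet:

```python
# Hypothetical sketch: scale an Auto Scaling group up for the evening peak and
# back down overnight. Group/action names and sizes are made up for illustration.
import boto3

autoscaling = boto3.client("autoscaling")

def schedule(name: str, cron: str, desired: int) -> None:
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="streaming-frontend",  # hypothetical group name
        ScheduledActionName=name,
        Recurrence=cron,            # cron expression, evaluated in UTC
        MinSize=desired,
        MaxSize=desired * 2,
        DesiredCapacity=desired,
    )

schedule("evening-peak", "0 23 * * *", desired=400)     # ~7pm US Eastern
schedule("overnight-trough", "0 9 * * *", desired=80)   # ~5am US Eastern
```

The savings come from only paying for the peak fleet during the hours it's actually needed, which is exactly what you can't do with bare metal you own.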
At work, we use a dedicated server provider, so we don't handle hardware ourselves; when we identify faulty hardware, we file a ticket and they swap it out. We still need to have enough bare metal machines allocated so we can turn one off while they fix it, etc., but we don't worry about staffing or supplying the datacenter.
I wouldn't want to switch to AWS from this; bare metal plus a good account manager gives us a lot of visibility and reliability. Netflix blogs a lot about the chaos that is AWS, and we save a lot of effort by having machines that for the most part just work, consistently, instead of having to deal with instances where performance is variable, etc. Running our own datacenter might be nice for some reasons: more control and information about the network, the ability to make hardware choices that don't match the provider's offerings, etc.; but running a datacenter is a big change.
Sorry, I don't get it - why would you consider using cloud servers if you don't have significant spikes in your traffic?
Regarding hardware failures - it's not that hard to arrange proper redundancies in your systems, and it will certainly be cheaper than outsourcing it to Amazon. Cloud only becomes cost effective when your spikes are several times larger than your normal traffic, and you must maintain good QoS at all times.
Because sometimes cloud servers are very cost effective, easy, and fast to deploy: Digital Ocean for example.
$40/month for 4 GB of RAM, 4 TB of transfer, and two cores is very cost effective. Linode is also very reasonable.
If all I need is that scale of computing, what should I use instead that makes sense other than cloud services? Setting up colo or a dedicated box for that would be more expensive and a much larger hassle.
Arguably, Digital Ocean is more like a regular hosting provider with better onboarding. Plenty of hosts offer comparable infrastructure for the same price (you can even get something dedicated).
They're also far cheaper than AWS, where you pay a premium for the elasticity and access to the ecosystem. If there's anything to compare AWS with, it would be Google Cloud Platform, Azure and maybe Rackspace.
Well, I admit, those prices are pretty good. I looked at AWS pricing 3 years ago at a SaaS company with steadily growing traffic, and at the time it didn't make sense. It might now.
That really depends on the type of workloads and requirements you have. But basically you (your team) have to be smart about how AWS or any other cloud provider is used. No cloud buzzword alone will help you avoid cost, availability, and other issues otherwise.
As for the "perks", i.e. discounts, talk to your AWS account rep.
This is honestly one of the few really good and creative ideas I've seen here in a VERY long time.
I really don't mind sharing an audio hash with someone that already has my full banking information/history. Not only that, I would be EAGER to do it if that will increase the security of the current contract I have with them, especially since they found a way to do it that isn't bothersome at all.
By way of comparison, the single-threaded numbers for PageRank on the `twitter_rv` graph on my laptop are 1.5B edges in 5s, which is about 300M edges per second per core.
PageRank isn't doing much other than a load, a few += operations, and a store. The main reason it is slow is because memory is far away, but if you lay out the edges intelligently your rank data is usually in the L3 and then computers go fast.
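For anyone unfamiliar, the per-edge work in one PageRank iteration really is tiny. A sketch (Python for readability; the single-threaded numbers above come from compiled code over a cache-friendly edge ordering, and the names here are illustrative):

```python
# One PageRank iteration over an edge list: per edge, it's a load, a couple of
# arithmetic ops, and a store. Python for readability only; the throughput
# figures quoted above assume compiled code and a cache-friendly edge layout.
def pagerank_iteration(edges, out_degree, rank, damping=0.85):
    """edges: iterable of (src, dst); out_degree and rank: per-vertex lists."""
    n = len(rank)
    new_rank = [(1.0 - damping) / n] * n
    for src, dst in edges:
        new_rank[dst] += damping * rank[src] / out_degree[src]
    return new_rank
```

Lay the edges out so consecutive sources and destinations hit nearby cache lines and `rank[src]` mostly stays in L3, which is exactly the point above.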
Other posters have replied giving a back-of-the-envelope estimate of feasibility, so from the other end:
The PageRank implementation on Dato's GraphLab Create when run on the Web Data Commons Hyperlink Graph (128 billion edges) does 3 billion edges a second on 16 nodes, which is 187 million edges per second per machine.
Given that communication overhead quickly becomes an issue for most of these systems and their graph is dealing with more edges, 28 million edges per second per machine seems quite reasonable.
Let's reframe that as ~1700 instructions per edge.
Seem more plausible?
These are 16-core machines with 10GbE links. Each core is processing maybe 2 million edges per second with highly data-parallel instructions. Ignoring hyperthreading, if these are 3GHz cores (pretty conservative), that means you are burning ~1500 cycles to process an edge. You've got 2 FP ALUs and 2 integer ALUs in each core, not to mention the AGU. You've got prefetchers and completion units designed to handle 4-6 instructions per cycle, and that's not for no reason. You can get a lot of work done in one core with just 1000 cycles, let alone 1500.
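Spelling out that arithmetic:

$$\frac{3\times10^{9}\ \text{cycles/s per core}}{2\times10^{6}\ \text{edges/s per core}} \approx 1500\ \text{cycles per edge}$$

which is a lot of budget for a load, an add, and a store, even with cache misses in the mix.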
So I'm not so sure why that would seem incredible.
Not really on topic, but compare those numbers with the slowness of RAM and cache. A single shared variable that gets updated on one socket then another -- that can cost like 300 cycles, at least on Nehalem.
In terms of throughput, the RAM is great; it's the latency that hurts. Which is why intelligent scheduling with embarrassingly parallel problems like this makes such a huge difference.
Their test machines have 16 cores, and they're running 2.9Gops/core. Plausible, yeah. I'm not used to seeing real things achieve almost 100% efficiency; sounds really good.
With 70 machines & 140 processors, they did PageRank over 6.6 billion edges in 35 minutes. Using 2.85x the machines, you'd expect them to be able to do it in 12.25 minutes. Moore's Law should put each machine at 160x more powerful, and with 300x the edges they are finishing in 19 minutes. So we've improved by 2x over an 11-year-old software stack that we've long since abandoned for better approaches, using a hardware stack that is much more optimized for SIMD computations than the 11-year-old hardware.
I don't really know the PageRank algorithm, but I was really impressed by their results: a trillion of "something" in 3 minutes. I thought that just reading that much data would take a lot longer than that, but apparently not.
Data-parallel problems are essentially the use case that should allow you to get to 100% efficiency; the hardware/software just aren't going to get an easier case than that. In this case, I'd speculate that you wouldn't even need 50% efficiency to get this result.
Like you, I have a lot of programs and extensions installed on my machine. How about you install them all on yours? Come on! Don't be grim! They are fucking awesome, and if you don't use them it's not like it's the end of the world :^)
It's funny, one of the other top comments here is about how many features Firefox is removing. Vital, core stuff, like being able to set custom user agents for specific domains...
I think the real reason many people are angry is that their demographic isn't catered to. I'm part of that demographic, and it does annoy me sometimes. However, unlike Debian/systemd, I find the tradeoff definitely worthwhile.