Congratulations to the Spanner team for becoming part of the Google public cloud!
And for those wondering: this is why Oracle wants billions of dollars from Google for "Java Copyright Infringement." The only growth market for Oracle right now is their hosted database service, and whoops, Google has a better one now.
It will be interesting to see whether Amazon and Microsoft choose to compete with Google on this service. If we get to the point where you can get databases, compute, storage, and connectivity services from all three at equal scale, well, that would be a lot of choice for developers!
> It will be interesting to see whether Amazon and Microsoft choose to compete with Google on this service. If we get to the point where you can get databases, compute, storage, and connectivity services from all three at equal scale, well, that would be a lot of choice for developers!
There are also plenty of choices evolving for developers who aren't looking for hosted solutions (which can sometimes be a showstopper for enterprise on-prem deployments). There's a growing ecosystem of distributed open-source databases to look out for too.
Take Citus, for instance – a Postgres-compatible distributed store which automatically parallelizes normal SQL queries across machines. It's as easy to set up as adding an extension, and people are doing some staggering things in prod with it.
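For anyone curious what that looks like in practice, here's a minimal sketch (assuming a running Citus coordinator, the psycopg2 driver, and made-up table/column names):

    import psycopg2

    # Connect to the Citus coordinator (connection details are placeholders).
    conn = psycopg2.connect("host=coordinator.example.com dbname=app user=app")
    conn.autocommit = True
    cur = conn.cursor()

    # Citus really is "just" a Postgres extension.
    cur.execute("CREATE EXTENSION IF NOT EXISTS citus;")

    # An ordinary table...
    cur.execute("""
        CREATE TABLE events (
            tenant_id bigint NOT NULL,
            event_id  bigint NOT NULL,
            payload   jsonb,
            PRIMARY KEY (tenant_id, event_id)
        );
    """)

    # ...sharded across the worker nodes by tenant_id. Normal SQL against it
    # is parallelized across the machines automatically.
    cur.execute("SELECT create_distributed_table('events', 'tenant_id');")

After that, the table behaves like any other Postgres table from the application's point of view.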
Different audience from BigQuery and Spanner, but no less exciting.
Disclaimer: no professional affiliation, but I love their product and the team.
Craig from Citus here. Thanks for the kind words. We've seen a lot of people scale-out transactional workloads with Citus as well. In particular, we've seen a lot of multi-tenant apps that need to keep scaling beyond a single node when they're running into memory or compute issues.
If you are looking for something that is more Postgres flavored (meaning we're just an extension to it, so you get all the good stuff of Postgres such as JSONB, PostGIS, etc.), then we hope we'd be a good fit. And we run a managed service on top of AWS as well (https://www.citusdata.com/product/cloud), built by the team that built Heroku Postgres. If you're curious about pricing, you can find it at https://www.citusdata.com/pricing/
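Building on the toy sketch upthread, a hedged example of that "good stuff of Postgres" point: JSONB operators work unchanged on a distributed table (same hypothetical events table and cursor as before):

    # Filtering on the distribution column routes the query to one shard;
    # other queries fan out across the workers. JSONB syntax is vanilla Postgres.
    cur.execute("""
        SELECT payload->>'page' AS page, count(*) AS views
        FROM events
        WHERE tenant_id = %s
          AND payload @> '{"type": "view"}'
        GROUP BY 1
        ORDER BY 2 DESC;
    """, (42,))
    for page, views in cur.fetchall():
        print(page, views)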
It would be very interesting to have your product at the $200-300 price point. Currently, the lowest tier starts at almost $2,000 per month for the high-availability version.
I'm not trying to compare on a per-MB level, but it would be nice for smaller-scale workloads.
Helpful feedback. We do have a $99 development plan, but it's not really intended for production workloads. If you only have 10 GB of data, we'd heavily recommend going with something like RDS or Heroku Postgres; at that amount of data, single-node Postgres works great.
I really like your attitude here: when your product would be overkill for a use case, you just recommend a different product. I also really like your blog posts about Postgres; we use them a lot with our developers since they explain a bunch of internals, like the one on how to paginate in Postgres.
> It's pretty much a monopoly, now that Google seems to have officially closed the book on ever supporting PostgreSQL.
Uh, how? I wouldn't be surprised to see a Cloud SQL-like managed Postgres service from Google.
While there's obviously some overlap in the potential market for any relational datastore service, Spanner doesn't really overlap with a cloud Postgres service as much as Cloud SQL does.
I would be. It's been years, and Google has been building out various pieces of infrastructure around MySQL, including Cloud Spanner.
The issue is that the migration path from self-hosted MySQL to Cloud SQL to Spanner is pretty well defined. I don't see PostgreSQL being strategically important or relevant to Google for anything.
If I were a startup deciding on my database, there are a lot fewer compelling reasons to choose PostgreSQL from the point of view of long-term viability.
Hell, I can pretty much do a back-of-the-envelope calculation of how much it will cost me to support 100 million users on MySQL.
Is it safe to think that Evernote and Snapchat, startups that are giant success stories, are hosted on Google's MySQL offerings in some form (maybe even Spanner)?
So Uber, Snapchat, Google, Evernote, and a clear-cut path for upward scale.
I have very little hope for PostgreSQL on Google Cloud.
> It's been years, and Google has been building out various pieces of infrastructure around MySQL, including Cloud Spanner.
What does Cloud Spanner have to do with MySQL? It's neither API nor SQL-dialect compatible with MySQL. If there are MySQL bits used somewhere in the implementation, they are well hidden, and irrelevant to users.
> The issue is that the migration path from self-hosted MySQL to Cloud SQL to Spanner is pretty well defined.
So what? Were there a Cloud SQL-like Postgres offering, the same would be true; Spanner is no closer to MySQL than it is to Postgres. (If anything, its SQL dialect is a little closer to Postgres's than to MySQL's, though not so much that you'll get away without substantial conversion coming from either.)
Right? Apps written to use Postgres aren't just going to be re-written to use Spanner.
If anything, hosted Postgres from Google Cloud will be priced in a way that makes Spanner somewhat more attractive, as a way to get conversions to Spanner in the long run.
There are several managed hosting companies that will run Postgres (and other databases) for you on public clouds: Compose, Aiven, ElephantSQL, Database Labs, Heroku, etc. There are all kinds of price points, and GCP is working on supporting Postgres internally.
How many nodes are you looking to run for $300/month? Unless you have more than 150 GB of data per node, you don't really need a distributed database, which is what Citus is for.
Not at the price point RDS is at. The starting cost of a multi-AZ deployment is lower. For a startup that's just starting out, it's the best and safest choice. Even Heroku, though I'm not very sure about its reliability versus RDS.
Please note that what I'm paying for is availability and reliability, not a database per se.
And I'm not even talking about Aurora. That stuff is going to blow every other price point out of the water, probably with higher reliability to boot.
As someone who works (in part) in the MS SQL field, is it irrational to be a bit worried about the effects some of these platform advances might have on one's career?
For example, being an MSSQL performance-tuning expert requires years of experience and probably pays very well, but just the other day I read an anecdote where someone switched a large BI database to columnar indexes, allowing them to replace very complex queries (extreme manual tuning to achieve acceptable performance) with standard SQL at comparable performance.
How long until the scale, pricing, and now transparent and full(?) SQL compliance offered by these cloud platforms start to make traditional RDBMS products a niche?
Microsoft has a history of sales and support that will give them a certain longevity. They also have less "brand hate" than Oracle. I don't think MSSQL is going to be like Sybase any time soon, but I probably wouldn't focus on that stack starting now if you are into the startup or California scene. For many places in the USA, MS is the way to go.
EDIT: Also, most DB users don't need global-scale databases.
Not sure if this is what you intended, but you are aware that SQL Server was developed in partnership with Sybase until the mid-90s (when they were substantially the same product), right?
Neat history! I was not aware of that. I meant to imply that Sybase's RDBMS offering is no longer a big player, and I would not want to bet the future of my career on being an expert in it.
I'm not saying MS is going to abandon the platform, but to me it seems entirely possible that "very soon" these cloud platforms, using cheap, shared, commodity hardware, might be so affordable and technically capable that they'll be a no-brainer choice unless you have a very good reason to use MSSQL (kind of the opposite of today: on-prem by default, cloud if necessary).
> As someone who works (in part) in the MS SQL field, is it irrational to be a bit worried about the effects some of these platform advances might have on one's career?
There's a reason why regulated (including self-regulated) professions have continuing education requirements; progress happens and you become obsolete if you don't keep up with it.
Just because tech isn't regulated doesn't mean it's any more sensible to expect to remain valuable without keeping up with progress in the field.
That being said, MSSQL experts will likely have good-paying opportunities for quite a while, for the same reason that's the case for any well-established enterprise technology: lots of systems are going to be around using it long after it has become distressingly uncool to spend time learning.
I write software as a developer. This is how I earn my livelihood.
Four years ago, I determined that while development work might seem to be near the top of the food chain, there will come a point where my work is replaced by AIs.
This is not so different from how word processors replaced the specialist job of typesetters. Word processors make "good enough" typesetting. You can still find typesetters practicing their craft; the rest of us use word processors and don't even think about it.
At the time, I was learning to put the Buddhist ideals of emptiness and impermanence into practice, and to become more emotionally aware: the _main_ reason I had thought I would never be replaced by an AI writing software had more to do with wishful thinking and attachment than with any clear-sighted look at things.
I also made a decision to work on the technologies to accelerate this. Rather than becoming intoxicated by the worry, anxiety, and existential anguish, I decided to try to face it. Fears are inherently irrational, but just because they are irrational does not mean they are not what you are experiencing. Fears are not so easily banished by labeling them as irrational. Denial is a form of willful ignorance.
Now, having said all that, whether our tech base will come to that, who can say?
Since then, I have been tracking things like:
Viv, a chat assistant that can write its own queries
DeepMind's demonstration of creating a Turing-complete machine with deep learning using a memory module.
I watched a tech enthusiast write a chat bot. He does not write software professionally. Talking with him over the months as he tinkered with it in his spare time, I realized that in the future, you won't have as many software engineers writing code; you will learn how to _train_ AIs once they become sufficiently accessible to the masses. Skills in coaching, negotiation, and management become more important than some of the fundamental skills supporting software engineering. And like typesetting, I can see development work being pushed down the eco-ladder.
It's not surprising to me to see that Wired article about coding becoming blue-collar work. And even that will eventually be pushed down further.
Google's site-reliability engineering book, branding, and approach aren't surprising to me either. I have done sysadmin work in the past, and I can already see traditional, manual sysadmin work being replaced.
It's easy to get nihilistic about this, but that isn't my point here either. I know the human potential is incredible, but I think we have to let go of our self-serving narratives first.
I find this fascinating. There are a few ideas that are at play. One is the march of progress seeking to automate everything. The rationale of automation is to improve productivity. But what happens when everything is automatic? I don't see a corollary being played out at the moment. There are a small number of people reaping the benefits, and huge swathes of the population being marginalised and disenfranchised as a result.
The second idea that interests me is this idea of very high technology. It is built upon layer after layer of very clever tech, year after year, and I wonder how long it would take to start again from scratch if some disaster rendered a large part of one of those layers unusable.
For instance, if you were on a desert island, could you (would you want to?) build some piece of tech? An electric generator would be useful, perhaps. How long would it take to build? You'd need knowledge, raw materials, plant, fuel, etc. It's not an easy solve. And that's way down the tech stack, before you even start talking about AIs. I suppose what I'm saying is that the AI layer is based on such high tech that it is inherently fragile, because it is so hard to do.
> There are a few ideas that are at play. One is the march of progress seeking to automate everything. The rationale of automation is to improve productivity. But what happens when everything is automatic? I don't see a corollary being played out at the moment.
I don't know! :-D
I don't know what society would look like from a purely technological point of view. From a spiritualist point of view, though, it could either go very well or very badly. When everything is automated, would people have enough time and space to really start asking the really big questions? Or would it accelerate and intensify existential anguish?
> There are a small number of people reaping the benefits, and huge swathes of the population being marginalised and disenfranchised as a result.
Yeah. Arguably, this has already happened.
> The second idea that interests me is this idea of very high technology. It is built upon layer after layer of very clever tech, year after year, and I wonder how long it would take to start again from scratch if some disaster rendered a large part of one of those layers unusable.
The stuff of sci-fi :-D Among them, alt-history novels (what happens when someone drops into a lower-tech era; you'd have to start from 0 ... literally, 0, as in Arabic numerals).
Open Source Ecology is trying to preserve some of this tech base. I find their aims awesome, though I am not sure how effective it is.
The flip side is what's being said from well outside the techno-sphere (for example, by shamans and mystics): the perspective that the further evolution of human consciousness will, at some point, no longer require technology or artifacts. Technology is seen as the last crutch. The collapse of a high-technic civilization then sets the stage for the removal of that crutch, and humans learn to stand on their own two feet (so to speak).
Anything that you can easily specify and describe in detail can be automated. In practice, the world is filled with computers that need programming to cope with the ever-changing, chaotic actions of our users. Personally, I'm long on software developers, despite being very excited by how AI has blown up lately.
Agree. I remember someone advising me against getting a degree in computer science back in 2000. The argument was -- look at MS Word. What else would you want to add to it? It already has more features than you need.
That's not a fair counter to the point made above; however, I believe we will find the next big challenge for software to solve as soon as traditional problems are commoditized/automated and considered solved. Also, just knowing how to code is not going to be enough. You must complement it with domain expertise to solve challenging, unsolved real-world problems.
I would say: quite a long time, but the scale/importance of traditional deployments will be going down throughout. If you're looking around the market regularly, you should have ample time to notice that the gigs are not what they used to be, and that maybe it's time to change areas.
People are abusing databases like MSSQL to do things they may not be good at. Large-scale analytics is an example where databases like Infobright give amazing performance.
It'll be interesting to see how well customers adopt this. When I was at one of the two companies you mentioned above, we tried adding global snapshots (a la TrueTime, which is the real innovation in Spanner, not the clocks) and demoed it to our DBAs/MVPs. They didn't understand what on earth was going on. They wanted something that worked with existing clients. We just gave them classic 2PC and they went home happy. I think that's the reason Oracle will keep on chugging: there just aren't that many workloads that need this sort of scale. It is real cool technology, though, and we always used to wonder why Google wasn't offering Spanner as a service.
As a bit of a veteran in the database industry, I concur (at least about the impact on Oracle's database business). There is a lot of pent-up demand for anything that offers distributed consistency.
We've been seeing this demand at Fauna. FaunaDB offers distributed consistency, based on Raft and the Calvin protocol instead of depending on specific networking and clock hardware. We've seen a big part of our appeal is the ability to run FaunaDB across multiple cloud services.
The serverless cloud is pay-as-you-go. There is no minimum spend, unlike Spanner's $1000 per month (apparently). And it's cheaper than operating any open source on cloud hardware.
On-premises is licensed by core.
We have a developer edition you can use on your local machine, but we don't currently have plans to open source FaunaDB itself.
While I'm not familiar with Spanner's inner workings, I would guess that they recommend 3 instances for quorum establishment in case a region becomes unreachable. If that's the case, using fewer than 3 instances could cause major problems.
Since Spanner currently only supports single-region deployments, it clearly isn't recommended as protection against a region becoming unavailable.
It may be recommended as protection against an availability issue on an instance, though, which is, after all, a big reason why you'd want a distributed DB in production.
I suppose the loss of a region doesn't apply (yet), but yes, the quorum requirement would still apply even if you only had instances in a single region.
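For anyone wondering why three specifically: with majority quorums, n replicas survive floor((n-1)/2) failures, so three is the smallest count that tolerates any failure at all. A toy illustration (not Spanner's actual implementation):

    def majority(n: int) -> int:
        """Votes needed for a majority quorum of n replicas."""
        return n // 2 + 1

    def tolerated_failures(n: int) -> int:
        """Replicas that can be lost while a majority is still reachable."""
        return (n - 1) // 2

    for n in (1, 2, 3, 4, 5):
        print(f"{n} replicas: quorum={majority(n)}, tolerates {tolerated_failures(n)} failure(s)")

    # 2 replicas: quorum=2, tolerates 0 -- losing either node blocks writes.
    # 3 replicas: quorum=2, tolerates 1 -- the minimum for real fault tolerance.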
The movement from ownership to renting on the web is absolutely terrifying to me. Within the span of a few years, we've gone from owning our technology to renting it from a few big players for monthly fees we cannot completely predict or control.
The advantages of owning your own hardware will never go away, but soon this will be made quite intentionally impossible as the big players coalesce and continue building their walled gardens.
This is already happening. All the big players own their hardware and rent it out to everyone else, while trying to convince everyone it's not worth owning your own hardware at the same time.
These companies have already begun closing off server platforms by developing custom hardware and software systems that cannot be bought for any price, only rented. These systems represent a new breed of technology with unbreakable vendor lock in.
These same companies compete with each other and countless other companies across the space. Take, for example, a start-up that wants to run its own app store. Google, Amazon, and Microsoft all run app stores. Where will this company go for cloud services? Their only big-name options are to host their software on the hardware of a direct competitor. Their host has full visibility into how their system works, and control over the pricing and reliability of their machines.
It's laughable to think their "cloud partner" will give them any chance to compete if they enter the same market.
We've seen UEFI BIOS and un-unlockable mobiles enter the market in droves over the last few years. A lot of new PCs can't run anything except Windows. A lot of new phones can only run the carrier's version of Android. We have all these general-purpose CPUs that can no longer run general-purpose programs because of "security," and a lot of lobbyists pushing to make it actually illegal to run your own software on them via "anti-tampering" laws, again for "security." Soon the big guys (the same companies, MS and Google) will make it impossible to run your own software on any reasonably inexpensive device, and the walled market will be complete.
Mark my words: I've never seen an industry with a couple of big players where growth and innovation doesn't eventually turn into collusion, higher prices, and market stagnation. Once MS, Google, and Amazon have their slice of the pie and have killed off everyone else, we will see the death of general-purpose computers and mobile devices. Everything you buy will be an "Android computer," a "Windows computer," or an "Apple computer." Anything general-purpose will be massively more expensive, because individual companies can't get the kind of volume discounts of the giant behemoths that increasingly control large swaths of the world's computing power. We've already seen the endgame, with Amazon trialing an "on premises" version of their compute platform, which is basically a super locked-down server that you can't buy, only rent endlessly. The future of on-premises will be a cloud in a black box if these companies have anything to do with it. Why? Because once they've got you locked in, it makes no sense to sell you anything for keeps. Why keep improving their product so you'll buy the new version when they can just make it incompatible with everything else and force you to rent it forever, at whatever price they feel like charging?
One day, running your own servers will be like running your own ISP: massively impractical, because the free market has been manipulated to the point that it effectively no longer exists.
> One day, running your own servers will be like running your own ISP: massively impractical, because the free market has been manipulated to the point that it effectively no longer exists.
What? People use cloud computing because it already is massively impractical to run your own servers. Hardware is hard to run and scale on your own, and data centers experience economies of scale. This principle is seen everywhere and can hardly be viewed as controversial: Walmart, for instance, can sell things at a really low price because of the sheer volume of its sales. Data centers are no different.
As someone who cares about offering the best possible, most reliable user experience, I see cloud computing as the next logical step from bare-metal on-prem servers. When your system experiences load beyond what it can handle, a properly designed app with independently scaling microservices scales horizontally.
Even if you had a state-of-the-art microservice architecture running on a Kubernetes cluster on your own hardware, you still wouldn't be able to source disk/CPU fast enough if your service happens to experience load beyond what you provisioned.
And there is the rub: buying your own hardware costs money, and no one wants to buy hardware they may never use. Another advantage of cloud computing.
You are seeing the peak of the free market right now because of cloud computing, which enables people with little upfront cash to form real internet businesses and scale massively.
You think a game like Pokemon Go can exist and do the release they did without cloud computing?
"Even if you had the state of the art microservice architecture running on a kubernetes cluster on your own hardware, you still wouldn't be able to source disk/CPU fast enough if your service happens to experience loads beyond what you provisioned." That basically means you never planned. As everyone moves to cloud what makes you think AWS, Azure wont have same issue. If entire region is down do you think other regions can handle the load. If you think so you're kidding yourself. Unless you have business where you dont know your peak number then cloud does not matter.
You can plan all you'd like; failures happen not necessarily due to poor planning but because, in real life, shit happens. Pokemon Go, for instance, experienced something like 50x the traffic they planned for.
Secondly, software companies like Microsoft, Google and IBM might know a thing or two about running data centers. Due to economies of scale, these companies are inherently in a better position to supply hardware at scale.
> If an entire region is down, do you think the other regions can handle the load? If you think so, you're kidding yourself.
Netflix routinely does just this to test the resilience of their systems. They pick a random AWS region and evacuate it. All the traffic is proxied to the other regions, and eventually, via DNS, the traffic is routed entirely to the surviving regions. No interruption of service is experienced by users.
Here's a visualization of Netflix simulating a failure on the US-east-1 region and failing over to US-west-1/US-west-2
The top right node is the one that fails. As the error rate climbs, traffic starts getting proxied over to the surviving nodes, until a DNS switch redirects all traffic to the surviving nodes. Netflix does this monthly, in production. They also run https://github.com/Netflix/SimianArmy on production.
The cloud enables fault tolerance, resiliency and graceful degradation.
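For the curious, the core idea of that kind of chaos testing fits in a few lines. A toy sketch (instance names and topology invented; the real tooling is in the SimianArmy repo linked above):

    import random

    # Hypothetical inventory of app instances per region.
    regions = {
        "us-east-1": ["i-a1", "i-a2", "i-a3"],
        "us-west-1": ["i-b1", "i-b2"],
        "us-west-2": ["i-c1", "i-c2"],
    }

    def evacuate(region: str) -> None:
        """Simulate losing a region and shifting its traffic to survivors."""
        lost = regions.pop(region)
        print(f"{region} down, lost {len(lost)} instances")
        survivors = [i for pool in regions.values() for i in pool]
        assert survivors, "no capacity left anywhere!"
        # A real failover would update DNS weights / load-balancer pools here.
        print(f"traffic now served by {len(survivors)} instances in {list(regions)}")

    evacuate(random.choice(list(regions)))

Run it regularly in production-like conditions and you find the weak spots before a real outage does.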
I think you missed the point: Netflix evacuating a region is not the same thing as that region failing. If a whole region goes down, AWS's total capacity just took a major hit, and unless they have obscenely over-provisioned (they haven't), shit is going to hit the fan when people start spinning up stuff in the remaining regions to make up for the loss.
Have you run your own servers in a colo? I've done it myself.
One person, with maybe 3 hours a week of time investment after a few weeks of setup and hardware purchases. Using containers, I can move between the cloud and my own servers seamlessly, as long as I never bite the golden apple and use any of the cloud's walled-garden "services" like S3. If I need more power, I can spin up some temporary servers at any cloud provider in a few hours. For me the cloud is a nice thing, because I don't use too much of it. If AWS disappeared tomorrow it would be a mild inconvenience, not devastating like it would be for many newer unicorns.
Go ahead and try to use the cloud you're paying for as a CDN or DDoS shield, or anything amounting to a bastion of free speech. You'll quickly find that your cloud provider doesn't like you using all the bandwidth and CPU you pay for, and they don't like running your servers when they disagree with your views. They quietly oversell everything, pulling the same crap as consumer ISPs, who sell you a 100 Mbps line and punish you if you use more than 10 of that on average. That's the main reason the cloud is so cheap.
Hardware is cheap, colos are cheap, and software is largely easy to manage. The economies of scale they enjoy come from vendor lock-in and overselling more than anything else.
Is it really that hard to double the number of servers you own every few weeks? No! If you're using containers or managed KVM, you can mirror nodes basically for free over the network as soon as the Ethernet is plugged in. Your time amounts to what it takes to put the thing in a rack, plug in the Ethernet, and hit the "on" button. Everybody in SV land thinks you have to use the cloud to "scale massively," but they forget that all of today's technology behemoths were built years ago, when the cloud didn't exist. Oh yeah, they all still run all of their own hardware too, and have from the early days. Using their model as a template, you should own every single server you use and start selling your excess capacity once you get big enough.
Did you ever read about how Netflix tried to run their own hardware but can't, because they have so much data in AWS that it would basically bankrupt them to extract it? Look at how these cost models work. Usually inbound bandwidth is extremely cheap or free, but outbound is massively more expensive than a dedicated line at a datacenter: 50-100 times the cost if you're saturating that line 24/7. The removal fees from a managed store like S3 or Glacier are even more ludicrous. The cloud is like crack, and as soon as you start using it more than a few times a year, you get locked in and can't leave without spending massive $$$. Usually companies figure out this shell game once they're large enough, but by then it's far too late to do anything about it.
Why are they marketing these things so heavily to startups? Because lock-in is how they make their money. They make little or nothing on pure compute, but since you don't have low-level hardware access, they can charge whatever the hell they want for things like extra IPs, DDoS protection, DC-to-DC peering, load balancing, and auto-scaling. You give massive discounts to new players using these systems, and inevitably some of them will become the next Uber or Netflix. Then you are free to charge whatever exorbitant rates you please once it's so impractical to move that it would require a major redesign of the business.
I see it a lot like franchising. By building on Amazon's cloud services you become "Uber company brought to you by Amazon". Like franchising, your upside is limited because any owner with a significant share of total franchises will begin to put pressure on the service owner itself.
To be honest, you sound like a conspiracy nut hell-bent on hating the cloud. Maybe you should take a deep breath and open up to the possibility that the cloud is actually a good thing, and that cloud providers aren't the Illuminati trying to "lock you in." Well, maybe they are. Of course every cloud provider wants you to use their services.
You can architect your system in a way that it'll run on any cloud provider. All the major cloud providers support Kubernetes for orchestration.
To be honest, I don't think you know what you're talking about. You should refrain from posting uninformed opinions on Hacker News, especially from a throwaway.
> Did you ever read about how Netflix tried to run their own hardware but can't, because they have so much data in AWS that it would basically bankrupt them to extract it?
Where did you read this? You can have Amazon send you a truck full of hard drives. I doubt it costs more than Netflix can afford.
Never mind, I misremembered the story I read about them. They moved the main site to AWS, with the huge omission of their movie-streaming system. Their own Open Connect servers are far cheaper to use for that because of massive AWS outbound data costs.
Also, the truck is for data in, not data out. Getting data out of AWS is far more expensive than putting it in. That's the lock in.
You did not ever own your own globally consistent, massively scalable, replicated database. The fact that you can now rent one by the hour is strictly an improvement for you, if you need that kind of thing.
Cassandra also does that, without requiring the "magic" of a system you can only get from a single vendor and never buy. In the same period that these walled gardens have come up, free software has grown to fill the gaps.
Spanner is unique in a lot of ways, but it still trades off speed for consistency.
The most unique thing about Spanner is its use of globally synchronized clock timestamps to guarantee "comes before" consistency without the need to actually synchronize everything.
There is nothing stopping startups and open-source developers from building the same thing in a few years. The missing ingredient is highly stable GPS and local time sources, which will hopefully be available on cloud instances sometime soon. This is a new piece of hardware, so it will be interesting to see whether cloud providers make one available or use the opportunity to sell their own branded "service" version you can't buy. Unfortunately, I think we'll see the latter far before the former, if the former ever arrives at all. Without a highly stable time source, doing what Spanner does is completely impossible.
Yes, Spanner is special right now, but that's even more reason not to go near it. Google has a complete monopoly on it: the strongest vendor lock-in you can possibly have.
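For those wondering what the clock trick actually buys: the Spanner paper's TrueTime API returns an uncertainty interval rather than a point, and a commit waits out that uncertainty before becoming visible, so timestamp order matches real-time order. A toy sketch of the idea (epsilon and API invented for illustration):

    import time

    EPSILON = 0.007  # assumed worst-case clock uncertainty (~7 ms)

    def tt_now():
        """TrueTime-style interval: true time lies within [earliest, latest]."""
        t = time.time()
        return (t - EPSILON, t + EPSILON)

    def commit(txn: str) -> None:
        """Commit-wait: assign a timestamp, then wait until it is surely past."""
        commit_ts = tt_now()[1]          # latest possible 'now'
        while tt_now()[0] < commit_ts:   # wait until earliest 'now' passes it
            time.sleep(0.001)
        # Only now is the commit made visible: any transaction that starts
        # after this point, anywhere, gets a strictly larger timestamp, so
        # real-time "happened before" implies timestamp order.
        print(f"{txn} committed at {commit_ts:.6f}")

    commit("txn-1")

The smaller the clock uncertainty, the shorter the wait, which is why the dedicated GPS/atomic hardware matters.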
Only "new" in the sense that it is currently not commonly offered, the devices themselves have been available for ages. (If you are a large enough customer you apparently can get at least some colo-facilities to provide you with the roof-access and cabling needed for the antennas). If cloud providers make precise time available I don't see much potential for locking you in with their specific way of providing it, as long as it ends up as precise system time in some way.
I'm saying I doubt they will ever offer it, precisely because it would conflict with their paid offerings. The fact that it requires dedicated hardware is a great excuse not to give your customers the option.
I know GPS time sources have been available forever, but a fault-tolerant database needs a backup. The US GPS is incredibly reliable, but there have been multiple issues with both GLONASS and Galileo.
It sounds like Google has an additional time source making this possible, probably a highly miniaturized atomic clock, possibly on a single chip. There's no way they're running on GPS alone.
Yes, they clearly say that they use atomic clocks in addition, but those are commercially available as well: an atomic clock for short- to mid-term frequency stability, and GPS to keep it synced to global time. E.g., many mobile-phone base stations contain just such a setup, and the data-center versions should fit in a few rack units.
A system built on top of it? Possibly, but that's the trade-off if you don't want to pay for / be locked into somebody else running it. For just the timing stuff: not really. Of course it adds complexity, but these things are established and should be quite stable.
The absolute level of computation available isn't changing at the consumer level. What's happening for the next decade is the destruction of businesses hosting their own IT infrastructure and moving it to a couple of core centers.
So, the computational "Gini index" is increasing, but no one is being thrown into computational poverty.
>What's happening for the next decade is the destruction of businesses hosting their own IT infrastructure and moving it to a couple of core centers.
Yes, and this will be disadvantageous in the long run for people who want to run things themselves. Ultimately, companies like AMD/Intel go where the big money is. As things centralize further and further, there will only be 3 customers they care about in the server market.
> The absolute level of computation available isn't changing at the consumer level.
Maybe not, but consumers increasingly use centralized computation resources. I would guess that by now most applications used by consumers run in their web browser, such as Facebook.
The parent comment doesn't seem to specify "consumer level" and the loss of businesses having their own infrastructure is equally troubling. Everyone is putting a lot of eggs in a very small number of baskets.
I would disagree about the character of the situation. This isn't about people putting eggs in a few baskets, it's that it's more efficient to have centralized chicken coops instead of every family in the world owning their own chickens.
Now, you could play with that analogy further and see some issues as well, but I don't think the issue here is centralized failure; all these data centers/"clouds" are at least good at staying up. The Cloud is about businesses focusing on core business and not supporting functions.
[Disclosure, I work on the Google Cloud team, I'm biased]
>focusing on core business and not supporting functions.
Having a devops team with the necessary expertise in Google Cloud or AWS is still a supporting function. You've just traded one skill (managing physical servers) for another (managing proprietary virtual resources).
But hopefully a smaller team, and one that keeps diminishing in size over the years if the trend continues. At least for the same level of service (in availability, security, etc.).
Let's look at your metaphor. It's more efficient for raising a large overall number of chickens. It's less efficient when I need fast access to a single egg.
Hence we get caching. There's the farms, then the inbound warehouses, then the distribution centers, then the grocer, then our refrigerators by the dozen or dozen and a half. When your local cache is empty of eggs, though, it requires a trip back out to the grocer to get an egg even if you need nothing else that trip. Then you generally have to buy at least half a dozen if not a dozen or more eggs just to get the one you wanted.
If I have my own couple of hens, I can go out into the yard and get an egg. If that's the whole of my fetch list, it's much more efficient for this single egg to have the hens laying right out back.
This whole few-baskets metaphor breaks down from another point of view, though, when we consider that by the very nature of using a globally distributed hosted service, we're actually eliminating a single-basket problem. Yes, there's not much choice among just Google, Amazon, and Microsoft. (That they are the only options is a bit of a strawman, but let's grant it legs.) However, putting just your own employees in charge of all your infrastructure, in just your own datacenter(s), on just PostgreSQL or just MySQL, is another single-basket problem. Spreading it out so that someone else manages the hardware and the service, and replicating your data widely within that service, is from that point of view more baskets. You get more datacenter baskets, more employee baskets, and more software baskets. Using standard SQL means you can move among compliant software later, too, so you're not as tied to those baskets.
Now, back to your coop analogy. What's stopping me from having my application talk to Cloud Spanner and a local database proxy (or a work queue that sits between the app and the DB, or whichever) so I can use Google's reliability for transactions and my local cached replica for query speed when querying older data? Why can't I keep a few eggs around?
Also, why would I be scared of Google or Amazon "having my data"? Why would I put sensitive data into my own database in plaintext and then replicate it among multiple datacenters that way?
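On the "keep a few eggs around" idea, a hedged sketch of the pattern (all hypothetical glue code, not a real Spanner client API): writes and consistent reads go to the authoritative store, while older data is served from a local cache:

    import time

    class CachedStore:
        """Read-through cache in front of an authoritative store (e.g. Spanner)."""

        def __init__(self, authoritative, ttl_seconds=60):
            self.db = authoritative   # source of truth, assumed write()/read()
            self.ttl = ttl_seconds
            self.cache = {}           # key -> (value, fetched_at)

        def write(self, key, value):
            self.db.write(key, value)   # writes always hit the source of truth
            self.cache.pop(key, None)   # drop the stale local copy

        def read(self, key, allow_stale=True):
            entry = self.cache.get(key)
            if allow_stale and entry and time.time() - entry[1] < self.ttl:
                return entry[0]         # fast local read, possibly slightly stale
            value = self.db.read(key)   # consistent read from the source
            self.cache[key] = (value, time.time())
            return value

Callers that need freshness pass allow_stale=False; everything else gets egg-from-the-yard latency.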
> it's that it's more efficient to have centralized chicken coops instead of every family in the world owning their own chickens.
Only if the owner of the chicken-coop has everyone else's best interests in mind. Protip: They don't.
The Cloud isn't about efficiency; it's about data control. Getting people's systems and data into Google/AWS/etc. helps with data mining, vendor lock-in, etc. Oftentimes that can be efficient, but often it isn't.
That's like being sad about the emergence of banks, because everybody's money is being kept in a small number of vaults instead of under each one's mattress.
A good point, but there is an upside and a downside to everything. The centralization of IT does impact civil liberties and possibly innovation: unlike with FOSS and other local systems, aspiring hackers can't tinker with Facebook's code and see how it works.
> Aspiring hackers couldn't tinker with MS Word 2000 code either.
They could tinker with the binaries, something many did with game binaries. But your point is well taken; open source is also very valuable to innovation.
Web apps were also very useful for learning JS and browser APIs, before everybody started minifying and obfuscating their code. I learned how to write a rich-text editor just by looking at the code of Hotmail's email editor.
Fair enough, but think of that free and open stack: (layer 1), Ethernet, IP, TCP/UDP, HTTP/SMTP/DNS/etc, HTML/JavaScript. How many cut their teeth on those?
The apps on top, Facebook, Snapchat, etc., are not so open and much of what they do is out of reach from the user.
Also, I meant to add above: People could tinker with data files (e.g., Word docs), configurations, etc. The whole system was local and accessible. You could write local code, such as VB or for Windows, that integrated with those systems.
That strategy resulted in the Great Depression and, later, the 2008 crisis. The damage was so high that the country had to be rescued by the federal government. So banking is a decent example of how such consolidation into private hands can go wrong. Now we just apply that to IT services and data.
That's a ridiculous argument: Banks started being a thing at the end of the Middle Ages. The Great Depression and the Great Recession were not caused by banks emerging, nor by people putting their savings in them.
Not emerging. Just being themselves, with all their schemes and an economy dependent on them. A distrust of banks and their schemes at a national level might have reduced their ability to cause those problems. On top of the smaller stuff, such as delaying deposits or withdrawing things for bogus reasons.
Putting your savings under the mattress instead of in a bank account wouldn't have prevented the Great Recession. It was caused by risky mortgages (debts, not savings) being sold as low-risk from bank to bank, and then defaulting.
Putting your savings under the mattress instead of in a bank account wouldn't have prevented the Great Depression either.
The only thing it would have accomplished is making your savings easier to steal.
Storing gold or other valuables instead of Federal Reserve notes for sale or bartering wouldn't have helped during the Great Depression? I haven't heard the angle that there was nothing to barter with, on top of worthless dollars.
We've already been through this. People eventually abandoned mainframes for everything they could. Many of the current customers are interested in better solutions but are stuck due to lock-in from piles of COBOL, etc.
> The centralization of computation is likely not a good thing in the long run.
I agree. It only makes sense if you need special data for statistics, AI training, etc.
In all other cases, the classic way of programming on a PC or notebook is smarter. If you do everything in the cloud, what happens when you lose your Internet connection? I've had that experience several times over the last few years.
* Computers are much more stable than they used to be
* Much of the world lives in places with less stable connections
* The most expensive spec in an Internet connection is availability. You can get a low-end 15 Mbps connection with no availability guarantee for $40/month; a T1 is one-tenth the speed and costs 10 times as much (all numbers are rough estimates).
Aurora is very cool, but it won't help you much after you've vertically scaled your master and still need more write capacity. With Cloud Spanner you get horizontal write scalability out of the box. Critical difference.
Per their pricing page[1], it looks like the largest instance available is a "db.r3.8xlarge," which is a special naming of the "r3.8xlarge" instance type[2]: 32 CPUs and 244 GB of memory.
That's a hell of a lot of capacity to exhaust, especially if you're using read replicas to reduce the master to only/mostly write workloads. Obviously it's possible to use more than this, but the "sheer scale" argument is a bit of a flat one.
You can disagree on that if you'd like, but note that I explicitly acknowledged the possibility of exceeding these limits. In my opinion, for most cases/workloads, it's highly unlikely that you will and designing for that from the outset is a waste of time and resources.
Yes, Aurora has a single write master, though it does have automatic write failover -- i.e., if the Aurora primary dies, one of your read replicas is promoted to primary and reads/writes are directed to the new instance. That does constrain your primary's capabilities to the largest instance size (currently a db.r3.8xlarge).
I don't have a good idea what the upper limit is for an Aurora database setup.
AWS uses heartbeats to detect liveness. If x heartbeats fail, the failover procedure starts. Generally 10s to 5 minutes. In practice (for me), failover has taken less than 15s.
Aurora's read replicas share the underlying storage that the primary uses, so AWS claims that there's no data loss on failover. They also claim -- and I've never heard anyone say they were wrong -- that Aurora failovers take less than a minute. So the pain should be limited to under a minute of lost writes, which most applications can handle (with an error). It can still be painful depending on the application.
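Which means the client side should treat writes during a failover as retryable. A hedged sketch of the usual pattern (the do_write callable and error type are placeholders for whatever your driver raises; writes should be idempotent for this to be safe):

    import random
    import time

    class TransientDBError(Exception):
        """Stand-in for your driver's connection/failover errors."""

    def write_with_retry(do_write, attempts=6, base_delay=0.5):
        """Retry an idempotent write through a sub-minute failover."""
        for attempt in range(attempts):
            try:
                return do_write()
            except TransientDBError:
                if attempt == attempts - 1:
                    raise  # out of patience; surface the error
                # Exponential backoff with jitter; worst case ~15s of waiting,
                # on the order of one Aurora failover.
                time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))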
That's vague. AWS also powers huge websites and Amazon is recommending Aurora as the "default choice" for most workloads.[1] There are certainly significant architectural differences but I would say we can definitely make a direct practical comparison.
And their needs are reasonably complex. They use machine learning and big-data analytics to generate the list of videos you should be watching. In order for those to work, they need to capture a whole raft of end-user metrics, e.g., at what point you paused video X.
While Aurora doesn't provide true horizontal scalability, the same-node scalability seems so strong it might allow many companies to stay single-node for quite a while.
It is not close to equivalent. But I do want to get a better feel for whether Google really has figured out how to do the basically impossible. I want to see if this truly scales horizontally, but if it does, then competitors had better hope for a much more detailed paper :)
It's equivalent, with different (unknown) constraints. Aurora is specifically for scaling workloads in the same way. You can say it's horizontal (machine) over vertical (resource) but it's all a matter of accounting.
The big no-no is Spanner's price point. I will stick with Aurora, scaling based on the traffic I use, over pricey time slices.
You would have to have quite a load to justify switching from the cheaper du jour solutions right now (AWS). Relying on the few that do is a risk.
It seems like Oracle could have a play here, working on adapting cloud infrastructure tools for managing on-premises data centers. That keeps them in play for customers who can't put their data in the cloud and those that haven't because they're already Oracle customers.
This pendulum swings, and we're pretty near the apex now. A little work on ergonomics and these tools could be turn-key, and back we go to decentralized hardware.
> the only growth market for Oracle right now is their hosted database service
This is not true. Oracle is far more than a database company nowadays in the same way that Microsoft is more than Windows. Oracle has been acquiring high-growth startups at a significant rate.
Which has their 'cloud services' doubling their contribution to revenue year over year and licenses losing 50% of their contribution to revenue year over year.
I didn't downvote you. It is important to note, though, that the Spanner project isn't related to MySQL; there is some discussion of that in the stories around Spanner. It would nominally compete directly with Oracle's flagship database product.
It's actually hard to beat MySQL for a lot of things. I was skeptical about this when I joined Google, but as an SRE on the MySQL team around this time, I gained a lot of respect for it.
That is an interesting way to look at it. I have wrestled with MDB[1] while working at Google; it was a ginormous MySQL database (possibly one of the world's largest). And I would characterize Spanner's relationship this way: "If you think you are actually going to build an ACID database that scales, then make sure you can support the MySQL API that MDB uses, and we'll see just how well it scales."
I don't know if anyone put it to them that way but as Spanner was just getting started when I left I know that one of its success criteria was to be able to be a scalable replacement for MDB. Given the white paper and other papers on their results, I'm sure it managed that requirement.
[1] MDB, Machine Data Base, used throughout the org but especially in Platforms and SRE to keep track of machines and their parts.