To all the people saying that this is nothing new: to me the key point here is that the author of this article, Bert Hubert, isn't your average activist/purist Linux hacker. He's at least somewhat influential in government circles, in that he has held various government IT consulting positions and is listened to by lots of government IT workers. He's one of the few people I know of who deeply understands how tech works and also deeply understands how government works (at least the Dutch government). He's also a frequent guest on radio and TV shows and the like.
I'm hoping that this article acts as a catalyst for the Dutch government, and other EU governments, to move everything away from American clouds.
I certainly don't blame the activists for governments refusing to listen, but this threat was clear at least 15 years ago and I would expect someone as knowledgeable as Bert Hubert to have perceived it at the time.
Is the idea that they're more ready to listen and take action because of recent executive changes in the US, even though the cost of doing so has gone up by 100-1000x and the possibility of a joint retaliation from US tech giants and the government working in concert is now much higher?
I hope you're right, but one of the rough dislocations of the present moment is the disconnect between how Europeans conceive of their sovereignty and the reality of their economic, military, and cultural fragility in their relationship with the US and US companies.
No amount of grandstanding rhetoric and appeals to "courage" changes that if there are any serious economic consequences (caused by US/corporate coercion or otherwise), the government would likely fall and be replaced by someone more amenable to the status quo. What feels like a small price to pay for someone focused on security long-term may be an unacceptable price for someone focused on short-term outcomes in their political fortunes.
> Is the idea that they're more ready to listen and take action because of recent executive changes in the US, even though the cost of doing so has gone up by 100-1000x and the possibility of a joint retaliation from US tech giants and the government working in concert is now much higher?
I believe so, yes. I don't think Americans realize how profoundly the last few weeks have affected European political thought. It'll take a while before you see concrete changes. Europe is like a mammoth tanker, slow to change direction, but practically unstoppable. I believe that it's more likely now than ever before for European governments and businesses to sever their dependency on American technology. Lots of comments in this thread explain how hard this is, how big the feature gap between, say, AWS and OVH is, but as a European entrepreneur I gotta say, this looks a lot more like an opportunity than a problem to me.
Preferably all in the same place and at least somewhat integrated with each other. I'm not spelling out logging, auditing, IaC and other supplementary features but rather core functionality.
That seems to me like a minimal set of services a cloud provider must offer so that clients would work on "service assembly" instead of "building from scratch" or "integrating integration-hostile products".
OVH and Open Telekom Cloud have the vast majority of the features that you request, and both providers are EU-based and EU-owned.
IMHO:
- Configuration management: not needed, I vastly prefer Ansible. If you mean IaC: Terraform is the best.
- Domain and cert registration: use a third party for domains.
- Email/SMS: use a third-party provider.
I think the biggest open question right now is asking if there is such a thing as "Europe" and if it's capable of responding in a unified way in a relevant timeframe.
I.e., are you a Europe-wide entrepreneur working to move the whole unstoppable tanker away from American clouds, or do you just have contracts with EU entities in Brussels and perhaps a few EU members like Germany or Denmark?
Do those contracts really help you navigate the digital contracting systems of Italy, Spain, Greece, and Croatia? And is your timetable for growth going to line up with their elections that could result in contract negotiations stalling or even existing agreements being frozen?
The concerns expressed seem a bit silly, unless the various Euro systems didn't take the very basic approach of using open standards and avoiding lock-in. Oh, and they should be backing up their data somewhere besides "in the cloud".
If those very basic precautions had been taken, migrating to a Euro cloud, or to a private environment (e.g. OpenStack), would be trivial.
If not, a lot of people should be fired...but granted, there are a lot of stupid people out there...
All that said, I'd say the concerns around this are vastly overblown.
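To make the "open standards" point concrete, here's a minimal sketch, assuming an S3-compatible target (the endpoint URL and credentials are placeholders): with a protocol-level client like boto3, moving providers is a configuration change rather than a rewrite.

```python
# Minimal sketch: the same S3 client code pointed at any S3-compatible
# provider (AWS, OVH, Scaleway, an on-prem Ceph RGW) just by swapping
# the endpoint. The endpoint URL and credentials are placeholders.
import boto3

def make_storage_client(endpoint_url: str, key: str, secret: str):
    # boto3 speaks the plain S3 protocol; nothing below is AWS-specific
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id=key,
        aws_secret_access_key=secret,
    )

# Migrating providers means changing configuration, not code:
client = make_storage_client("https://s3.example-eu-cloud.net", "KEY", "SECRET")
client.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz")
```

The same logic applies to the Postgres wire protocol, the Kubernetes API, and so on: the less provider-specific surface you consume, the cheaper the exit.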
“the very basic step” is a lot less basic than you imply.
There are a million little proprietary APIs and the temptation to glue one to another, especially in ecosystems like AWS, where Lambdas get used for basic functionality that should have just been provided by the cloud provider itself.
Why do you say that the cost of throwing out American tech giants has gone up by 100-1000x compared to 15 years ago?
I mean before everything became cloud/SaaS, American software companies were still essential to most European business and governmental operations. It was just on more traditional server/desktop systems?
I hope so too, but move where? Does Scaleway or UpCloud or any other EU cloud provider have comparable offerings? Sure, if everything you have is running on containers or VMs, the stuff is easy to port to Hetzner et al., but what to do with the cloud specific apps (Azure functions etc.)? Rebuilding those for other platforms is probably a no-go unless the Union pours billions into supporting this.
Though I've cursed it for years, I'm increasingly glad our org's cloud migration has been so slow that we've only now rolled out the first apps. Pretty much everything we've built can be run anywhere we want, so if it's time to bail out and go back to on-prem, we've not wasted anything but the time spent setting up the base.
Coming from IT land, the answer is simple: you don't use them in the first place, and you grit your teeth and bear the replacement cost if and when the time comes. This is a negative in my research notes, slide decks, and papers when it comes to evaluating various cloud platforms for our workloads, and yet it's also the number one reason we're forced into a specific provider (some leader loves their proprietary tooling, and forces us to use it).
Look, I'm not saying these proprietary tools are bad, per se, just that they have a steeper cost than initially presented to the consumer in terms of architecture complexity and inevitable migration. The very first question you should be asking before consuming niche or proprietary products from vendors is, "Can I do this in a standard way that's more portable?" For stuff like Azure Functions, the answer is emphatically yes - but it comes at the cost of managing additional infrastructure, which is often the main reason companies want to use those tools in the first place (a misguided notion about throwing out infrastructure to save money).
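As a sketch of what that "standard way" can look like for function-style workloads (the echo logic is a stand-in for real business logic): a plain HTTP service, which runs on a VM, in a container, or behind any FaaS platform with an HTTP trigger.

```python
# Minimal sketch: a portable replacement for an HTTP-triggered cloud
# function, using only the standard library. The handler logic here is
# illustrative; swap in your own.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = {"echo": payload}  # business logic goes here
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```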
As for the solved problem of compute (VMs and Containers), well, literally any cloud provider should have that ready to go. The question is whether or not your org is willing to retain the talent needed to build and support your clouds internally, or if they'd rather pay higher outsourcing costs with vendor lock-in instead.
One thing that isn't so simple, even if you stuck to VMs or docker containers, is the networking.
The networking stacks in Azure and AWS are so different that they require a different mindset to work with, especially securely. If your networking needs are simple, you are very lucky.
You can have a very complex networking infrastructure with very simple proxies and network segmentation. What specific feature do you have in mind? Load balancing and resource syncing?
Often there are proprietary solutions to proprietary problems you would otherwise not have in the first place.
I used AWS for a long time but I am back to hosting myself. What arcane network requirement would that entail? I don't think there are benefits even for government-scale problems.
Anything involving private links to other organisations, Cloudflare, or API management across multiple endpoints scattered over on-prem and hosted environments. I would hope you could avoid most of the pitfalls by avoiding the proprietary solutions, but sometimes there is no feature parity between hosted services and you might be stuck.
The private links in Azure are particularly specific.
I mean, networking in general is difficult and complex. While most of my work is in the "systems" realm of IT, my formal education was primarily on the networking side of things, with systems as an also-ran. The complexity of public clouds like AWS and Azure isn't so much new complexity in networking as a deliberate change in vocabulary and implementation of existing concepts, which justifies the higher salaries of those certified on a given cloud. After all, if it were the same process to implement, say, HAProxy on AWS as it was on-prem, then the illusion of "new" would be shattered and customers might realize they're just paying more money for the same infrastructure, but with shiny new terms and a more consistent API/CLI experience.
After you translate the vocabulary, the process is pretty similar until you get to security items, like ACLs or packet-inspection firewalls. You're still setting up VLANs in the form of subnets, routers in the form of transit gateways, sites in the form of VPCs, inter-site connectivity through peering connections, you get the idea.
If there's one thing I've learned in my IT career, it's that most "new" ideas are just rebrands of existing concepts, and that the real expertise comes from being able to translate marketing-speak into concrete, interchangeable fundamentals. Public Cloud is, largely but not entirely, no different in this regard.
For hosting their government's own specific computing needs, and assuming a respectable GDP, they can build their own datacenters (pretty trivial) and hire contractors to build cloud computing environments (more challenging).
Open source cloud isn't too hard. There's OSS for about 80% of software needed for a cloud computing service provider, and you fill in the rest with proprietary and custom stuff. There's already several providers (one in the US, several in the EU/other countries) that offer "public cloud" using OpenStack. They literally give you, the customer, your own OpenStack cluster, and bill you for what you use. It's insanely easy and powerful. Yet everybody still uses the more popular providers (DO, Hetzner, Scaleway, etc), despite the fact that they all have proprietary interfaces, without anything close to feature parity with OpenStack. I guess people really like vendor lock-in and lack of features.
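For a sense of what the open API buys you, a minimal sketch using openstacksdk (the cloud, image, flavor, and network names are placeholders): the same script targets any OpenStack provider, public or private, via a clouds.yaml entry.

```python
# Sketch: launching a VM against any OpenStack cloud. openstacksdk reads
# a clouds.yaml entry, so switching providers means editing config, not
# code. All names below are placeholders.
import openstack

conn = openstack.connect(cloud="my-eu-provider")  # entry in clouds.yaml

image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # ACTIVE once booted
```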
The hardware is more challenging to source; the chips all come from Taiwan or China, and the US and China make most of the good hardware.
For private business in their country, they might offer grants and tax incentives to EU companies to build out more local cloud hosting services. But since it's the EU I'm sure it's massively more complicated than that.
Well as someone who's actually used them as a customer, OpenStack hosting providers do give more functionality than DO, Hetzner, etc, plus they have an open API. None of them compete with the "big 4" public clouds (everyone forgets Oracle is still around...) but if all you want is IaaS then you don't need them.
I know OpenStack is a tire fire to maintain, I've worked with it for large-scale on-prem data solutions. But if a company wants to kill themselves to maintain it for me, I'm happy to pay for the privilege.
Being a customer of an OpenStack provider isn't exactly a picnic. I could show you a long stack of support tickets from all of the things constantly going wrong.
Given a long list of support tickets vs. effectively relying on the responsible stewardship of Musk and the King of America, I suspect there will be many a developer who finds the long list of ticketed issues the easier problem to tackle.
There are sadly a lot of "sky is falling" type people out there yes. This is why we have to determine a threat model before we implement a security response...
But that's also the point of my other comment in the thread -- a French company builds basically all of the physical infrastructure that datacenters run on. This attitude can be applied both ways.
> OpenStack is a cluster of poorly-interoperating, poorly-documented products -- The customer experience is fucking terrible.
I assume you were unfortunately a victim of Mirantis/Fuel/Puppet/Mcollective... or one of the 'converged' solutions.
While I wouldn't call OpenStack "fun", especially in the Essex-to-Icehouse era when vendors seriously impacted code stability... it is just a well-documented collection of separate components that interact using REST APIs and RPC-like calls over a message bus.
Nvidia, CERN, JPL, and lots of smaller companies that need private clouds and have the expertise are still running OpenStack.
For me the main value is the ability to have portability between public and private.
If you just use the Ansible playbooks included in every OpenStack repo, it is pretty easy to roll your own deployments that are quite easy to maintain - if and only if your company is mature enough to follow that model and isn't subject to the sociotechnical issues that plague containers too.
While the workflow changes, the hard parts of OpenStack and k8s, including networking, monitoring, etc., are exactly the same.
As a random example of what always screws this up let me point at kubespray, which is not unique at all.
That is because, like many projects, they didn't respect the natural boundaries of the node components, and they are now paying the price for that debt.
From an infrastructure point of view, k8s and OpenStack are equal in complexity. It isn't instantiating a container with CRI foo, or running libvirt command bar, that is the hard part.
It is the distributed computing, virtual networking, resource allocation, federation, APIs, etc... that are hard.
Note: if you think that "OpenStack is dead" for all needs, especially in the telco space, you may want to dig into what containers actually are. They are just namespaces running on an OS, and it will still be horses for courses as to what is appropriate.
Especially if you are using the easy ways of instantiating hardware for k8s: almost all of them are highly insecure by default, and you are going to have to dig into the same style of systems, with similar components, or you will leak data at some point.
I wish there was something better than OpenStack, but if you approach it with a dev mindset and not a glass-house IT mindset, it is a very useful tool that may be the least-worst option for some needs.
No and No. It's not about the complexity of it or being any worse/better than K8s.
It's about the endless bugs and regressions and laundry list of stupid problems caused by inadequate processes by OpenStack developers.
For example, let's say you're running Cinder v3. Cinder 3.59. You want to get the volumes that you have attached to an instance, so you curl the API:
/cinder/v3/<instance id>/attachments. You get a 404.
You get a 404 because you didn't pass this header: "OpenStack-API-Version: volume 3.27". Because Cinder defaults to Cinder 3.01 behavior even when you're running 3.59. Attachments were only added in 3.27. So even though you're trying to curl a route that wouldn't even exist in 3.01 and you're running a version clearly later than 3.27, the API responds as if it's Cinder 3.01 unless you specifically tell it to do otherwise.
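Roughly what that looks like in practice (the endpoint, project ID, and token are placeholders): the route only behaves as documented once you pin the microversion explicitly.

```python
# Sketch of the microversion pitfall described above. Without the
# OpenStack-API-Version header, Cinder serves the minimum microversion's
# behavior, so the attachments route 404s even on a 3.59 deployment.
import requests

BASE = "https://cinder.example.net/v3/<project-id>"
HEADERS = {"X-Auth-Token": "<token>"}

# 404: treated as the base microversion, where the route doesn't exist
r = requests.get(f"{BASE}/attachments", headers=HEADERS)
print(r.status_code)  # 404

# 200: explicitly opt in to microversion >= 3.27, where attachments exist
r = requests.get(
    f"{BASE}/attachments",
    headers={**HEADERS, "OpenStack-API-Version": "volume 3.27"},
)
print(r.status_code)  # 200
```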
And this is just one of the laundry list of stupid situations that I can remember off the top of my head.
When the thing isn't otherwise failing all the time.
That isn't a bug, that is correct behavior under their contract model (which I will admit isn't my favorite).
It is common in message-based systems for the target system to own the contract, and they have both the / and /v3/ endpoints from which you can grab the version information.
While I personally prefer the URL method: when versioning through custom headers, if you bumped the API behavior for requests lacking that header, you would break far more than returning the minimum supported version's behavior does. Enforcing backward compatibility for APIs is generally considered a best practice.
Note:
> If the OpenStack-API-Version header is not provided, act as if the minimum supported version was specified.
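A minimal sketch of that contract model (names and version bounds are illustrative, not the actual Cinder implementation): requests without the version header get the minimum supported behavior, so clients written against an old deployment never break when the service is upgraded underneath them.

```python
# Sketch: header-based microversion negotiation that defaults to the
# minimum supported version, per the rule quoted above.
MIN_VERSION = (3, 0)
MAX_VERSION = (3, 59)

def negotiate(headers: dict) -> tuple:
    raw = headers.get("OpenStack-API-Version")  # e.g. "volume 3.27"
    if raw is None:
        return MIN_VERSION  # no header: act as the minimum version
    requested = tuple(int(x) for x in raw.split()[1].split("."))
    if not (MIN_VERSION <= requested <= MAX_VERSION):
        raise ValueError("406: unsupported microversion")
    return requested

def list_attachments(headers: dict):
    if negotiate(headers) < (3, 27):
        raise LookupError("404: route does not exist before 3.27")
    return ["attachment-1"]  # the real lookup would go here
```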
Rackspace were the archetypal provider, and they sucked. The irony is I've only ever really seen internal OpenStack instances; providers, for whatever reason, seem to prefer to roll their own.
- The one I have experience using is Genesis Hosting out of Chicago. Their website looks like it's from 1997, because it is from 1997... But they provide a nice OpenStack solution that works well.
- I haven't used Vexxhost, they seem to provide something OpenStack-related, but their website is all marketing bullshit, so I have no idea what you actually get.
- RamNode seems to provide access to the OpenStack API.
In Europe:
- OVH Public Cloud is still short on details, but based on some verbiage buried in the marketing BS, it looks like you do get an OpenStack interface.
- Open Telekom Cloud by Deutsche Telekom seems to give you an OpenStack interface.
- Acville Cloud is based in Romania.
- Cyso Cloud (formerly Fuga Cloud) is based in the Netherlands.
- IntoVPS seems to provide its services on OpenStack, but no idea if the API is open. They build a custom OpenStack console called Fleio.
Scaleway at least is genuinely not a bad alternative for this kind of thing already today - they do have plenty of managed services like serverless functions, object storage, queues, etc, in addition to the simple VMs and container hosting.
Scaleway (and I say this with very deep sadness) is pretty bad in terms of reliability right now; there have been at least a couple of big outages every year over the last few years that I've been using them.
Admittedly they have a new CTO who according to our support agent is very focused on improving that, so here's hoping, because otherwise their tech offering is very convenient.
OpenFaaS is one option for your functions. Knative is pretty good as well for the bulk of your applications without exposing developers to kubernetes directly. Between that and Crossplane I think you have all the pieces needed to move away to a self hosted solution where you are managing either metal or VMs through a hosting provider.
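To give a sense of how small the per-function surface stays (the handler logic is illustrative): the OpenFaaS python3 template expects nothing more than a handle() function, which keeps the business logic trivially portable to Knative, a plain container, or a VM.

```python
# handler.py for the OpenFaaS python3 template. The watchdog invokes
# handle() with the raw request body; the echo logic is a placeholder.
import json

def handle(req: str) -> str:
    payload = json.loads(req or "{}")
    return json.dumps({"echo": payload})
```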
I’m not sure what this looks like outside of the US, but colocation providers offer racks of machines, or to host your machines, while providing access to cheap bandwidth and peering capabilities. It’s absolutely possible to move away from the major cloud providers. However, it will require a degree of investment within your organization to support these deployments no matter which you choose, which could be a new investment compared to using AWS, GCP or Azure.
You need teams of people; the good news is that they're available here. It's not hard as such, it just requires time and money (quite a lot of both).
It's not just Kubernetes and OpenFaaS. What about that thing that's a virtual appliance and requires a VM? Now you need KVM. Network and firewalls? Storage, as in fully replicated, can-never-lose-a-byte-or-have-it-unavailable storage? Object as well as block. Databases: point-in-time restores/backups/automated maintenance for Postgres, and then you've probably got an MSSQL server for that one app, and MySQL for that other app.
It becomes just a fairly massive task back in the real world.
OpenStack out of the box does KVM, networking, firewalls, NFV, and orchestration (via native Heat or Terraform), and with the Magnum component can launch k8s, Mesos, or Swarm largely automagically. Storage is typically via Ceph (which does block, object [supporting the Swift/S3 protocols], and filesystem), supports snapshots, and is fully replicated. Sadly the managed database service didn't make it far, but with Heat or Terraform it's pretty easy to spin up a VM holding your DBs. The native FaaS service, Qinling, got deprecated a while back. Secrets management is via the Barbican component, and the web interface via the Horizon component.
I'm not too familiar with the whole range of AWS offerings, but I really think aside for DBaaS and FaaS OpenStack can cover pretty much everything someone would need, especially combined with Ceph for storage.
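As a hedged sketch of the "spin up a VM holding your DBs" route via Heat (the cloud name, template file, and parameter are placeholders; pg_server.yaml would be a Heat template defining the VM and its volume):

```python
# Sketch: driving Heat from Python with openstacksdk's cloud layer.
# Everything named here is a placeholder, not a real deployment.
import openstack

conn = openstack.connect(cloud="my-eu-provider")
stack = conn.create_stack(
    "pg-db-stack",
    template_file="pg_server.yaml",  # Heat template: VM + data volume
    wait=True,                       # block until CREATE_COMPLETE
    db_flavor="m1.large",            # forwarded as a Heat parameter
)
print(stack.id)
```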
Yes, I'm aware. It doesn't reduce or negate the need for a team responsible for running storage and understanding how it works, then a team owning databases (probably with some development resources too) and so on.
It actually takes work to set up and run; we are not just installing some packages and then pretending we can scrap AWS.
AWS EBS volumes (except io2) have an annual failure rate of 0.2%, so if you have 1000 running, statistically you will lose 2 a year.
For io2 it's 0.001%, but still not 0.
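Quick sanity check on that arithmetic, assuming failures are independent: the expected count matches, and the chance of seeing at least one failure in a year is high enough that you have to plan for it either way.

```python
# Expected failures vs. probability of at least one failure per year,
# using the 0.2% annual failure rate quoted above for 1000 volumes.
afr = 0.002
n = 1000

expected = n * afr                   # 2.0 failed volumes per year
p_at_least_one = 1 - (1 - afr) ** n  # ~0.865

print(f"expected failures/year: {expected:.1f}")
print(f"P(at least one failure): {p_at_least_one:.1%}")  # ~86.5%
```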
> People also fool themselves that special keys and “servers in the EU” will get you “a safe space” within the American cloud. It won’t.
The problem isn't sneaky backdoors, the problem is that the King of America can order Google to shut that thing down and Google will have no choice but to comply.
Well, the thing I was referring to isn't GCP regions with data residency requirements. It sounded like a clone of the entire stack installed on hardware owned by the customer government.
I guess the King of America could still shut down the ability to provide support updates.
Only if the systems operate within their jurisdiction. Systems residing outside of their jurisdiction are not susceptible to the same policies and requests. Most cloud providers operating internationally provide secure government solutions that are designed around the regional policies.
That seems naive or not responsive to the comment. If the US government tells Google to shut down all international sites/servers, or it will cease to exist in the US, I don’t think “but the servers aren’t in the US” will really matter.
I also don't think anyone can rule out extra-judicial demands from the current executive branch.
Then the government of said country will just force the local company to separate from its us parent company. Don’t forget these regions/servers are usually owned by local subsidiaries.
Not really, the whole point of this type of cloud offering is that it doesn't phone home to Google / the US. Sure, it will be left to the partner to support all of it, but it can't be shut down from one day to the other.
If Google isn't able to shut it down and isn't providing the infrastructure necessary to keep it running, why pay them at all? Whatever work you say could happen in the future to support it could just happen now instead. If that's too expensive for the customer or the local partner to consider, I have to question what this setup is even hedging against: the whole point should be for the customer to put in whatever work is needed up front, so they can avoid being forced to do it later on a timetable they don't control.
It sounded like Google was providing all the software necessary to use a cloud system effectively, including IAM. And you could get all of the other GCP services like BigQuery or PubSub etc. I don't remember what it was called though.
So that seems to be the value add. Of course the software will eventually need updates...
In France we have https://www.s3ns.io/ which is a Google / Thales partnership, where Thales owns 90% of the company, handles the datacenters and Google provides the software and the updates without touching the servers themselves.
They are about to go live in a few months.
This is a good option IMHO, and we're about to migrate some of our workload (currently 100% on AWS) onto it.
We use EKS, RDS on standard PG, SSM and S3. S3 is a standard now, SSM can be replaced by something else fairly easily, EKS and RDS are just managed open-source software. So it's mostly an added burden on the devops side.
What happens if Google is no longer allowed to provide software updates due to trade restrictions, sanctions or executive orders? Does Thales have a copy of the source code and the capability of keeping it up to date themselves?
> but what to do with the cloud specific apps (Azure functions etc.)?
Don't build them. Vendor lock-in is a real problem: even if there are no political issues, it's a business risk because they can charge you whatever they want.
Also, the cost of migrating off these things is usually overestimated. It's an HTTP request, for crying out loud.
Fully agree with you there - building cloud-only stuff has always seemed foolish to me. Even Azure Functions can be done as, e.g., simple C# programs which would be trivial-ish to port over to VMs.
But my concern is for those that have built something as Azure/AWS-only, who now have to lie in the bed they've made. Sure, there are lessons to be learned here, but if the volume of these is too high, then there will be pushback on any meaningful change since it will be too expensive.
People who build vendor-locked applications are making a short-sighted decision. Call me old-school, but vendor lock-in benefits developers more than businesses; I agree they get to learn shiny new things. A well-built application should run seamlessly on any Linux-based system without unnecessary dependencies on proprietary ecosystems.
The real moat is Azure AD and Exchange. The government IT teams I know can operate a fleet of VMs just fine, but they need email and identity management handled for them.
If that's the price tag, then I fear that "let it slide" will win the vote when governments decide what to do. Put another way, if the effort of making a change could be lowered, it's more likely that a change will be attempted
The concern isn't new. I've been involved in several UK government projects that considered moving to AWS.
Each time the discussion on moving to a US based provider was a big consideration, particularly the use of managed services that involve data was a hot topic. Part of the risk assessment was considering what the consequences might be if the US government became a bad actor. It was seen as high impact but extremely low probability. Starting to look like we got that part of the assessment wrong.
I think it will take time for the impetus to move to US cloud providers to slow and reverse, but I'm not sure I'd be surprised if it does happen now.
In the course of looking for a programming job, I have scanned hundreds of job ads, including governmental ones. Everybody-and-his-dog requires AWS/Azure/GCP knowledge as if it matters thaaaat much. These cloud-y things have become a mandatory buzzword, and I am not talking about sysadmin/devops roles.
In my last gig the system was kept cloud-agnostic, so moving between providers or on-prem would be possible at any time. As CTO I kept that good thing going, although I had to resist some pushes. But it seems such cases are few - most places now dream of hyper mega-giga-scale and Lambdas and BigQueries... while handling a few thousand requests.
Let's see if there's any change in the wind... vendor lock-in is a real thing, with much deeper (architectural or life-cycle) consequences than usually perceived.
The dependence was established earlier, by the move to external infrastructure. The premise that this infrastructure is not under your control is exactly what he now decries.
Someone knowledgeable should have seen this before, this is a core issue when setting up a strategy for digital systems. And this isn't an issue between "purists" and the rest, that is a false dichotomy. The decision was simply to outsource infrastructure to systems you have significantly less control over.
It might work for 15+ years, or it might not. I doubt anything will be done now; the investments are probably too high. But it is an issue of lacking foresight.
Between countries, one of the main tasks for intelligence agencies is industrial espionage. The Dutch government, like many others, decided that exposing themselves to this is no issue.
I disagree that it has become a problem only now; that view comes from a narrow take on politics and is a bit naive in my opinion.
I understand the sentiment, but as a Dutch person: The only thing I am more worried about than the government moving all our data to US clouds, is the government trying to do anything IT related themselves. They do not have the skill and have proven that over and over again in a long list of bungled projects.
I'd rather have my data end up with Google/Amazon/CIA than it ending up everywhere on the internet due to poorly configured DIY servers (and at twice the cost probably).
If there really are no organizations competent to run government applications in the Netherlands, then that is an even bigger reason to start doing more of that in the country. I mean, computers are not going away! The competence and infrastructure do not magically appear. They require consistent investment over time. Not being able to maintain computer-based infrastructure is like not being able to maintain the water supply of a country. Completely unacceptable.
Heck, these days maintaining a water supply at city scale is difficult without computers and networking...
Besides: this is not a problem of competence or incompetence of either US companies or the Dutch government. It is about the very real threat of the US government no longer allowing US companies to provide us with services.
Or stealing and abusing all the data. It's like Russian gas except also the gas pipelines let Putin spy on every government agency and every household.
I've been interviewing candidates using questions targeted at getting them to talk about experience instead of skill - like asking about their involvement during production incidents, then drilling down to see if there's anything interesting to focus on. It can probably also be gamed by AI, but people are usually surprised by my approach, and they often provide good feedback after the call even if I have to decline their application, so I guess it works somewhat well for both sides since it doesn't force anyone to just recite the same phrases.
The thing that gets me is the disingenuous parallel construction. Just tell the truth.
Europe wants to improve its economy by growing its consumer tech industry. Some of these products, like Google Analytics (the example he is upset about), are really hard to replicate (writing to a database on every visit to your website is an expensive thing to do, significantly more expensive than hosting the website!). So they've been slowly increasing the tariffs (disguised as privacy regulations) on US tech firms. It's gone poorly: even EU governments (let alone EU businesses) still use products like Google Analytics, and US tech firms have been able to engineer their way around the regulations, again doing a better job than EU governments, who have been busted countless times for breaking GDPR with their own systems.
No one cares about any "data sharing agreement" or a "Privacy and Civil Liberties Oversight Board" no one has ever heard of and that has never done anything. It's a tariff with various ways to pick winners and losers.
The only thing that's changed is there is a higher chance these privacy regulations will be recognized as tariffs by the US.
What you describe is true, and it can also be counterproductive, because to be competitive you need the best and cheapest services, and raising prices doesn't often result in a healthier tech ecosystem. Typical Eurocrat thinking.
But EU citizens genuinely care about privacy, in part because of decades of totalitarian and near-totalitarian regimes.
There is another risk underpinning this. I'm not familiar with it, so it's mostly hearsay on my part, but foreign firms in the US routinely get completely screwed in US courts, and fear the seizure of their data in discovery processes or other ways. The data sharing agreement was made to provide some degree of clarity or assurance in this regard.
I've met managers who are convinced that if they're not careful, their IP and business data will get stolen by their US competitors through various legal or less-legal means. EU executives have been detained for days at the border on suspicions of terrorism to coerce them into selling US assets. I can't judge if this is paranoia, and maybe those companies could make use of better protection against Chinese hackers but there's certainly some truth to that.
Are there any news stories about these specific claims (executives held by the US until they divest assets, EU companies losing their data in discovery and being copied)?
The EU's biggest exports to the US are cars & pharma. I guess the VW diesel situation could be seen through that lens, or the GLP1 compounding rules.