Hacker News | adamcharnock's comments

I left a comment fairly related to this a while back:

https://news.ycombinator.com/item?id=43465403

But there _is_ also an attitude difference. In terms of willingness to take risks and innovate, the USA does do very well for itself, and I think the UK does ok too. But that cannot be said for the EU countries I’ve lived in. Stable reliable long term safe jobs seem to be more the name of the game, and starting a company is seen as a big and risky commitment. Whereas in the US and UK you can start a company in your lunch break.

It is a generalisation, and I’m part of a wonderful entrepreneurial community here in Munich. But even there everyone says how risk averse European businesses are. I really really wish it wasn’t true.


This may be blindingly obvious, but I’m going to say it anyway: If Amazon was willing to actually give up control of AWS EU, then this kind of announcement would be entirely surplus to requirements. But they will (obviously and rationally) not be giving up control of AWS EU because that would essentially have to be an act of charity, so they need to dress it up a bit.

(Before hitting ‘add comment’ I’m taking a moment to consider if I’m being overly cynical. But no, I really don’t think I am. But my company does compete with AWS, so that is a bias.)


Well it wouldn't have to be charity. They could just divest or sell it.

It would be a dumb move though because they need a worldwide CDN for customers from other countries outside the EU too.


I think this is a really interesting point. I have a few thoughts as I read it (as a bit of a grey-beard).

Things are moving fast at the moment, but I think it feels even faster because of how slowly things have been moving for the last decade. I was getting into web development in the mid-to-late 90s, and I think the landscape felt similar then. Plugged-in people kinda knew the web was going to be huge, but on some level we also knew that things were going to change fast. Whatever we learnt would soon fall by the wayside and become compost for the next new thing we had to learn.

It certainly feels to me like things have really been much more stable for the last 10-15 years (YMMV).

So I guess what I'm saying is: yeah, this is actually kinda getting back to normal. At least that is how I see it, if I'm in an excitable optimistic mood.

I'd say pick something and do it. It may become brain-compost, but I think a good deep layer of compost is what will turn you into a senior developer. Hopefully that metaphor isn't too stretched!


I’ve also felt what GP expresses earlier this year. I am a grey-beard now. When I was starting my career in the early 2000s, a grey-beard told me, “The tech is entirely replaced every 10 years.” This was accompanied by an admonition to evolve or die in each cycle.

This has largely been true outside of some outlier fundamentals, like TCP.

I have tried Claude Code extensively and I feel it’s largely the same. To GP’s point, my suggestion would be to dive into the project using Claude Code and also work to learn how to structure the code better. Do both. Don’t do nothing.


Thx to both of you, I think these replies helped me a bit.

I think this is more akin to shutting down an AZ, rather than a region, which certainly does happen. Except AZ lettering is randomised per AWS account, so your 'us-east-1a' isn't my 'us-east-1a'. Which means AWS can migrate people away over time. I believe older accounts which still use the old AZ are given notice that it is closing.
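
The randomisation is visible via `aws ec2 describe-availability-zones`, which returns a stable `ZoneId` (e.g. `use1-az1`) alongside the account-local `ZoneName`. A toy sketch of why names can't be compared across accounts (the mappings below are made up for illustration):

```python
# Hypothetical name -> zone-ID mappings for two different AWS accounts.
# In reality these come from `aws ec2 describe-availability-zones`.
ACCOUNT_A = {"us-east-1a": "use1-az2", "us-east-1b": "use1-az1"}
ACCOUNT_B = {"us-east-1a": "use1-az1", "us-east-1b": "use1-az2"}

def same_physical_az(name_in_a: str, name_in_b: str) -> bool:
    """Compare the stable zone IDs, never the per-account zone names."""
    return ACCOUNT_A[name_in_a] == ACCOUNT_B[name_in_b]

# Same name, different physical AZ:
print(same_physical_az("us-east-1a", "us-east-1a"))
# Different names, same physical AZ:
print(same_physical_az("us-east-1a", "us-east-1b"))
```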

Plus, there was the whole closure of AWS EC2 Classic, replaced with AWS VPC.

This is another reason why deploying over multiple AZs has its benefits. Not just for technical failure, but also because it means you can still move should one AZ close down.

I suppose an interesting question is: would I prefer to move a single-AZ deployment such as this in the cloud, or in the real world? And honestly I can see the pros and cons of each.

In the cloud it involves a bunch of engineering time (possibly minimal, likely a lot more given reality). In the real world it involves a temporary fibre connection to the next DC over, and a gradual or rapid move of hardware with the help of some specialist contractors (for example). But at least the state and implementation quirks move with the compute. I can see it either way, but I can feel myself wanting to believe in the latter. There is something about trucking servers across town that appeals to me.


It's always either DNS or MTU.

(Or, as I recently encountered, it can also be a McAfee corporate firewall trying to be helpful by showing a download progress bar in place of an HTTP SSE stream. I was sure that was being caused by MTU, but alas no.)


> 2-5x pipeline performance at 1/2 cost just by using self-hosted runners on bare metal rented machines like Hetzner

This is absolutely the case. It's a combination of having dedicated CPU cores, dedicated memory bandwidth, and (perhaps most of all) dedicated local NVMe drives. We see a 2x speed up running _within VMs_ on bare metal.
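
As a rough illustration of why the dedicated NVMe matters, here's a toy sequential-write benchmark (my own sketch, not a proper methodology; use something like fio for real measurements). The `fsync` is the important bit, so the figure reflects the disk rather than the page cache:

```python
# Toy sequential-write throughput check. Shared-tenancy cloud disks and
# dedicated local NVMe tend to give very different numbers here.
import os
import tempfile
import time

def write_throughput_mb_s(path: str, total_mb: int = 64, chunk_mb: int = 1) -> float:
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to the device before stopping the clock
    return total_mb / (time.perf_counter() - start)

with tempfile.NamedTemporaryFile() as tmp:
    print(f"{write_throughput_mb_s(tmp.name, total_mb=16):.0f} MB/s")
```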

> And knowing how to deal with bare metal/utilize this kind of compute sounds generally useful skill - but I rarely encounter people enthusiastic about making this kind of move

We started our current company for this reason [0]. A lot of people know this makes sense on some level, but not many people want to do it. So we say we'll do it for you, give you the engineering time needed to support it, and you'll still save money.

> I just don't see why going bare metal is always such a taboo topic even for simple stuff like builds.

It is decreasingly so from what I see. Enough people have been variously burned by public cloud providers to know they are not a panacea. But they just need a little assistance in making the jump.

[0] - https://lithus.eu


An interesting idea! I suspect a major speed up would come from the fact that the column is staying the same size. So (I assume) far fewer bytes would need to be moved around.


I did a little digging into this just yesterday. The impression I got was that Claude Code was pretty great, but also used a _lot_ more tokens than similar work using aider. Conversations I saw suggested 5-10x more.

So yes, with Claude Code you can grab the Max plan and not worry too much about usage. With Aider you'll be paying per API call, but it will cost quite a bit less than similar work done with Claude Code in API-mode.

I concluded that – for me – Claude Code _may_ give me better results, but Aider will likely be cheaper than Claude Code in either API-mode or subscription-mode. Also I like that I really can fill up the aider context window if I want to, and I'm in control of that.


> I concluded that – for me – Claude Code _may_ give me better results, but Aider will likely be cheaper than Claude Code in either API-mode or subscription-mode.

I'd be pretty surprised if that was the case - something like ~8 hours of Aider use against Claude can spend $20, which is how much Claude Pro costs.
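
The break-even falls out directly from those two figures (illustrative numbers from this thread, not current pricing):

```python
# Back-of-envelope break-even between pay-per-token API use and a flat plan.
SUBSCRIPTION_PER_MONTH = 20.0    # Claude Pro price quoted above (USD)
API_COST_PER_HOUR = 20.0 / 8     # "~8 hours of Aider use can spend $20"

breakeven_hours = SUBSCRIPTION_PER_MONTH / API_COST_PER_HOUR
print(f"Flat plan wins past ~{breakeven_hours:.0f} hours of use per month")
```

So with these assumptions, anyone using the tool more than about a working day per month comes out ahead on the subscription.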


Indeed, I think I came to the incorrect conclusion! Just signed up for a subscription after getting through quite a lot of API funds!


ISO27001 doesn't specifically require disk encryption. Rather, it requires that data on disks be protected according to how it is classified. Disk encryption is one way to achieve this, especially in a shared-hardware environment.

In this case, the disks being in an ISO27001 data centre with processes in place to ensure erasure during de-provisioning (which Hetzner is, and has) may well also meet this criterion.


This is something we're [1] seeing a lot of interest in. I wouldn't say it is the driving factor, but it is a driving factor that's giving quite a lot of companies the incentive to finally push the 'Leave AWS (et al)' button.

Even so, two of the major hurdles we see companies facing are:

1. Skills/Training/Hiring – Converting a staff of engineers familiar with AWS/Azure/etc to a new provider isn't necessarily straightforward.

2. Migration & disruption – Untangling one's integration with AWS/Azure/etc, finding and testing replacement services, planning the migration, executing on the migration. All this can cause disruption and delays in actually working on what's important.

What we do is provide multi-AZ bare-metal Kubernetes deployments onto EU providers (we default to Hetzner, but are flexible, and can do on-prem). As part of this we: a) include monthly DevOps engineering time dedicated to each client, and b) handle the migration planning and execution.
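
For the multi-AZ part, the standard building block on any conformant Kubernetes cluster (bare metal included, given nodes labelled with their zone) is a topology spread constraint. A minimal sketch, where the workload name and image are placeholders rather than anything from our actual setup:

```yaml
# Illustrative: spread a Deployment's replicas evenly across zones, so losing
# (or migrating out of) one AZ leaves replicas running elsewhere.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical workload
spec:
  replicas: 3
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels: {app: web}
      containers:
        - name: web
          image: nginx     # placeholder image
```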

We're really trying to help companies (particularly SMEs & startups) make the jump. We try to mitigate the skills issue by providing actual engineers integrated with your team. We try to minimise the disruption by handling the migration in parallel to ongoing development cycles/sprints.

If anyone wants to know more you can reach me at adam@ domain. I hope this was interesting and not too much of a pitch.

[1]: https://lithus.eu

