Amazon S3 will no longer support path-style API requests (amazon.com)
652 points by cyanbane on May 3, 2019 | 268 comments



One important implication is that collateral freedom techniques [1] using Amazon S3 will no longer work.

To put it simply, right now I could put some stuff not liked by the Russian or Chinese government (maybe an entire website) and give a direct S3 link to https://s3.amazonaws.com/mywebsite/index.html. Because it's https — there is no way a man in the middle knows what people read on s3.amazonaws.com. With this change — dictators see my domain name and block requests to it right away.

I don't know if they did it on purpose or just forgot about those who are less fortunate in regards to access to information, but this is a sad development.

This censorship circumvention technique is actively used in the wild, and losing Amazon is no good.

[1] https://en.wikipedia.org/wiki/Collateral_freedom


If there is anyone from Amazon caring about freedom of speech and censorship — please contact me at s@samat.me, I'd love to give you more perspective on this.


Hey Samat, I am pretty sure that AWS knows exactly what they're doing. They don't want to lose money by hosting objectionable content, and then lose customers to Aliyun or Russian cloud providers.


They did it not to make blocking in Russia and China easier, but to make their deployment cheaper and faster. Basically, with the v2 protocol your TCP packets go straight to the server where the data is stored, without going through one giant proxy. In other words, they do IP routing now instead of HTTP proxying.
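
A minimal sketch of what that difference looks like from the client side ("examplebucket" is a placeholder, not a real bucket): with the virtual-hosted form, the bucket is part of the hostname, so DNS itself can hand back region-appropriate addresses instead of everything funnelling through one shared endpoint.

    # Compare what DNS returns for the shared path-style endpoint vs. a
    # bucket-specific virtual-hosted hostname. "examplebucket" is a
    # placeholder; substitute a bucket that actually exists.
    import socket

    for host in ("s3.amazonaws.com", "examplebucket.s3.amazonaws.com"):
        try:
            _, _, addrs = socket.gethostbyname_ex(host)
            print(host, "->", addrs)
        except socket.gaierror as err:
            print(host, "-> resolution failed:", err)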


Does Aliyun support path-based file storage? That could be handy!


I tested that when writing the S3 implementation of a Go key-value wrapper [1] and back then "Alibaba Cloud Object Storage Service (OSS)" did not support path-style addressing.

If you're looking for a similarly robust and scalable alternative, Google Cloud Storage is accessible via the S3 API when you enable that in the bucket's configuration, and it supports path-style access (at least it did back when I tested the different S3-compatible services).

[1] https://github.com/philippgille/gokv


Not sure, but from what I heard their API is an exact copy of the S3 API, so you can switch from one to the other without any effort.


[flagged]


The parent implied nothing about the merits of the change. He/she drew attention to one of the downsides, in a non-accusatory tone. I personally hadn't considered that aspect; maybe folks at Amazon didn't either.

Whether or not it affects Amazon's decision, it's a constructive message, and you're mistaken to dismiss it.


With Amazon being the ones making the change, this situation is asymmetric. It’s on the affected to convince the “affectors,” if you will, that what they are doing is a bad idea. Whether they convince us or not is irrelevant.


> I can think of many such reasons off the top of my head.

What are they, please?


https://en.wikipedia.org/wiki/Domain_fronting#Disabling Interestingly, a different although related trick (domain fronting) was blocked last year "by both Google and Amazon... in part due to pressure from the Russian government over Telegram domain fronting activity using both of the cloud providers' services."


Follow the money :/


The Russian market is tiny, so that logic leads me to the country south-east of Russia. The one with all the new consumers.


This is an interesting perspective.

Just as a counter argument, one of the things we tried to do at a previous employer was data exfiltration protection. This meant using outbound proxies from our networks to reach pre-approved URLs, and we didn't want to MITM the TLS connections. This leaves a bit of a problem: we didn't want to whitelist all of S3, since that defeats the purpose, so we had to mandate using the bucket.s3 URI style, which is a bit of a pain for clients that use the direct s3 link style, but then we could whitelist buckets we control.

I don't want to say this use case is more important, but I can see the merits of standardizing on the subdomain style, and that this might be a common ask of amazon.
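
A minimal sketch of the allowlisting idea described above, with hypothetical bucket names: because an egress proxy only sees the CONNECT hostname (or SNI), not the URL path inside TLS, hostname-per-bucket is the only thing it can filter on without MITM.

    # Hypothetical bucket names; the proxy decides on the hostname alone.
    ALLOWED_HOSTS = {
        "our-artifacts.s3.amazonaws.com",
        "our-backups.s3.amazonaws.com",
    }

    def allow_connect(hostname: str) -> bool:
        # Virtual-hosted style exposes the bucket in the hostname, so we can
        # allow only our own buckets. With path-style, every bucket hides
        # behind s3.amazonaws.com and we'd have to allow all of S3.
        return hostname in ALLOWED_HOSTS

    print(allow_connect("our-artifacts.s3.amazonaws.com"))  # True
    print(allow_connect("s3.amazonaws.com"))                 # False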


Exfiltration protection is pointless. It's a great way to waste money and annoy your employees.


Within hours of setting up DLP, I had someone complaining I had broken their workflow. That workflow apparently involved emailing credit card numbers to an external personal mailbox "for security".

It was allowed to go on simply because no one knew about it. Could a skilled attacker spread a card number across three lines and get past the system? Absolutely. Is exfiltration protection pointless? Absolutely not. Once you scale past a certain number of users, you'll find someone somewhere who completely ignores training (which they did have) and decides they don't see the problem with something like this. And you won't know about it until you put a suitable system in.


This proves my point perfectly.

Instead of investing in improved tools and productivity for your workers so they don't have to do stupid shit you don't want them to do, you instead made it harder for everyone to do their job.

I'm not saying your business is going to crash and burn. I'm saying you will never be as successful as you could have been. You're literally wasting resources, leaving needs unfulfilled, and giving up ground to your competitors.


Wasting resources by preventing people from sending CC# in clear text to a personal email? That person is lucky they didn’t get fired for doing it


This is not a helpful comment. Could you provide some examples of why it's a waste of money?


Not the commenter, but I think the point is that there are simply too many ways to bypass protections. If you want someone to be able to view data, it is impossible to prevent them from exfiltrating it. In many ways it is similar to the analog hole problem with DRM.

You can make it harder to do on accident, or to prevent someone from doing it for convenience (e.g. someone copying data to an insecure location to have easier access to do their job), but you can't stop a malicious actor from getting the data.

They could tunnel over DNS, they could use a camera phone and record the data on the screen, etc. The possibilities are endless.

https://en.wikipedia.org/wiki/Analog_hole


> You can make it harder to do on accident

That's a major point of exfiltration prevention, both because accidents are a real problem and because reducing the opportunity for accidents makes it easier to establish that intentional exfiltration is intentional, which makes it easier to impose serious consequences for it (especially against privileged insiders with key contractual benefits that can only be taken away for cause).

Technical safeguards aren't standalone, they integrate with social safeguards.


>> You can make it harder to do on accident

> That's a major point of exfiltration prevention

But that's not how it is advertised. Usually the claim is that it catches hackers and malicious employees.


Advertising is 100% bullshit, both in this space and elsewhere. I'm not sure why even well-paid people in positions of corporate power don't seem to get it. It's similar to Gell-Mann amnesia.


> You can make it harder to do on accident, or to prevent someone from doing it for convenience

Why are you discounting these as valuable use cases?

In my experience, they're far more common than the determinedly-malicious actor. And they're far, FAR more common than the malicious actor who also (a) knows that the exfil monitor is there, and (b) has the technical prowess to circumvent it. (The analog hole is only _trivially_ usable for certain kinds of data.)

(I am continually frustrated by the number of people who claim that protection is worthless if it's potentially circumventable. In most situations, covering 90% of attacks is still worthwhile.)


I agree. Locking my front door is trivially circumventable. It is pretty easy to pick up a rock and break a window. Or use a heavy instrument to break down the door. Or heck, a car could just go crashing through the wall. But that lock is a pretty good deterrent from casual abuse. It requires crossing a psychological barrier into the explicitly illegal and malicious realm.


That's a sensible analogy on the surface, but the difference is that having to lock the door doesn't

- end up forcing you to lock it from the inside, and then crawl out the window

- have your friend who is visiting request a door-opening-token 24hr in advance through a JIRA ticket

- cause the power to go out once it's locked, also for security reasons

- force you to replace the keys with 'special' plastic ones from a new third-party vendor

- leave you stranded outside for a few hours because the door-opening system is having an outage

Those are the kind of trade-offs that will be made, not simply the act of locking the door.


It's also the reason why people mail sensitive data to their private mailboxes, and why, no matter how hard some companies try, they can't make their employees stop using Excel as their primary work tool.

'arcbyte over at https://news.ycombinator.com/item?id=19827012 does have a point - a lot of potential exfil risk is caused by companies doing their best to make it as difficult as possible for their employees to do their jobs.


> I am continually frustrated by the number of people who claim that protection is worthless if it's potentially circumventable. In most situations, covering 90% of attacks is still worthwhile

People are saying that because it's misguided and potentially harmful.

Doing so is security theater, where the solution is scoped down to something incomplete but easier, and then everyone walks away happy they solved 90% of the smaller problem they chose to attempt.

Particularly with things like data exfiltration, this is potentially harmful because then you've organizationally blinded yourself.

Nobody wants to poke holes in their own solution, and so they stop looking.

But, hey, we're catching the odd employee accidentally sharing confidential documents via OneDrive.

Fast forward a year, and an entire DB gets transferred out via an unknown vector, nobody finds out about it for a couple months, and it's all "Oh! How did this happen? We had monitoring in place."

Go big, or run the risk of putting blinders on yourself.


The way I'm reading this, your attitude seems to be "if you can't stop targeted nation state actions you might as well not bother with network security and just run unencrypted wifi everywhere."

Network security is a balancing act between prevention, detection, needed user and network capabilities and cost. If I have unlimited money or no limitations on hindering network usage I can make a 100% secure network - it's not even that expensive, just unplug it all.


>covering 90% of attacks

It doesn't cover most attacks. That's why it's so misleading. It's mainly just protection against careless people accidentally sending out data.


Which is what like 80% of data leaks?


> there are simply too many ways to bypass protections

This is not a good general principle, since it can easily be applied in contexts that (I would predict) many sane individuals would vehemently disagree with. For example:

- Personal privacy is pointless, there are simply too many ways for governments/corporations/fellow citizens to find things out about you

- Strong taxation enforcement by governments is pointless, there are simply too many avenues for legal tax avoidance and illegal tax evasion

- Nuclear arms control is pointless, the knowledge of how to make a bomb and enrich uranium is widely available (I mean, if NK could pull it off, how hard could it be?)

Maybe data exfiltration prevention isn't a good policy, but I think you need a more nuanced argument than 'there are ways around it'.


All these things work statistically, preventing a certain part of incidents.

They are not a guarantee that an incident cannot happen, though. They can only lower the rate at which incidents (privacy violations, tax evasion, nuclear proliferation) occur.

Same with exfiltration.


I read a great paper talking about this problem once, which made the point that the only actual measure of data security is the bandwidth of possible side-channels vs useful size of the data.


Do you happen to have any pointers about this? I would be very interested.


The parent was probably talking about this paper: https://csyhua.github.io/csyhua/hua-ipdps2018.pdf

Here are a few more (somewhat) related to this topic:

1.Joe Grand, “Advanced Hardware Hacking Techniques”, Defcon 12 http://www.grandideastudio.com/files/security/hardware/advan...

2.Josh Jaffe, “Differential Power Analysis”, Summer School on Cryptographic Hardware http://www.dice.ucl.ac.be/crypto/ecrypt-scard/jaffe.pdfhttp:...

3.S. Mangard, E. Oswald, T. Popp, “Power Analysis Attacks -Revealing the Secrets of Smartcards” http://www.dpabook.org/

4.Dan J. Bernstein, ''Cache-timing attacks on AES'', http://cr.yp.to/papers.html#cachetiming, 2005.

5.D. Brumley, D. Boneh, “Remote Timing Attacks are Practical” http://crypto.stanford.edu/~dabo/papers/ssl-timing.pdf

6.P. Kocher, "Design and Validation Strategies for Obtaining Assurance in Countermeasures to Power Analysis and Related Attacks", NIST Physical Security Testing Workshop -Honolulu, Sept. 26, 2005 http://csrc.nist.gov/cryptval/physec/papers/physecpaper09.pd...

7.E. Oswald, K. Schramm, “An Efficient Masking Scheme for AES Software Implementations” www.iaik.tugraz.at/research/sca-lab/publications/pdf/Oswald2006AnEfficientMasking.pdf

8.Cryptography Research, Inc. Patents and Licensing http://www.cryptography.com/technology/dpa/licensing.html


Thank you very much for the material!


It's hard to prevent data exfiltration by an attacker with physical access to your premises, but it's much easier to prevent exfiltration (especially of large quantities of data) by compromised devices.


When there is a will, there is a way.


Any output device is an output device. A VGA interface. An HDMI interface. A Scroll lock keyboard light. A hard drive interface. A speaker. All you need to do is send the signal down one wire and you could tap into that wire and copy all the data to another system.

Copying files from one folder into another could do the job.


> ... A scroll lock keyboard light ...

Oh, this was a fascinating read! Thanks for encouraging my perusal.

http://staff.ustc.edu.cn/~zhangwm/Paper/2018_10.pdf


I brought it up because I had heard that the original iPod's ROM was extracted using the "click" sound and a microphone. I can't find any reference to it now...

There is also research about using modem lights (even in the background of a room) to figure out what people are doing on their dialup internet connection. Those RX and TX LEDs are actually blinking at your data transmission rate.



Generalising further: signal, channel, receiver.


But that isn't really a counterargument... if you provide the ability to use both formats, your use case would still work (only provide access to the custom subdomain you control).


You're correct; it's just that in my experience certain libraries expected the URL format we couldn't accept and didn't provide an alternative. So having more flexibility in the API can work against you at times, is all.


since your employer is willing to invest in this, wouldn't a custom proxy solve your problem? just whitelist the s3 buckets you care about, and have people access s3 through the proxy.


This is based on using an HTTP proxy; it's just that the proxy whitelists the domains to connect towards. As for a proxy that can MITM TLS connections, I'm not a big fan of that approach, as you need to add a cert into the trust store of all the machines, which, if compromised, tends to be a bad day across the board.


Google Reader served a similar purpose. People used its social features for communication since (the thinking went) governments weren't going to block Google.


Teenagers use Google Docs to chat in environments where popular IM applications are blocked, such as schools and libraries.

It's not really the same threat model as people living under dictatorships, but it might just work.


Now that you mention it…

[√] absolute dependence on authorities for food, shelter, clothing, transportation, money.

[√] curfews often in effect for you and your social circle, especially if suspected of deviance.

[√] 24/7 electronic or in-person monitoring is possible and largely accepted.

[√] social circle often molded by authorities.

[√] not allowed to vote or generally exercise political agency (and when allowed it's dismissed).

[√] not allowed to leave your workplace or home without permission from authorities.

[√] possible to flee and seek asylum but it means leaving everything behind for an uncertain future.

[√] indoctrination is so effective you're extremely likely to continue the system when allowed to be an authority.

Good thing it's a benevolent regime.


You're neglecting a key point: primary and secondary education are the province of legal minors. Full legal rights of majors do not apply.

Not that there aren't problems with both P/S education and higher education, or public discourse and media generally, though your analysis misses a few key salient aspects and presents numerous red herrings.

J.S. Mill affords a longer view you may appreciate:

https://old.reddit.com/r/dredmorbius/comments/6x7u6a/on_the_...


IMO the even bigger problem is that this literally breaks HTTPS.

AWS S3 will only provide SSL validation if your bucket name happens to not contain "."

Which is a practice encouraged by AWS. [1]

So anyone that has www.example.com as the bucket name can no longer use HTTPS.

[1] https://docs.aws.amazon.com/AmazonS3/latest/dev/website-host...
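
The underlying issue is the standard TLS wildcard rule: "*" matches exactly one DNS label. A rough sketch of that matching logic (bucket names are hypothetical, and real certificate validation has more rules than this):

    def wildcard_matches(pattern: str, hostname: str) -> bool:
        # "*" stands in for exactly one label, per the usual TLS rule.
        p_labels, h_labels = pattern.split("."), hostname.split(".")
        if len(p_labels) != len(h_labels):
            return False
        return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

    print(wildcard_matches("*.s3.amazonaws.com", "mybucket.s3.amazonaws.com"))         # True
    print(wildcard_matches("*.s3.amazonaws.com", "www.example.com.s3.amazonaws.com"))  # False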


They do say "We recommend that you do not use periods (".") in bucket names when using virtual hosted–style buckets." in the "bucket restrictions": https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestri...

Still, it seems kinda lazy on their part; they could just generate a custom cert.


We have exactly this problem. I would appreciate if somebody explained how we should fix this. We need HTTPS and we have buckets with dots in their names


You have until September 2020 to fix it?

More seriously; it's going to be a nasty migration for anyone who needs to get rid of the dots in their domains. At minimum you need to create a new bucket, migrate all of your data and permissions, migrate all references to the bucket, ...

I mean, if Amazon wanted to create a jobs program for developers across the world, this wouldn't be a bad plan. :(


A simple answer might be "time to move to another static hosting solution".


s3 isn't just used for static hosting. And if you have terabytes of data in a bucket that happens to have a dot in it (one that may have been created a long time ago), your options appear to be not using https, or spending a _lot_ of time and money moving to a new bucket or a different storage system. It seems to me that if Amazon is going to do this, they should at least provide a way to rename buckets without having to copy all of the objects.


It obviously depends on how many files we are talking about but copying files to a new bucket in the same region will not cost that much. You could definitely make the case to AWS that you don't want to pay since they are removing a feature and you might get a concession.

$0.005 / 1,000 copy requests...

ref: https://blog.cloudability.com/aws-s3-understanding-cloud-sto...

Also you will likely want to use some sort of parallel operation. I used this eons ago: https://github.com/mishudark/s3-parallel-put
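
If you do end up migrating, a rough boto3 sketch of the copy (bucket names are placeholders, and it assumes credentials that can read the source and write the destination; copy() issues server-side copy calls, so the data shouldn't pass through your machine):

    # Placeholder bucket names; run with credentials that can read src
    # and write dst. copy() uses server-side CopyObject/UploadPartCopy.
    import boto3

    s3 = boto3.resource("s3")
    src, dst = "my.dotted.bucket", "my-dotted-bucket"

    for obj in s3.Bucket(src).objects.all():
        s3.Object(dst, obj.key).copy({"Bucket": src, "Key": obj.key})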


> Your options appear to be not using https, or spending a _lot_ of time and money moving to a new bucket or a different storage system.

The only way it would be a lot of money to move to a new bucket is if the bucket is hardcoded everywhere. Moving data from one bucket to another is not expensive, and a configuration change to a referenced URL should be cheap, too.


If static hosting is the purpose, just put CloudFront in front of it.


Did Amazon invent the rule that cert wildcards only match one level?


I don't believe they did, no.


Collateral freedom doesn't work in China. China has already blocked or throttled (hard to tell which, since GFW doesn't announce it) connections to AWS S3 for years.


Pretty sure Signal and Psiphon use it successfully. Yes it's throttled, but it's usable most of the time.

They probably got an ultimatum from Chinese authorities to either stop allowing this or get blocked entirely.

They just blocked Wikipedia last week... no one is too big to get shut down in China.


Very few people use Signal or Psiphon in China just FYI


One counter example is GreatFire, who use GitHub for their wiki [1].

[1] https://github.com/greatfire/wiki


Funny you mention this one particularly - they didn't really like it [1].

Thinking about it the other way round, how likely would have Amazon been the target of similar attacks?

1: https://arstechnica.com/information-technology/2015/04/meet-...


S3 is hardly the only example of collateral freedom in China. There are many other cases where the concept works.


Definitely worth checking out https://ipfs.io/. Even for those who don't or can't run IPFS peers on their own devices, IPFS gateways can fill much of the same purpose you listed above. Additionally, the same content should be viewable through _any_ gateway. Meaning if a Gateway provider ever amazoned you, you simply make the requests through a new gateway.


Yes, but restrictive governments will have no problem with blocking access to the ipfs.io domain via DNS and by blocking its IP addresses, whereas using the same method for blocking all access to AWS or google cloud is too costly as it will result in collateral damage at home. (Well China can block access to AWS located outside of China because there are AWS Regions in China)


With IPFS anyone can operate an HTTP relay to access the network from any arbitrary IP and/or distribute endpoint IPs to populate the daemon's DHT if run locally.


Use DNS over HTTP. Firefox is very easily configurable (network.trr.bootstrapAddress, network.trr.mode, etc) so that if you pick the right bootstrap provider and DNS over HTTP provider you'll never send an unencrypted DNS query (including no SNIs) and it will fail completely rather than reverting to your OS's DNS Client if it cannot be resolved via the DNS over HTTP channel you define.

Because the S3 buckets are virtual-hosted they share IPs so there is deniability if you can hide the DNS/SNI.


Yes but https SNI still exists.


This isn't a general-case solution (because you can no longer just give someone a link), but, can't you send "s3.amazonaws.com" or really any other bucket name in the SNI and give the full bucket name in the Host header inside the encrypted channel? Or does S3 block SNI/Host mismatches?
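
For illustration, a minimal sketch of that SNI/Host split (the bucket name is hypothetical, and S3 may reject or stop honoring such mismatches at any point): the TLS handshake and certificate check only see s3.amazonaws.com, while the bucket appears solely in the encrypted Host header.

    # SNI and cert validation use s3.amazonaws.com; the bucket name only
    # travels inside the encrypted Host header. Bucket name is hypothetical.
    import http.client

    conn = http.client.HTTPSConnection("s3.amazonaws.com")
    conn.request("GET", "/index.html",
                 headers={"Host": "mywebsite.s3.amazonaws.com"})
    resp = conn.getresponse()
    print(resp.status, resp.reason)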


This currently works, but considering their crackdown on domain fronting last year, I don't expect it to work for much longer.


They will possibly block mismatches (I believe they do for cloudfront etc. now), but also if the point of moving to sub domains is sharding, there's no guarantee the bucket you want is behind the faked hostname you connected to.


TLS v1.3 finally addresses this, https://blog.cloudflare.com/encrypted-sni/


TLS 1.3 was completed, published as RFC 8446 but eSNI is still a work in progress.

You need TLS 1.3 because in prior versions the certificate is transmitted plaintext, but eSNI itself is not part of TLS 1.3 and is still actively being worked on as https://datatracker.ietf.org/doc/draft-ietf-tls-esni/


I expect this will only work until the government in question is sufficiently angered that they just outright block the entire AWS infrastructure. Or whoever else supports ESNI.


But the only reason domain fronting works in the first place is because people think that large web hosting providers are too large to block.

If a hypothetical tyrannical government was willing to block all of Amazon S3, this change doesn't affect anything.


If it impacted Amazon’s (or whoever is targeted) bottom line then I would expect they would be open to dropping domain fronting support. But I admit I don’t know this for sure - time will tell.

China has blocked GitHub and Akamai before. https://www.latimes.com/business/technology/la-fi-tn-great-f...


But the alternative, which is to just not force this virtual host change in the first place, similarly might have gotten AWS blocked from those countries anyway


We need this in Chrome, badly.


Did you mean use DNS over HTTPS?


This is similar to domain fronting, which many providers are no longer allowing either.


Would encrypted SNI fix this?

[1] https://blog.cloudflare.com/encrypted-sni/


Yes, nothing man in the middle can do to detect the final domain being connected to.


"collateral freedom" is a failed concept. Many years ago people use Gmail to communicate, and they argue that Chinese government won't dare to block such important and neutral service.

Fewer than 1% of websites in China rely on S3 to deliver static files. Blocking AWS as a whole has happened before. There simply was no freedom that was "collateral". Freedom has to be fought for, hard, and earned.


Use Cloudflare or any free CDN service?

Edit: Why am I getting downvoted, it's a legit answer, CDN hides your origin.


Cloudflare is just a disaster for bypassing government censorship. If some website is blocked in my country and I try to access it with Tor or VPN then I better hope it is not behind Cloudflare, because Cloudflare just gives me endless Google captcha instead of the desired website.


That assumes the website actually wants to block abusive traffic. A freedom site won’t.


They get captchas from Cloudflare because their country is routinely used for malicious activity.


They do have a Captchas Effectively Off setting, but you are right that it still could trigger: https://support.cloudflare.com/hc/en-us/articles/200170096-H...


I think this is exactly what happened.


[flagged]


Bit of a moot point, since the US has passable safeguards such that you can host your content openly.


"right now I could put some stuff not liked by Russian or Chinese government (maybe entire website) and give a direct s3 link to https:// s3 .amazonaws.com/mywebsite/index.html. Because it's https — there is no way man in the middle knows what people read on s3.amazonaws.com."

Chinese government will just ban the whole s3.amazonaws.com domain. Same as facebook.com, youtube.com, google.com, gmail.com, wikipedia...

However, letting them ban subdomains will actually make S3 a usable service in China. It's a huge step forward.


You could say the same about Dragonfly.


You cannot solve a political problem with a technical solution.


Sure you can. Weapons research is a very common counterexample; people have been solving political problems with technical solutions ranging from sharpening spear-heads to achieving nuclear chain reactions.

(Of course "who is politically right" and "who has the most technical expertise on their side" are at best tenuously related, but that's a different and longstanding problem. If you believe you're politically right and you have technical expertise on your side, use it.)


Are there any non-violent technical solutions? I think we all know that's what that person really meant.


Radio Free Europe is a technical, non-violent solution to a political problem.

Viagra solved the political problem of tiger poaching.


Encryption, medicine, transportation, food production etc.


Using encryption to hide from a repressive regime may just make you a target.


Encryption also solves the political problem of "I need to communicate plans with my allies across long communications links without my enemies knowing what they are," which can often be opposed to violence (notify people of an opposing military action, evacuate armies or civilians, coordinate a plan to surrender without showing weakness, coordinate a plan to demand the other side surrender by showing so much strength they won't fight, etc.). Much early encryption research was for governments who were already targets to hide data from other governments.


Is there anything lost if you've already felt that you're a target?

Isn't there everything to gain from encouraging everyone to use encryption so that there are too many targets to process?


If encryption is used by most people you can't use it to identify suspicious activity.


Electronic voting machines. Digital signatures on passports. Long-distance communication, whether by telegraph, radio, or satellite. Norman Borlaug's wheat hack. Machine translation. The counting machines that powered the Holocaust (as mentioned, something being a political problem solvable by technical means does not mean it should be solved). Forensic analysis of DNA and fingerprints. Eurovision. Irrigation. Aqueducts. The printing press. I feel like there are many things....


Electronic voting isn't a solution, it's an attack. https://www.youtube.com/watch?v=w3_0x6oaDmI


That's true, but technology can be part of a political solution.

In particular, a political solution requires that people be able to communicate (in order to work together), and technology can be a component of that.


You are correct, but you forget that a technological solution like this one can help bring about the actual social change you are looking for. It is kinda difficult to bring about social change when your major communication and information distribution methods are gutted.


It's better to approach issues from all sides.


What was the Manhattan Project?


Bitcoin


What kind of company deprecates a URL format that's still recommended by the Object URL in the S3 Management Console?

https://www.dropbox.com/s/zzr3r1nvmx6ekct/Screenshot%202019-...

There are so, SO many teams that use S3 for static assets, make sure it's public, and copy that Object URL. We've done this at my company, and I've seen these types of links in many of our partners' CSS files. These links may also be stored deep in databases, or even embedded in Markdown in databases.

This will quite literally cause a Y2K-level event, and since all that traffic will still head to S3's servers, it won't even solve any of their routing problems.

Set it as a policy for new buckets if you must, and if you do, change the Object URL output and add a giant disclaimer.

But don't. Freaking. Break. The. Web.


Also in millions of manuals, generated PDFs, sent emails... Some things you just can't "update" anymore. It's a really disastrous change for web data integrity.


One of the magicians in Las Vegas (the one at the MGM) even used S3 image links in emails sent to everyone, "predicting" the contents of something that hadn't happened yet.


David Copperfield does that.


Came here for the same comment. I set up some S3-related stuff less than 2 months ago and the documentation, at least for the JS SDK, still recommends the path-style URL. I don't even recall a V1/V2 being mentioned.

That seems very inconvenient, and is pretty in line with my experience with AWS: I guess their services are cheap and good, but oh boy! The developer experience is SO bad.

- So many services that it is very hard to know what to use for what
- Complex and not user-friendly APIs, coming with terrible documentation

I'm pretty sure they'd get a lot more business if they invested a bit more in developer friendliness - right now I only use aws if a Client really insists on it, because despite having used it a fair amount, I'm still not happy and comfortable with it.


Except S3 storage isn't actually all that cheap anymore. Amazon has literally never changed the price of S3 storage over the last decade even though storage costs have plummeted over the same time period.

There are other providers out there like Digital Ocean Spaces, Wasabi, and Backblaze that offer storage solutions for much cheaper than S3 now.

Digital Ocean Spaces and Wasabi in particular actually use the Amazon S3 api for all their storage. This means you can switch over to either of those solutions without changing the programming of your app or the S3 plugins or libraries that you are currently using. The only thing you change is the base url that you make api calls to.
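
For example, a minimal boto3 sketch of pointing the same client code at an S3-compatible provider (the endpoint and credentials below are placeholders):

    # Placeholder endpoint/credentials; only endpoint_url differs from a
    # plain S3 client, the rest of the calls stay the same.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://nyc3.digitaloceanspaces.com",
        aws_access_key_id="SPACES_KEY",
        aws_secret_access_key="SPACES_SECRET",
    )
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])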

Backblaze has their own API, but they also offer a few additional features not offered under S3's api.


"Except S3 storage isn't actually all that cheap anymore. Amazon has literally never changed the price of S3 storage over the last decade even though storage costs have plummeted over the same time period."

I don't think that's true ... we (rsync.net) try to very roughly track (or beat) S3 for our cloud storage pricing and we've had to ratchet pricing down several times as a result.

I don't think we just imagined it ...


Please at least get your facts right. They have lowered prices and introduced less expensive options.


Uhh....except that they have cut prices a bunch of times in the last decade and launched cheaper storage like one zone


Out of curiosity what cloud service do you prefer?


To be honest, I don't have much experience with the other cloud services, except Heroku, which I wouldn't put in the same category.

I have a DO box for myself and their docs/admin panels are better imo.

But my comment about aws is not really a comparison, just more a comment about my experience as a non-devops engineer, and how I hate having to read their docs.


I think recent years have proven that despite all the memeing about things like rest, "don't break the web" is not a value shared by all of the parties involved.

The takeaway is that for those of us that do still wish to uphold those values, we can let this serve as a lesson that we should not publish assets behind urls we don't control.


Agreed. Cool URIs don't change. [1]

[1] https://www.w3.org/Provider/Style/URI


Amazon explicitly recommends naming buckets like "example.com" and "www.example.com" : https://docs.aws.amazon.com/AmazonS3/latest/dev/website-host...

Now, it seems, this is a big problem. V2 resource requests will look like this: https://example.com.s3.amazonaws.com/... or https://www.example.com.s3.amazonaws.com/...

And, of course, this ruins https. Amazon has you covered for *.s3.amazonaws.com, but not for *.*.s3.amazonaws.com or even *.*.*.s3.amazonaws... and so on.

So... I guess I have to rename/move all my buckets now? Ugh.


That's an interesting contradiction to the rest of their docs. Their docs in other place repeatedly state using periods "." will cause issues. https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestri...

e.g.

> The name of the bucket used for Amazon S3 Transfer Acceleration must be DNS-compliant and must not contain periods (".").

and as you mentioned

> When you use virtual hosted–style buckets with Secure Sockets Layer (SSL), the SSL wildcard certificate only matches buckets that don't contain periods. To work around this, use HTTP or write your own certificate verification logic. We recommend that you do not use periods (".") in bucket names when using virtual hosted–style buckets.

AWS Docs have always been a mess of inconsistencies so this isn't a big surprise. I dealt with similar naming issues when setting up third-party CDNs since ideally Edges would cache using a HTTPS connection to Origin. IIRC the fix was to use path-style, but now with the deprecation it'd need a full migration.

Wonder how CloudFront works around it. Maybe it special cases it and uses the S3 protocol instead of HTTP/S.


> So... I guess I have to rename/move all my buckets now? Ugh.

It's worse than that. You can't rename a bucket. You will have to create a new bucket and copy everything over.


It’s not a huge problem thanks to S3 batch

https://aws.amazon.com/blogs/aws/new-amazon-s3-batch-operati...


I hadn't noticed/heard of this new feature.

Hmm, I was going to say something about the _cost_ of getting/putting a large number of objects in order to 'move' them to a new bucket. Does the batch feature affect the pricing, or only the convenience?


In some cases cross-region replication may help too.

Sadly neither batch operations nor replication is free.


> In some cases cross-region replication may help too.

How so? cross-region replication doesn't replicate existing objects, only new ones.


If you contact AWS, they can replicate existing objects.


FWIW - I found it fairly trivial to set up CloudFront in front of my buckets [1], so that I can use HTTPS with AWS Cert Mgr (ACM) to serve our s3 sites on https://mydomain.com [2].

I set this up some time ago using our domain name and ACM, and I don't think I will need to change anything in light of this announcement.

1 - https://docs.aws.amazon.com/AmazonS3/latest/dev/website-host...

2 - https://docs.aws.amazon.com/acm/latest/userguide/acm-overvie...


That isn't a solution for every use case. For example, it means you can't use the s3 VPC gateway for those buckets.


How does using cloudfront for a bucket prevent using VPC endpoint for s3? This doesn't make any sense.


I'm not OP, but if you're using a VPC endpoint for S3, a common use case is so you can restrict the S3 bucket to be accessible only from that VPC. That VPC might be where even your on-site internal traffic is coming from, if you send S3-bound traffic that way.

You could still put CloudFront in front of your bucket but CloudFront is a CDN, so now your bucket contents are public. You probably want to access your files through the VPC endpoint.


The point of the VPC endpoint is that you’ve whitelisted the external services and have a special transparent access to S3.

With a CloudFront proxy you’d have to open up access to all of CloudFront’s potential IP addresses to allow the initial request to complete (which would then redirect to S3). Plus the traffic would need to leave your VPC.


I'm not saying using cloudfront prevents you from using VPC endpoints for s3. I'm saying the workaround of using cloudfront doesn't work if you want to use the VPC endpoint for s3.


Care to elaborate? Do you mean S3 VPC Endpoints? Because this could screw many in-VPC Lambdas that need S3.


yes, that is what I mean. If your bucket name contains a dot, you will no longer be able to access it with https with an S3 VPC Endpoint. (using http or going to cloudfront instead of the S3 VPC Endpoint would still work)


Was curious when someone would bring this up. This has been an issue for such a long time and still the docs are so quiet about it.


isn't that domain name style bucket naming only for hosting a static website from an s3 bucket? otherwise, you can name the bucket whatever you want within the rest of the naming rules.


The point of that is solely for doing website hosting with S3 though - where you'll have a CNAME. Why would you name a bucket that way if you're not using it for the website hosting feature?


Not too long ago, we used S3 to serve large amounts of publicly available data in webapps. We had hundreds of buckets with URL style names. Then the TLS fairy came along. Google began punishing websites without HTTPS and browsers prevented HTTPS pages from loading HTTP iframes.

Suddenly we had two options. Use CloudFront with hundreds of SSL certs, at great expense (in time and additional AWS fees), or change the names of all buckets to something without dots.

But aaaaah, S3 doesn't support renaming buckets. And we still had to support legacy applications and legacy customers. So we ended up duplicating some buckets as needed. Because, you see, S3 also doesn't support having multiple aliases (symlinks) for the same bucket.

Our S3 bills went up by about 50%, but that was a lot cheaper than the CloudFront+HTTPS way.

The cynic in me thinks not having aliases/symlinks in S3 is a deliberate money-grabbing tactic.


It also comes up when working with other people's buckets. Right now, if you build a service that is supposed to fetch from a user-supplied S3 bucket, path-style access has been the safest choice.

Now one would need to hook the cert validation and ignore dots, which can be quite tricky because it's deeply hidden in an SSL layer.


How does the S3 CLI handle this? Do they hook cert validation? (I assume they must actually validate HTTPS...)


Pretty sure you get a cert error or they still use paths. Boto (what it's built on) has had an open issue for this for a few years now.



You might be POSTing user uploads to uploads.example.com.

https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPO...


This could still use the CNAME trick though no?


Does anyone have insight on why they're making this change? All they say in this post is "In our effort to continuously improve customer experience". From my point of view as a customer, I don't really see an experiential difference between a subdomain style and a path style - one's a ".", the other's a "/" - but I imagine there's a good reason for the change.


Three reasons -

First to allow them to shard more effectively. With different subdomains, they can route requests to various different servers with DNS.

Second, it allows them to route you directly to the correct region the bucket lives in, rather than having to accept you in any region and re-route.

Third, to ensure proper separation between websites by making sure their origins are separate. This is less AWS's direct concern and more of a best practice, but doesn't hurt.

I'd say #2 is probably the key reason and perhaps #1 to a lesser extent. Actively costs them money to have to proxy the traffic along.


I think they should explain this a bit better. That said

For core services like compute and storage a lot of the price to consumers is based on the cost of providing the raw infrastructure. If these path style requests cost more money, everyone else ends up paying. It seems likely any genuine cost saving will be at least partly passed through.

I wouldn't underestimate #1, not just for availability but for scalability. The challenge of building some system that knows about every bucket (as whatever sits behind these requests must) isn't going to get any easier over time.

Makes me wonder when/if dynamodb will do something similar


So "improving customer experience" is really Amazon speak for "saving us money"


Makes it faster, reduces complexity and would allow them to reduce prices too


Pricing is set by markets based on competitors' offerings. Reduced costs could simply result in monopoly rents.


reduces incentive for them to raise prices


And reduces chances of outages... which is good for both customers and AWS.


Do they not charge for network costs anyway?

A more optimistic view is that this allows them to provide a better service.


They charge for data transfer. They don't charge based on the level of complexity needed for their internal network operations.


Everything is a tradeoff.


With Software defined networking you don't need the subdomain to do that.


Yeah you basically do. Sure you can reroute the traffic internally over the private global network to the relevant server, but that's going to use unnecessary bandwidth and add cost.

By sharding/routing with DNS, the client and public internet deal with that and allow AWS to save some cash.

Bear in mind, S3 is not a CDN. It doesn't have anycast, PoPs, etc.

In fact, even _with_ the subdomain setup, you'll notice that before the bucket has fully propagated into their DNS servers, it will initially return 307 redirects to https://<bucket>.s3-<region>.amazonaws.com

This is for exactly the same reason - S3 doesn't want to be your CDN and it saves them money. See: https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosti...


I'm not sure you understand how anycast works. It would be very shocking if Amazon didn't make use of it and it's likely the reason they do need to split into subdomains.

Anycast will pull in traffic to the closest (hop distance) datacenter for a client, which won't be the right datacenter a lot of the time if everything lives under one domain. In that case they will have to route it over their backbone or re-egress it over the internet, which does cost them money.


AWS in general are not fans of Anycast. Interesting thread from one of their principal engineers on the topic.

https://twitter.com/colmmacc/status/1067265693681311744

Google Cloud took a different approach based on their existing GFE infrastructure. It does not really seem to have worked out, there have been a couple of global outages due to bad changes to this single point of failure, and they introduced a cheaper networking tier that is more like AWS.


> AWS in general are not fans of Anycast.

I don't think that's true. Route53 has been using Anycast since its inception [0].

The Twitter thread you linked simply points out that fault isolation is tricky with Anycast, and so I am not sure how you arrived at the conclusion that you did.

[0] https://aws.amazon.com/blogs/architecture/a-case-study-in-gl...


Route53 is the exception, compared to Google Cloud, where the vast majority of APIs are anycast through googleapis.com.

It's a good choice for DNS because DNS is a single point of failure anyway; see yesterday's multi-hour Azure/Microsoft outage!


Got it, thanks. Are there research papers or blog posts by Google that reveal how they resume transport layer connections when network layer routing changes underneath it (a problem inherent to Anycast)?


I do understand how it works and can confirm that AWS does not use it for the IPs served for the subdomain-style S3 hostnames.

Their DNS nameservers which resolve those subdomains do of course.

S3 isn't designed to be super low latency. It doesn't need to be the closest distance to client - all that would do is cost AWS more to handle the traffic. (Since the actual content only lives in specific regions.)


Huh? If the DNS doesn't see the bucket name, how can it hand back the right IP of where the bucket lives?


How does that work? My browser is going to send all requests to the same domain to the same place.


Anycast ip.

You have a sole IP address. All traffic is routed to the nearest PoP. The PoP makes the call on where and how to route the request.

Look up the Google Front End (GFE) whitepaper, or the Google Cloud global load balancer.

That front end server that lives in the PoP can also inspect the http packets for layer 7 load balancing.

https://cloud.google.com/load-balancing/docs/load-balancing-...


Added to my comment, but basically S3 is not a CDN - it doesn't have PoPs/anycast.

They _do_ use anycast and PoPs for the DNS services though. So that's basically how they handle the routing for buckets - but relies entirely on having separate subdomains.

What you're saying is correct for Cloudfront though.


With SDN the PoP would only need to receive the TCP request and proxy TCP acks.

Raw data could flow from a different PoP that's closer to DC.

Aka user->Closest PoP-> backhaul fiber -> dc->user


Presumably Amazon has PoPs for CloudFront; why couldn't S3 share the same infrastructure?


They could do that, but they have absolutely no incentive to do so - all it would do is cost them more. S3 isn't a CDN and isn't designed to work like one.


It means two hops, not one. S3 GETs can be cached, but then you have a whole host of issues. Better to get to the origin.


One big reason to me: cookie security

Currently all buckets share a domain and therefore share cookies. I've seen attacks (search for cookie bomb + fallback manifest) that leverage shared cookies to allow an attacker to exfiltrate data from other buckets


Cookies support URL path restrictions.


That doesn't prevent unauthorized reading of the cookies. The only way to properly prevent it is using a different domain/subdomain.

https://developer.mozilla.org/en-US/docs/Web/API/document/co...


The only obvious thing that occurs to me is that bringing the bucket into the domain name puts it under the same-origin policy in the browser security model. Perhaps there are a significant number of people hosting their buckets and compromising security this way? Not something I have heard of but it seems possible. Makes me wonder if they are specifically not mentioning it because this is the reason and they know there are vulnerable applications in the wild and they don't want to draw attention to it?


Removing my comments because I can't seem to delete them...


Does it bother you the domain is amazon.com and not com.amazon?


I can't read what you're replying to, but it absolutely bothers me. The current scheme has this completely random double reversal in the middle of the URL; it would have been so trivial to just make it actually big-endian, but instead we have this big-little-big endian nonsense. Far too late to change it now, but it is ugly and annoying.


Probably because they want to improve the response time with a more precise DNS answer.

With s3.amazonaws.com, they need to have a proxy near you that downloads the content from the real region. With yourbucket.s3.amazonaws.com, they can give you the IP of an edge in the same region as your bucket.


I would guess cookies and other domain scoped spam/masking 'tricks'? I've never tried but perhaps getting a webpush auth on that shared domain could cause problems


It’s a known trick for spammers to leverage the amazon domain to rank higher in search rankings.


That's a search engine problem, not a hosting problem.


Virtual and path style share the same domain suffix. It's also *.amazonaws.com, not amazon.com.


Public suffix list: https://publicsuffix.org

s3.amazonaws.com subdomains are as distinct from each other as co.uk subdomains.


I have no visibility into Amazon, but using subdomains let you shard across multiple IP addresses.


Does the "you are no longer logged in" screen not infuriate anyone besides me? There doesn't seem any purpose to it just redirecting you to the landing page when you were trying to access a forum post that doesn't even require you be logged in.

Absolutely mind boggling with as much as they pay people they do something so stupid and haven't changed it after so long.


This is going to break so many legacy codebases in ways I can't even imagine.

Edit: Could they have found a better place to announce this than a forum post?


There is probably a PR document in the process of being released; the change is more than a year out, after all.


Couldn't they do a redirect (301) to not break code?


No, because path-style bucket names weren't originally required to conform to dns naming limitations. I don't know how they're going to migrate those older non-conforming buckets to the host-style form.


You're on Hacker News..


I wonder how they’ll handle capitalized bucket names. This seems like it will break that.

S3 has been around a long time, and they made some decisions early on that they realised wouldn’t scale, so they reversed them. This v1 vs v2 url thing is one of them.

But another was letting you have “BucketName” and “bucketname” as two distinct buckets. You can’t name them like that today, but you could at first, and they still work (and are in conflict under v2 naming).

Amazons own docs explain that you still need to use the old v1 scheme for capitalized names, as well as names containing certain special characters.

It’d be a shame if they just tossed all those old buckets in the trash by leaving them inaccessible.

All in, this seems like another silly, unnecessary deprecation of an API that was working perfectly well. A trend I'm noticing more often these days.

Shame.


One of the weird peculiarities of path-style API requests was that it meant CORS headers meant nothing for any bucket pretty much. I wrote a post about this a bit ago [0].

I guess after this change, the cors configuration will finally do something!

On the flip side, anyone who wants to list buckets entirely from the client-side javascript sdk won't be able to anymore unless Amazon also modifies cors headers on the API endpoint further after disabling path-style requests.

[0]: https://euank.com/2018/11/12/s3-cors-pfffff.html


A similar removal is coming in just 2 months for V2 signatures: https://forums.aws.amazon.com/ann.jspa?annID=5816

This could be just as disruptive.

Difficult to say that they will actually follow through, as the only mention of this date is in the random forum post I linked.


Doubtful; sigv2 is not supported in all regions, so all current software that wants to be compatible with more than a portion of regions already has to support the newer signature version.

This is a great way of introducing breaking changes. Imagine, for example, that "new websites" were only available over v6: IPv6 would be at near 100% adoption.


Amazon is proud that they never break backwards compatibility like this, with quotes like "the container you are running on Fargate will keep running 10 years from now."

Something weird is going on if they don’t keep path style domains working for existing buckets.


Only 10 years. Shame that that is a boast. 100 years would be better.


Is there a deprecation announcement that does not include the phrase "In our effort to continuously improve customer experience"?

Edit: autotypo


Fun fact: The s3 console as of right now still shows v1 urls when you look at the overview page for a key/file.


I was already planning a move to GCP, but this certainly helps. Now that cloud is beating retail in earnings, the ‘optimizations’ come along with it. That and BigQuery is an amazing tool.

It’s not like I’m super outraged that they would change their API, the reasoning seems sound. It’s just that if I have to touch S3 paths everywhere I may as well move them elsewhere to gain some synergies with GCP services. I would think twice if I were heavy up on IAM roles and S3 Lambda triggers, but that isn’t the case.


This is most likely to help mitigate the domain being abused for browser security due to the same-origin policy. This is very common when dealing with malware, phishing, and errant JS files.


`In our effort to continuously improve customer experience`... what's the actual driver here? I don't see how going from two options to one, and forcing you to change if you are on the wrong one, improves my experience.


http://chainsawsuit.com/comic/2017/12/07/improvements/

> We asked our investors and they said you're very excited about it being less good, which is great news for you!


Reduce complexity for future customers


There are millions of results for "https://s3.amazonaws.com/" on GitHub: http://bit.ly/2GUVjDi


GitHub search is really poor. It is also including the uses of the subdomain style.


Agreed. The search could use some love for doing exact match.

The scale at which different libraries, tools, and systems depending on hard-coded S3 urls will break by this change is insane.


I see a problem when using the S3 library with other services that support S3 but only offer some kind of path-style access, like MinIO or Ceph with no subdomains enabled. It will break once their Java API removes the old code.


    ag -o 'https?://s3.amazonaws.com.*?\/.*?\/'| awk -F':' '{print $1, $4}' | sort | uniq | cut -d'/' -f 1 | sort | uniq -c | gsort -h -rk1,1
For anyone interested in finding out the occurrences in their codebase. (Mac)


AWS API is an inconsistent mess. If you don't believe me try writing a script to tag resources. Every resource type requires using different way to identify it, different way to pass the tags etc. You're pretty much required to write different code to handle each resource type.


This will hopefully prevent malicious sites hosted on v1-style buckets from stealing cookies/localstorage/credentials/etc.


Care to elaborate? Why would there be any secrets stored via s3.amazonaws.com?


I'm so glad I saw this. I would have been very confused when this went live had I not seen this post today. I wish I could upvote this more.


Hm. I had a local testing setup using an S3 standin service from localstack and a Docker Compose cluster, and path-style addressing made that pretty easy to set up. Anyone else in that "bucket?" Suggestions on the best workaround?


Commercial platform breaks things people have built on it for "the sake of continuously improving customer experience. "

Also: see photos of your favorite celebrity walking their dog and other news at 11.


So much for customer obsession.


I don't think "never change" is customer obsession. Improving products is customer obsession.


this goes beyond "never change". never changing your product is a bad thing, but never changing your URLs is a mantra everybody should live by.


https://github.com/search?q=%22https%3A%2F%2Fs3.amazonaws.co...

Over a million results (+250k http). This is going to be painful.


TL;DR

Migrate

from: s3.amazonaws.com/<bucketname>/key

to: <bucketname>.s3.amazonaws.com/key

no later than: September 30th, 2020
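
If you have a pile of hard-coded links to rewrite, the transformation is mechanical. A throwaway sketch (Python; assumes the global endpoint and DNS-compliant bucket names, and regional endpoints follow the same pattern):

    from urllib.parse import urlparse

    def to_virtual_hosted(url):
        # "https://s3.amazonaws.com/<bucket>/<key>" -> "https://<bucket>.s3.amazonaws.com/<key>"
        parsed = urlparse(url)
        bucket, _, key = parsed.path.lstrip("/").partition("/")
        return f"{parsed.scheme}://{bucket}.{parsed.netloc}/{key}"

    print(to_virtual_hosted("https://s3.amazonaws.com/mybucket/path/to/key.txt"))
    # https://mybucket.s3.amazonaws.com/path/to/key.txt

One caveat: bucket names containing dots hit TLS certificate errors in the virtual-hosted style, so those need more thought than a simple rewrite.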


For other folks looking for announcement feeds, see https://forums.aws.amazon.com/rss.jspa - announcements are the asterisks.


How does this impact CloudFront origin domain names? I have an s3 bucket as a CF origin and the format the AWS CF Console auto-completes to is:

<bucket>.s3.amazonaws.com

Do I need to change my origin to be Origin domain name: s3.amazonaws.com, Origin Path: <bucket>?

This is a sneaky one that will bite lots of folks as it is NOT clear.


I think you have this backwards.

<bucket>.s3.amazonaws.com is the V2 URL format.


"In our effort to continuously improve customer experience, the path-style naming convention is being retired in favor of virtual-hosted style request format. Customers should update their applications"

How does forcing customers to rewrite their code to conform to this change improve customer experience?


Maybe as the technical debt continues to come due with the current architecture, it's time to make the hard choices to keep a good customer experience?


It's on Amazon to pay off their technical debt, not their customers. They are turning off a feature at their customers' expense.

That's the exact opposite of good customer service.


IMO, this is an improvement - it makes it clear that the bucket is global and public, whereas with the path you could believe that it was only visible when logged into your account.

It also helps people understand why bucket names have the naming restrictions they do.


> it makes it clear that the bucket is global and public

How does it do that? You can host a private bucket at foo.s3.amazonaws.com just fine.


I think the claim is that the namespace is global and public, i.e., you and I can't both have buckets named "foo". There is only one S3 bucket named "foo" in the world.

If it's https://s3.amazonaws.com/foo/ you could believe that it's based on your cookies or something, but if it's https://foo.s3.amazonaws.com/ it's more obvious that it's a global namespace in the same way DNS domain names are (and that it's possible to tell if a name is already in use by someone else, too).


This will break software updates for so many systems, probably even some Amazon devices.


Always confused me how they had two different ways of retrieving the same object. Glad that they're sticking to the subdomain option. Sucks to go back and check for old urls though. This change might break a good chunk of the web.


One way to do this without breaking existing applications would be to charge more for path-style requests for a while, then deprecate once enough people have moved away from it, so that fewer people are outraged by the change.


> In our effort to continuously improve customer experience, [feature x] is being retired

In this case, the most improved experience I can think of would be that of the sundry nefarious entities monitoring internet traffic.


Does anyone know if this will affect uploads? We are getting an upload URL using s3.createPresignedPost, and this returns (at least currently) a path-style URL...
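
For what it's worth, the analogous call in boto3 (generate_presigned_post) also hands back the URL, so it's easy to check what style your SDK version currently produces. A minimal sketch with made-up names:

    import boto3

    s3 = boto3.client("s3")

    # Returns a dict with "url" and "fields" to use in a browser form POST.
    post = s3.generate_presigned_post(
        Bucket="my-example-bucket",   # made-up bucket name
        Key="uploads/example.txt",
        ExpiresIn=3600,
    )
    print(post["url"])     # check whether this is path-style or virtual-hosted
    print(post["fields"])  # form fields (key, policy, signature, ...)

Whether (and when) the SDKs switch presigned POST URLs to the virtual-hosted style before the cutoff seems like the real question; worth watching the SDK changelogs.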


The title is misleading. Path-style requests like "/foo/bar/file.ext" are still supported.

What changes is that the bucket name must be in the hostname.


Path style can be used in hostnames?


I switched to MinIO for anything new. Happy user - https://min.io/



Anyone know if this will affect the internal-use (e.g. EMR) s3 scheme: s3://bucket/path/key?


No. The file system implementation uses the AWS S3 client, which will automatically use virtual-host style when possible (if the bucket name supports it).


No it won't


Hmm I don’t understand why this change is happening. What does this gain? Removal of tech debt?


I didn't know path style was possible.

I'd have found it really useful. :-/


They should at least provide a free redirect service.


Boo. Now old packages won't work.


This is going to be the Y2K of September 2020.



Does that mean people still have tons of public-by-mistake S3 buckets because of their clumsy UI, and they just gave up and are sweeping what's left under the rug?


They're not public by "mistake". They're public because someone was lazy, wanted to share something and didn't want to bother with authentication. They knew they were making it public but figured no one would find it except the desired recipient so why put in the effort to make a better solution.


This wouldn't have any impact on that issue. Those accidentally-public buckets are already accessible in both the path-based and the virtualhost-based method.


You really have to try to make a bucket public. Even when you do, you get warnings within the UI, and there is a column showing you it’s public.

Is there even a UI option to make a bucket public anymore? I always edit the bucket policy and add the JSON to make it public read only.


A lot of that is a fairly recent development, though. Lots of buckets lying around from the years when it largely didn't warn you.


I'm kind of shocked at some of the responses here... everything from outrage, to expressing dismay at how many things could break, to how hard this is to fix, to accusing Amazon of all kinds of nefarious things.

How hard is it for 99% of the developers and technical leaders here to search your codebase for s3.amazonaws.com and update your links in the next 18 months?


> How hard is it for 99% of the developers and technical leaders here to search your codebase for s3.amazonaws.com and update your links in the next 18 months?

I've got a number of hobby projects, some hosted on AWS, that I built ages ago. I have no idea how this change will affect those projects because ... I just frankly don't remember the codebases. I built them on a weekend, set them up, and now just use them.

It isn't the end of the world. But I'm not really excited about having to dig up old code, re-grok it, and fix anything that changes like these might break.

I suppose that's just the nature of a developer's life. But I think many of us long for a "write once, run forever" world. Horror stories about legacy software aside, it was nice to be able to write software for Windows and then have it work a decade later.


> I suppose that's just the nature of a developer's life. But I think many of us long for a "write once, run forever" world.

Well, I think AWS developers are in the same boat, right? Here we are.

An architectural decision that made sense many years ago now needs to be rethought and updated.


How is that the same boat? Seems like the opposite.


It's a reasonable timeframe, but not all codebases are actively maintained. In addition, it's conceivable that there's some hidden custom library somewhere that crafts S3 URIs, making it near impossible to simply grep for a certain URI style in the codebase. So people may have to scour codebases they don't even maintain to look for random code which may craft an S3 URI in a certain way, then fork that project, fix the functionality, publish it, and use the fork. Then they may need to fork every other project that uses that original project, and do the same thing ad infinitum. If this is a private company, they have to do all that within some corp-wide globally available private repo, which either means (1) making this repo public on the internet, or (2) adding it to every security group they have that pulls code. It may even require adding Direct Connect or PrivateLink. So that means a long research project, followed by a project of fixing, testing, and releasing new code, followed by a project to get network access to the custom repos and change software to use them.

So, surprisingly hard, but doable. And from the customer's perspective, a huge pain in the ass, just to save Amazon some pennies on bandwidth.


You're assuming that most code is actively maintained. It isn't. This is going to break so much older stuff.


While true, I don't think backwards compatibility forever is the right solution to the old code problem.



