
I just want to say how cool it is to see you doing a non-trivial review of someone else’s thing here.

Why does a CoC have anything to do with a maintainer stepping down due to a technical disagreement?

And the way you word it makes it seem like you think CoCs are bad or “forced”; I don’t particularly want to engage there, but I’d encourage you to reflect on why you think that.



That share link 404s for me, FWIW. I’d be interested in seeing it!


Weird - https://chatgpt.com/share/6701450c-bc9c-8006-8c9e-468ab6f67e... is working for me in a Chrome Incognito window.

Here's a copy of the Markdown answer it gave me: https://gist.github.com/simonw/ffbf90e0602df04c2f6b387de42ac...


It’d also be neat if there were a way to sort by the PassMark CPU benchmark score:

https://www.cpubenchmark.net/


As someone who has lived in several countries and currently lives in Canada, I will respectfully have to disagree with both your opinion of the CBC and your beliefs about government intervention. Some parts of Canadian local/provincial/federal government seem deeply dysfunctional, but some are extremely helpful and better than I’ve seen elsewhere.


You're seeing what remains of a strong and healthy period of Canadian history. But almost every facet is currently under threat and significant stress.

You can't fund a "news" organization with public money and expect them to be an independent agency that holds the government's feet to the fire. They don't do much more than add a thin veneer of objectivity and independent analysis for any significant government initiative.

The article posted above is just a small piece of evidence for the broken nature of the media. There is a torrent more spewing out daily.


Counterpoint: You can't fund a news organization with private money and expect them to hold private organizations' feet to the fire. Any time they are critical of a major business, they lose advertisers.

Personally, I have a problem with the viewpoint that democratic governments are some sort of adversary when, in actuality, private businesses are the entities that have no accountability and over which we have zero direct control. I agree there should be independent criticism of government, but IMO state media from a democratic government is naturally going to be better overall than only having private media.


Without government picking winners and losers (through protective legislation and regulation), the free market means that honest competition keeps corporations in check. There is an adversarial element inherent in the system. The reason we have so many entrenched monopolies is that they have captured the government... a government that has no adversary and isn't even properly monitored by the corrupt and complicit media.


Everyone has their own interests in mind, so everyone has to be balanced against everyone else. Come on, people, we figured this out hundreds of years ago.


Tell that to companies doing extremely large-scale machine learning. Or any cloud infrastructure provider. Or CDNs. Or literally any video production company that owns a render farm. Or any company doing large-scale media transcoding/streaming.

Maybe "don't have thousands of servers" is just a bad take :)


This is part of what makes this class of bug so bad; it's not something you can "fix" at the IDP without doing exactly this. The issue occurs when a Service Provider (SP) is misconfigured, and in many cases the IDP doesn't actually get any sort of feedback that would let them detect the issue.


Is there a way to audit that kind of misconfiguration?


Yes, tediously, with auditors who understand SAML and the (very informal) literature on SAML attacks. Hence, the concern.


Wouldn't it be enough to enter an invalid audience when configuring the IDP? If the audience is ignored the sign-in flow still allows you to log on and you know the SP is broken.


Sure, but two reasons that's not quite optimal and you might not want to do that in practice:

1. This tells you the SP is broken; just using individual keys means it no longer matters whether the SP is broken or not. Individual keys are in your control; fixing the SP is much less likely to be. And you can just set up a practice of doing it for everything, and now it's one less thing to test for.

2. That still requires a bit of testing that's somewhat annoying to set up, which most vendorsec practices don't have time for. It's also only one of dozens of things you need to test for. Ignoring audiences is super common, but a more subtle problem is that you can get a valid SAML assertion signed _for the wrong domain_, and now you can sign in as a competitor's staff.

As you hint at, having an SP that'll just self-service accept any random metadata.xml at least gives you a fighting chance :)
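
To make the "individual keys" idea concrete, here's a rough Go sketch of minting a fresh signing key pair and self-signed cert per SP at registration time (names like newSigningCertForSP are purely illustrative, not from any particular IdP product):

    package idp

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "time"
    )

    // newSigningCertForSP mints a fresh RSA key pair and self-signed
    // certificate dedicated to a single SP. Because no other SP trusts
    // this cert, an assertion replayed from one integration can't be
    // accepted by another, even if that SP ignores the audience.
    func newSigningCertForSP(spEntityID string) (*rsa.PrivateKey, []byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "idp-signing-" + spEntityID},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(2, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature,
        }
        // DER-encoded cert to publish in this SP's copy of the IdP metadata.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            return nil, nil, err
        }
        return key, der, nil
    }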


My question was more related to it being tedious. And now you say it requires a bit of testing which is annoying to set up. Isn't testing this just a matter of changing the audience field to something incorrect and trying to sign on? That should take like 2 minutes?


If you just change the audience field, the signature will be invalid, so it might tell you the SP won't accept a bad signature, but it doesn't tell you that the SP would accept a correct-signature-for-wrong-audience assertion. And now we've explored two states in that very big tedium space I mentioned; it still doesn't tell you anything about e.g. canonicalization bugs or cross-domain bugs. Those are much harder to test, because they require your IdP to sign specifically crafted malicious assertions, so you can't test them with your standard Okta install or whatever.

So, sure: you can test this one specific bug by replaying an assertion for a different SP. Or you can make your IdP use new key pairs every time, and then you're definitionally immune to the entire bug class forever with every SP. Even if the replay itself takes 2 minutes, getting the tester to a place where they can exploit it takes way longer for most companies, so it's much more effective to just eliminate entire classes of bugs via policy.

TL;DR: you're right (modulo the amount of time) for this particular bug, but why bother? And if you're going to bother testing, why test for this one specific bug that's cheaper to avoid a different way? (I can think of a reason to test; but then the tedium comes in :))
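
For anyone wondering what the correct-signature-for-wrong-audience check looks like on the SP side, here's a minimal Go sketch of just the audience comparison (struct and function names are illustrative; a real SP library also verifies the XML signature before touching anything inside the assertion):

    package sp

    import (
        "encoding/xml"
        "fmt"
    )

    // Just the slice of the assertion we need for this one check.
    type assertion struct {
        XMLName    xml.Name `xml:"Assertion"`
        Conditions struct {
            AudienceRestriction struct {
                Audience []string `xml:"Audience"`
            } `xml:"AudienceRestriction"`
        } `xml:"Conditions"`
    }

    // checkAudience rejects any assertion that doesn't name our entity ID.
    // Skipping this step is exactly the misconfiguration discussed above:
    // any assertion the IdP ever signed, for any SP, would be accepted here.
    func checkAudience(rawAssertion []byte, ourEntityID string) error {
        var a assertion
        if err := xml.Unmarshal(rawAssertion, &a); err != nil {
            return err
        }
        for _, aud := range a.Conditions.AudienceRestriction.Audience {
            if aud == ourEntityID {
                return nil
            }
        }
        return fmt.Errorf("assertion not intended for %s (audience mismatch)", ourEntityID)
    }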


It's a theoretical yes but a practical no, because the number of people on this planet who understand SAML well enough, want to work on it, and are available for auditing is damn close to zero, if not zero.


This is not an outlandish thing to ask of your security auditor _for your own app_. (It's something we do for clients.)

The challenging part is doing it for vendorsec, when you are vetting _other apps_. The timelines that stakeholders (other people in the company who want to use the app) are willing to accept are like, a week, and even if you somehow had a SAML testing praxis at the ready that enumerates all of the problems SAML has historically had, there's a lot more to test than just the SAML bits.

So: in summary: I don't think that number is anywhere near zero, though sure, it's not huge. The hard part is failures being silent and being in parties you don't control.


See Lemon, which generates re-entrant and threadsafe parsers and is used in SQLite: https://www.hwaci.com/sw/lemon/


Shout-out for "AutoSpotting", which transparently re-launches a regular On-Demand ASG as spot instances, and will fall back to regular instances: https://github.com/AutoSpotting/AutoSpotting/

Combine that with the fact that you can have an ASG with multiple instance types: https://aws.amazon.com/blogs/aws/new-ec2-auto-scaling-groups...

and you can be reasonably certain you'll never run out of capacity unless AWS runs out of every single instance type you have requested, terminates your Spot instances, and you can't launch any more On-Demand ones.

(and even so, set a minimum percentage of On-Demand in AutoSpotting to ensure you maintain at least some capacity)
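
For reference, a rough sketch of what the mixed-instance-types setup looks like with the v1 Go SDK (the launch template name, subnets, and instance types are placeholders; double-check the field names against your SDK version):

    package main

    import (
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/autoscaling"
    )

    func main() {
        svc := autoscaling.New(session.Must(session.NewSession()))

        _, err := svc.CreateAutoScalingGroup(&autoscaling.CreateAutoScalingGroupInput{
            AutoScalingGroupName: aws.String("web"),
            MinSize:              aws.Int64(2),
            MaxSize:              aws.Int64(20),
            VPCZoneIdentifier:    aws.String("subnet-aaaa,subnet-bbbb"),
            MixedInstancesPolicy: &autoscaling.MixedInstancesPolicy{
                LaunchTemplate: &autoscaling.LaunchTemplate{
                    LaunchTemplateSpecification: &autoscaling.LaunchTemplateSpecification{
                        LaunchTemplateName: aws.String("web-template"),
                        Version:            aws.String("$Latest"),
                    },
                    // Several interchangeable types, so one spot pool drying up
                    // doesn't take the whole group down.
                    Overrides: []*autoscaling.LaunchTemplateOverrides{
                        {InstanceType: aws.String("m5.large")},
                        {InstanceType: aws.String("m5a.large")},
                        {InstanceType: aws.String("m4.large")},
                    },
                },
                InstancesDistribution: &autoscaling.InstancesDistribution{
                    // Keep a floor of On-Demand capacity as insurance.
                    OnDemandBaseCapacity:                aws.Int64(2),
                    OnDemandPercentageAboveBaseCapacity: aws.Int64(25),
                    SpotAllocationStrategy:              aws.String("lowest-price"),
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }
    }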


> runs out of every single instance type you have requested, terminates your Spot instances, and you can't launch any more On-Demand ones.

This is more common than you think.

Internally cloud providers schedule instance types on real hardware, and running out of an instance type likely means they have run out of capacity, and only a tiny amount exists in fragmentation. To access that tiny remainder, they'll terminate spot instances and migrate live users (which they have to do very slowly) to make space for a few more of whichever instance types make most business sense (which varies depending on the mix of real hardware and existing instance types).

It takes someone like AWS a good few weeks, sometimes months, to provision new actual hardware.

It isn't uncommon for big users to be told they'll be given a service credit if they'll move away from a capacity constrained zone.


Is there a similar concept to airline upgrading? Better that than denying a paying customer boarding. Surely there must be spare capacity somewhere in the datacentre, with slightly better specs.


Yes - they totally do that. If there is only space for a large instance, but you want a small one, they fit your small one in the free capacity, and there is now space for someone else to fit another small one next to it.

For business reasons they might decide not to do that though - your small instance might mean they have to say no to a big allocation later.

Instead they just delay your instance starting and hope other instances moving around opens up a more suitable location for it.

There's an entire paper on the topic: https://dl.acm.org/doi/10.1145/2797211
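
To see why capacity can be "free" yet unusable, here's a toy first-fit placement sketch in Go (nothing like the real schedulers, just to illustrate how a large request can fail while plenty of total capacity remains):

    package placement

    // host models a physical machine with a fixed number of capacity slots.
    type host struct {
        capacity int
        used     int
    }

    func (h *host) free() int { return h.capacity - h.used }

    // place returns the index of the host the request landed on, or -1 if
    // no single host has enough free capacity, even though the sum of free
    // slots across all hosts may exceed the request (fragmentation).
    func place(hosts []*host, size int) int {
        for i, h := range hosts {
            if h.free() >= size {
                h.used += size
                return i
            }
        }
        return -1
    }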


The AutoSpotting author here; it always feels great to see my little pet project mentioned by happy users. Thank you for making my day!

To set matters straight, AutoSpotting pre-dates the new AutoScaling mixed instance types functionality by a couple of years and (intentionally) doesn't make use of it under the hood, for reliability reasons related to failover to on-demand. To avoid any race conditions, AutoSpotting currently ignores any groups configured with a mixed instances policy.

In the default configuration AutoSpotting implements a lazy/best-effort on-demand->spot replacement logic with built-in failover to on demand and to different spot instance types. To keep costs down, it is only triggered when failing to launch new spot instances (for whatever reason, including insufficient spot capacity).

What we do is iterate in increasing order of spot price until we successfully launch a compatible spot instance (roughly at least as large as the original from a CPU/memory/disk perspective, but cheaper per hour). If all compatible spot instances fail to launch, the group keeps running the existing on-demand capacity. We retry this every few minutes until we eventually succeed.
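
A rough sketch of that loop (not the actual AutoSpotting code; launchSpot stands in for the real EC2 request):

    package autospotting

    import (
        "errors"
        "sort"
    )

    // candidate is a spot instance type at least as large as the original
    // on-demand instance and cheaper per hour.
    type candidate struct {
        InstanceType string
        SpotPrice    float64
    }

    // launchSpot is a stand-in for the real EC2 call; it returns an
    // instance ID or an error (e.g. insufficient spot capacity).
    func launchSpot(instanceType string) (string, error) { return "", errors.New("todo") }

    // replaceWithSpot walks the candidates from cheapest to most expensive
    // and stops at the first one that actually launches. If nothing
    // launches, the group simply keeps its existing on-demand capacity
    // and the whole thing is retried a few minutes later.
    func replaceWithSpot(candidates []candidate) (string, error) {
        sort.Slice(candidates, func(i, j int) bool {
            return candidates[i].SpotPrice < candidates[j].SpotPrice
        })
        for _, c := range candidates {
            if id, err := launchSpot(c.InstanceType); err == nil {
                return id, nil // attach to the group, then drop the on-demand node
            }
            // capacity/price error: fall through to the next cheapest type
        }
        return "", errors.New("no compatible spot capacity right now")
    }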

There's currently no failover to multiple on-demand instance types (this is a known limitation), but this could be implemented with reasonable effort.

We're also working on significantly improving the current replacement logic to address a bunch of edge cases, with a significant architectural change (making use of instance launch events). I'm very excited about this improvement and looking forward to having it land, hopefully within a few weeks.

At the end of the day, unlike most tools in this space (including the AWS offerings), AutoSpotting is an open source project. So if anyone is interested in helping implement any of these improvements (or maybe others), while at the same time getting experience with Go and the AWS APIs, which are nowadays very valuable skills, you're more than welcome to join the fun.


Thanks for the shout-out, really appreciate it.

If you don't mind I'd like to get some feedback/feature ideas from users like you.

Please get in touch with me on https://gitter.im/cristim


ASG, per the blog-post you linked to, now supports starting both on-demand and spot instances, so what's the use of AutoSpotting?


The author of AutoSpotting here; this question comes up often and I'm happy to clarify.

The mixed capacity ASGs currently run at decreased capacity when failing to launch spot instances. AutoSpotting will automatically fail over to on-demand capacity when spot capacity is lost, and back to spot once it can launch it again.

Another useful feature is that it most often requires no configuration of older on-demand ASGs, because it can just take them over and replace their nodes with compatible spot instances.

This makes it very popular for people who run legacy infrastructure that can't be tampered with for whatever reasons, as well as for large-scale rollouts on hundreds of accounts. Someone recently deployed it on infrastructure still running on EC2 Classic started in 2008 or so that wasn't touched for years.

Another large company deployed it with the default opt-in configuration across hundreds of AWS accounts owned by as many teams, many with legacy instances that had been running for years. It would normally have taken them years to coordinate a mass migration, but it took them just a couple of months to migrate to spot. Teams could opt in and try it out on their application, or opt out known sensitive workloads. A few weeks later they centrally switched the configuration to opt-out mode, converting most of their infrastructure to spot literally overnight and saving lots of money with very little configuration effort and very little disruption to the teams.

If you want to learn more about it have a look at our FAQ at https://autospotting.org/faq/index.html

It's also the most prominent open source tool in this space. Most of the competition consists of closed-source, commercial (and often quite expensive) tools, so if you're currently having any issues or missing functionality, anyone skilled enough can submit a fix or improvement as a pull request.


Where can I read about some of these more impressive use cases you describe?


Have a look at https://github.com/AutoSpotting/AutoSpotting or the FAQ section on https://autospotting.org

If those don't answer your questions feel free to reach out to me and I'll do my best to explain further.


It replaces on-demand instances in place. If there are no spot instances available, it will leave them running. If a spot instance gets killed, it will start again as on-demand.

It sounds a bit hinky, but it tends to leave you with the number of instances you want running without having to determine what percentage of the ASG should be on demand or spot — especially with the possibility of not being able to start new spot instances if they’ve been terminated.

