Hacker News new | past | comments | ask | show | jobs | submit | Aoreias's comments login

VPC ELBs and ALBs don't support IPv6 though. Presumably this is the first step toward adding that support, although it's still not available even in the new IPv6 VPCs.


They just announced during the ELB session that ALBs will be supporting full IPv6 also.


They stayed up all day; they just blocked rendering until some Twitter assets loaded, which made pages take 20-30 seconds to appear until those requests timed out.


Most recent vulnerabilities in core utilities really don't have a lot to do with memory safety though - Shellshock and the ImageMagick bugs were input-sanitization failures, and other common ones are injection vulnerabilities or authentication weaknesses. Heartbleed excluded, most major vulnerabilities these days aren't related to memory safety.


Sure, but Rust isn't just about dealing with memory safety. The language also lends itself well to solving other common mistakes by virtue of its design and by being built on modern principles. Idiomatic C promotes throwing around pointers/arrays and hoping that the next coder who comes along to consume a struct reads the docs/header and understands how the data in that struct is supposed to be used. Idiomatic Rust uses its type system to strictly enforce how a struct and its data can/should be used. It's a world of difference and results in drastically fewer bugs. Not to mention the rest of the Rust ecosystem works in harmony with the language to further reduce bugs. Testing as a first-class citizen of the language and its tooling is one of the big ones.

That isn't to say that you can't do something similar in C, but it is an order of magnitude more challenging to design a "module" in C that is explicit and robust compared to the effort to do the same in Rust. I've coded my fair share of cryptographic systems in both C and Rust. Bulletproof C is just _exhausting_ to code and work with. The same kind of code in Rust is, dare I say, fun to write. It's just a joy to use Rust's type system to enforce rules and invariants, and then codify those rules in the documentation comments above the structs/functions, and then have "cargo test" actually run the code in that documentation automatically to check it for validity.

And yes, as you point out, some of the big bugs lately have been logic bugs resulting not necessarily from poor code but from poor design. Thing is, the less mental capacity a language requires from a coder, the more mental capacity that coder has free for thinking about the application logic. E.g., in C when you get a string you have to think about how to handle the UTF-8 encoding, what to do about path names that somehow ended up with a non-UTF-8 character, whether the string is NULL-terminated or Pascal-style, and whether memmove takes (src, dst) or (dst, src). In Rust, well, that's all handled, so you think about what the string actually means and, hopefully, you'll realize that hey, you should probably sanitize that string so it can't be used to gain shell access from an SVG file.


Well, there are for example all those libbfd-related issues that made running strings(!) on untrusted files unsafe.


Heartbleed is not a classic memory-safety bug where the program reads beyond allocated memory. It is more an improper reuse of a previously allocated buffer, and could exist in safe Rust just as well.


You're right, there isn't a classic simple buffer overrun that Rust would trivially catch, but you're missing two things:

1) The problem was really sending back uninitialised memory. In Rust you can't have uninitialised memory. The oversized allocated buffer would have to be initialised with data passed in (possibly zeroes).

2) You'd never write the Rust code like that anyway. The abstractions available mean that you aren't separately tracking the contents of some data and the length you pass to allocators.


This isn't necessarily as nefarious as it seems - Blue Coat is going to have to comply with Symantec's Certification Practice Statement (CPS), which prohibits the issuance of MitM certificates. In all likelihood it's to allow Blue Coat to roll out a service that creates certificates for clients of its security services. Any deviation from that CPS would necessitate revoking this intermediate certificate.

That said, I'm quite curious whether Google is going to require that Blue Coat submit all issued certificates to Certificate Transparency logs like the rest of Symantec's certificates[0].

[0] https://security.googleblog.com/2015/10/sustaining-digital-c...


I'm hesitant to relax here. Blue Coat's got a nasty history of making money off of the regimes that'd do this without hesitation:

https://www.newsrecord.co/us-based-internet-surveillance-tec...


Sure, but if they started issuing MitM certs ANYWHERE then Symantec would have no choice but to revoke the CA's certificate. It doesn't matter whether the CA was serving a corrupt regime or a well-intentioned business legitimately MitM'ing its employees' traffic.

If Symantec didn't revoke the certificate then it would almost certainly lead to their root certificate being untrusted by major browsers and destroy their entire certificate business.


Between the time of issuance and revocation, a lot of people can get arrested, monitored, or blackmailed.


That's how the beautiful rule of law works: damage done, people killed, complaint rejected, "overruled."


This is a baby step in the direction of legitimate MITMing of SSL, which is something many of Blue Coat's customers would love. SSL's entire security profile is built around trust in a huge number of CAs, and if Blue Coat and others can persuade one to allow this in any form then SSL is fundamentally and permanently broken (without pinning or out-of-band checks) for pretty much all users except highly technical ones.


> In all likelihood it's to allow Blue Coat to roll out a service that allows it to create certificates for clients of its security services. Any deviation from that CPS would necessitate revoking this intermediate certificate.

So why doesn't Blue Coat establish their own CA for this purpose?


Are you saying they should become a new root CA? That is a huge amount of work, and would require them to convince all browsers and OSes to make them a root CA, which many would be reluctant to do.


If you're building an interception service (which this could be), then yes, you build your own root CA, which you install on the devices whose traffic you want to intercept.

Legitimate uses of this would be things like government or military departments intercepting traffic from their own network.

As explained elsewhere in this thread, they have a history of working with regimes that want to intercept the traffic of the general public.


No, it would only require them to add their cert on the intended machines under their control. The only reasons they would need the trust of all browsers and OSes are subterfuge and laziness. They should not be globally trusted to issue certificates.


"This isn't necessarily as nefarious as it seems" is nowhere near an acceptable level of trust for a certificate authority.


You're making a lot of assumptions about the terms under which this certificate was issued, which you don't know. Without seeing the contract between Symantec and Blue Coat, you can't claim that they're bound by Symantec's CPS.

And even if they are bound by Symantec's CPS, they can do a lot of damage before the CPS can be enforced.


The certificate has this in it, which would seem to confirm what you're saying: "Explicit Text: In the event that the BlueCoat CPS and Symantec CPS conflict, the Symantec CPS governs."


It is helpful to store configuration with code, but you don't have to include secret values in your code. It's much better to use a purpose-built service like credstash[0] to store secret values, while keeping only the secret's name and (possibly) its version in the repo with your code.

[0] https://github.com/fugue/credstash


Do you have any examples of poor tenant isolation in AWS, GCE, or Azure?

Cloud complexity is also lower because you don't have to worry about power, cooling, upstream connectivity, capacity budgeting, etc. If 99.9-99.95% availability is fine for your application then you probably don't have to worry about your provider either.


On AWS, Netflix consumes enough resources that if its usage spikes 40-50%, everyone is screwed. The software required to run a cloud like AWS is orders of magnitude more complex than what the average project would need, and that complexity results in major screwups. Both major AWS outages were due to control-plane issues; the second was the result of a massive Netflix migration that triggered throttles for everyone in the affected AZs. Those throttles were put in place in response to the first major outage, which lasted for many hours.


> Do you have any examples of poor tenant isolation in AWS, GCE, or Azure?

I hate to feed a troll, but ...

Noisy neighbors are a problem all the way from sharing a server between VMs to top-of-rack switches.

And if you try hard enough, you can always escape your VM and "read somebody else's mail."

