RFC 7457 [1] and Wikipedia [2] offer an overview of many of the attacks on older versions of TLS. Some of those attacks have been mitigated to varying extents in implementations of the affected versions. TLSv1.3 is meant to completely resolve as many of these issues as possible.
When using older protocol versions, it can be complicated to validate that the TLS implementation you are using has the necessary mitigations in place. It can be complicated to correctly configure TLS to minimize the effects of known attacks. Doing that properly requires a fair amount of research, threat modelling, and risk assessment both for yourself and on behalf of anyone accessing your website or service.
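For a sense of what that configuration work looks like, here is a sketch using Go's crypto/tls. The minimum version and cipher suite list are illustrative assumptions on my part, not a vetted recommendation:

    package main

    import "crypto/tls"

    func newTLSConfig() *tls.Config {
        return &tls.Config{
            // Refuse anything older than TLSv1.2.
            MinVersion: tls.VersionTLS12,
            // Restrict TLSv1.2 to AEAD suites with forward secrecy.
            // (TLSv1.3 suites are not configurable in crypto/tls.)
            CipherSuites: []uint16{
                tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
                tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
                tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
            },
        }
    }

    func main() {
        cfg := newTLSConfig()
        _ = cfg // plug into e.g. http.Server{TLSConfig: cfg}
    }

And that only covers version and suite selection; deciding whether those choices are right for your clients is the research part.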
IME, TLSv1.2 is still a big chunk of legitimate web traffic. Its share has been steadily dropping since TLSv1.3 was standardized, and TLSv1.3 now makes up the majority by a wide margin from what I can see. I wouldn't be surprised to see some websites and services still needing to support TLSv1.2 for a couple more years at least, depending on their target audience.
Now you have two different implementations of the same fundamental idea, but they each require different handling. In Go, where many functions simply return an error in addition to whatever values they produce, you would now have three different approaches to error handling to deal with, as opposed to just the one the language specifies as best practice.
type Result struct {
    Err  error
    Data SomeType
}

func (r *Result) HasError() bool {
    return r.Err != nil
}

func bar() *Result {
    // ...
    return &Result{ /* ... */ }
}

// ...
result := bar()
if result.HasError() {
    // handle result.Err
}
// handle result
I'm not really sure I see the benefit to the latter. In a language with special operators and built-in types it may be easier (e.g. foo()?.bar()?.commit()), but without these language features I don't see how the Result<T> approach is better.
Go can't really express the Result<T> approach. In Go, it's up to you to remember to check result.HasError(), just like it's up to you to check if err != nil. If you forget that check, you'll access Data anyway and end up with a zero value or a nil pointer panic at runtime.
The Result<T> approach prevents you from accessing Data if you haven't handled the error, and it does so with a compile-time error.
Even with Go's draconian unused variable rules, my colleagues and I have been burned more than once by forgotten error checks.
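To make that failure mode concrete, here's a minimal sketch (bar is the function from the snippet above; process is a made-up consumer):

    result := bar()
    // No HasError() check: this compiles without complaint, but on failure
    // Data is just its zero value (or nil, for pointer types), so the bug
    // only surfaces at runtime.
    process(result.Data)

Nothing in the type system distinguishes this from correct code.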
> without these language features I don't see how the Result<T> approach is better.
That's the point! I want language features!
I don't want to wait 6 years for the designers to bake some new operator into the language. I want the language to be expressive enough that if '?.' is missing I can just throw it in as a one-liner.
A language with sum types will express Result as Success XOR Failure. And then to access the Success, the compiler will force you to go through a switch statement that handles each case.
The alternative is not the Result type you defined, but something along the lines of what languages like Rust or Haskell define: https://doc.rust-lang.org/std/result/
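Go has no sum types, but for illustration, here's a rough emulation of that "forced switch" using generics: a Result whose only accessor demands both branches. This is a sketch of the general pattern, not any established library's API:

    package main

    import (
        "errors"
        "fmt"
    )

    type Result[T any] struct {
        val T
        err error
    }

    func Ok[T any](v T) Result[T]       { return Result[T]{val: v} }
    func Fail[T any](e error) Result[T] { return Result[T]{err: e} }

    // Match is the only way to reach the value: callers must supply both
    // branches, so there is no path to val that skips the error case.
    func (r Result[T]) Match(ok func(T), fail func(error)) {
        if r.err != nil {
            fail(r.err)
            return
        }
        ok(r.val)
    }

    func main() {
        Fail[int](errors.New("boom")).Match(
            func(v int) { fmt.Println("got", v) },
            func(e error) { fmt.Println("failed:", e) },
        )
    }

It's still weaker than a real sum type (nothing stops a caller from passing an empty fail branch), which is rather the point of the comparison.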
It's interesting that you say this, because I've had the opposite experience. I wouldn't say it's strictly inferior, because there are definitely upsides. If it were strictly inferior, why would a modern language be designed that way? There must be some debate, right?
I love multiple returns/errors. I find that I never mistakenly forget to handle an error when the program won't compile because I forgot about the second return value.
I don't use Go at work though; I use a language with lots of thrown exceptions, and I regularly miss handling exceptions that are hidden in dependencies. This isn't the end of the world in our case, but I prefer to be more explicit.
> If it was strictly inferior, why would a modern language be designed that way
golang is not a modern language (how old it is is irrelevant), and the people who designed it did not have a proper language design background (their other accomplishments are a different matter).
Having worked on larger golang code bases, I've seen several times where errors are either ignored or overwritten accidentally. It's just bad language design.
I cannot think of a language where errors cannot be ignored. In go it is easy to ignore them, but they stick out and can be marked by static analysis. The problems you describe are not solved at the language level, but by giving programmers enough time and incentives to write durable code.
Compare to a language with exception handling, where an exception is thrown and bubbles up the stack until it either hits a handler or crashes the program with a stack trace.
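Go's closest analogue to that model is panic/recover; a minimal sketch of the bubbling behaviour (mustParse is made up for illustration):

    package main

    import (
        "fmt"
        "log"
        "strconv"
    )

    func mustParse(s string) int {
        n, err := strconv.Atoi(s)
        if err != nil {
            panic(err) // the failure starts bubbling up the stack
        }
        return n
    }

    func main() {
        defer func() {
            // A handler somewhere up the stack stops the unwinding...
            if r := recover(); r != nil {
                log.Println("recovered:", r)
            }
        }()
        // ...otherwise the program dies with a stack trace.
        fmt.Println(mustParse("oops"))
    }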
And I was referring to accidental ignoring. I've seen variations of the following several times now:
res, err := foo("foo")
if err != nil { ... }
if res != nil { ... }

res, err = foo("bar")
// The second err is never checked; a failure here is silently dropped.
if res != nil { ... }
> fmt.Println() is blacklisted for obvious reasons
That's the issue with the language: there are so many special cases for convenience's sake, not correctness's sake. It's obvious why it's excluded, but that doesn't make it correct. Do you want critical software written in such a language?
Furthermore, does that linter work with something like gorm (https://gorm.io/) and its way of handling errors? It's extremely easy to mishandle errors with it, and it's a widely used library.
In rust, errors are difficult to ignore (you need to either allow compiler warnings, which AFAICT nobody sane does, or write something like `let _ = my_fallible_function();` which makes the intent to ignore the error explicit).
Perhaps more fundamental: it’s impossible to accidentally use an uninitialized “success” return value when the function actually failed, which is easy to do in C, C++, Go, etc.
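A minimal Go sketch of that last point (Go zero-initializes rather than leaving values truly uninitialized, but the resulting silent logic bug is the same; sum is made up for illustration):

    package main

    import (
        "fmt"
        "strconv"
    )

    func sum(inputs []string) int {
        total := 0
        for _, s := range inputs {
            n, err := strconv.Atoi(s)
            _ = err // error discarded, deliberately or by accident
            total += n // on failure n is 0, so bad inputs vanish silently
        }
        return total
    }

    func main() {
        fmt.Println(sum([]string{"1", "oops", "2"})) // prints 3, no hint of the bad input
    }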
> Does any language save you from explicitly screwing up error handling?
It's about the default error handling method being sane. In exception based languages, an unhandled error bubbles up until it reaches a handler, or it crashes the program with a stacktrace.
Compare to what golang does: it's somewhat easy to accidentally ignore or overwrite errors. This leads to silent corruption of state, which is much worse than crashing the program outright.
That's one point in this discussion. The language allows error handling that way. Compared to a language with proper sum types or exceptions, where one would have to actively work against the language to end up with that mess.
> That's one point in this discussion. The language allows error handling that way. Compared to a language with proper sum types or exceptions, where one would have to actively work against the language to end up with that mess.
I've seen a bunch of code that does the equivalent of the Java I posted above. Mostly when sending errors across the network.
Because it has try/catch. Without that (which would be similar to not checking err in Go), it explodes or throws to a layer up that may not expect it.
I would say it is a very ergonomic way of doing this. It allows for writing in a more exploratory way until you know what your error handling story is. Then, even if you choose to propagate the error later, you just add it to your signature. It's also very clear and easy to grok. Definitely not strictly inferior.
It's a lot cleaner to pass a Result<T> through a channel or a slice than to create two channels or slices and confirm everyone's following the same convention when using them.
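A minimal sketch of that pattern, assuming a Result struct like the one earlier in the thread (work and its inputs are invented for illustration):

    package main

    import (
        "errors"
        "fmt"
    )

    type Result struct {
        Err  error
        Data string
    }

    func work(n int) Result {
        if n%2 == 0 {
            return Result{Err: errors.New("even input")}
        }
        return Result{Data: fmt.Sprintf("processed %d", n)}
    }

    func main() {
        results := make(chan Result)
        go func() {
            defer close(results)
            for n := 0; n < 4; n++ {
                results <- work(n)
            }
        }()
        // One channel carries both outcomes, in order; there's no parallel
        // error channel and no convention to keep the two in sync.
        for r := range results {
            if r.Err != nil {
                fmt.Println("error:", r.Err)
                continue
            }
            fmt.Println(r.Data)
        }
    }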
I concede that there are probably scenarios where this design makes sense within that context. I typically find that either I care about a single error and terminating the computation, or I don't care about errors at all. In the former case, the primitives in the sync package (or just an error channel which we send to once and close) are adequate. The latter case presents no issues, of course.
At $work we definitely have examples where we care about preserving errors, and if that tool were implemented in Go a solution like a Result struct containing an error instance and a data type instance could make sense.
That's still not the problem considered here. You're not asking "does anyone have the key I'm seeing here", you're asking "does this person next to me have the key I'm seeing here". No birthday paradoxes of any kind involved.
Forgive me, as I haven't used Signal, but I don't see how whether they are sitting next to you or not changes the problem.
If I can generate a key that hashes to the same value as your key, I can convince anyone I am you. If I can generate a second collision for a third party's key, I can convince you you are talking to that third party, as well. Generating hash collisions is, as I understand it, pretty well modelled with the birthday paradox (and variations like the one I linked). Physical proximity seems entirely unrelated.
Right, sorry, I misunderstood. A (second) preimage attack (that's the technical term for this) could indeed be modeled as a birthday problem with a fixed day ("someone with the same birthday as me"). This is much harder than finding a normal collision (two objects with the same hash, two people with the same birthday), though.
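Back-of-the-envelope, for an ideal n-bit hash and k attempts:

\[ P(\text{some pair collides}) \approx 1 - e^{-k^2/2^{n+1}}, \qquad P(\text{match one fixed hash}) \approx \frac{k}{2^n} \]

so a generic collision is expected after roughly \(2^{n/2}\) evaluations, while a (second) preimage takes on the order of \(2^n\).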
If we're talking about the actual hash Signal uses for this value, then sure, but the number of digits displayed isn't even the right thing to care about, since they're using SHA1 for the hash AFAICT: https://github.com/WhisperSystems/Signal-Android/blob/3.0.0/...
SHA1? SHA1‽ I'd always thought that OWS had incredibly good crypto, so why are they using SHA1? If it's to support relatively short hashes … I just can't even.
There's simply no excuse to choose SHA1 in 2016. It's not completely broken, and it's probably good enough here, but why not just truncate SHA2?
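For what it's worth, truncating SHA-256 to a short fingerprint is a one-liner in most languages. A Go sketch (the input and the 80-bit truncation length are arbitrary choices here):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
    )

    func main() {
        sum := sha256.Sum256([]byte("identity key bytes"))
        // Keep the first 10 bytes (80 bits) for a short, displayable fingerprint.
        fmt.Println(hex.EncodeToString(sum[:10]))
    }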
SHA-1 is fine in this context. SHA-1 isn't as collision-resistant as it was once thought to be, but that's not a property that you care about for this use-case.
The same principle applies to checksums that are sometimes published for binaries - many still use MD5 or SHA-1 - and that's fine too, as (second) preimage resistance is what counts here, rather than collision-resistance.
A simple-ish way of subsidizing some of that effort is to just make a subreddit for arxiv submissions and link to the comments section from arxiv-sanity for a given paper. You still don't tie into other communities, but if someone has something to say about a particular paper it provides a straightforward mechanism (until, what, the 6-month mark, at which point the submission is archived and can't be voted on or commented on any further). You only need a couple of moderators and some strict rules (an AutoModerator rule to only allow submissions from the arxiv-sanity user, etc.).
Yeah, I read that comment. I'm hoping they can give us at least some form of tooling around this, though. The inability to do even basic things as a mod seriously sucks.
It's referring to a kind of display driver in which frames are rendered as bitmaps into memory, and that bitmap is then pushed to the screen. Fonts for such drivers are, AFAIK, always bitmap fonts.
>What problem is this trying to solve? Having a single webpage with full control of the device, but limiting how many resources the ads on it can take?
Restricting ad resource usage is the only non-niche answer I've imagined. You could make arguments that games could prioritize input/networking over refresh rate using something of this sort, but it seems like there should be a better mechanism for those kinds of applications -- especially since this proposal would require you to separate your JS into multiple subresources to take advantage of it AFAICT.
Assuming their motivation was in fact to restrict resource usage by ads, I'm not sure this is the right way to go about it. Perhaps if your ad network can't serve resource efficient ads you should change ad networks (or pressure your current ad network to improve).
>Banning a single service such as WhatsApp is not a solution to this problem.
Generalizing this argument a bit, banning encryption is also not a solution to this problem. The cat is, as they say, out of the bag, and unless we're going to burn every cryptography book and remove every website documenting cryptographic methods or hosting cryptography code, there's no putting it back [1].
1: Presuming the development and distribution of effective post-quantum cryptography cannot be prevented, which, considering the current state of PQC, seems unlikely.
I think this is worth taking a step further and asking for a definition of cryptography... what is cryptography?
Obviously here we are speaking in a mathematical sense, but encryption of information predates the internet. Hell, it predates electricity. Where do you draw the line? Can I not encrypt my conversation with a friend by referencing shared unique experiences?
I was talking to a young woman a bunch of us helped get into drug rehab recently, and she said when she first moved here she used dating apps to find people who supply drugs. All of a sudden, her best friend's name is Molly and going out line dancing means something completely different on dating sites. I forget what she was calling the different drugs, but like a secret crypto key, they shared a common language.
[1] https://tools.ietf.org/html/rfc7457
[2] https://en.wikipedia.org/wiki/Transport_Layer_Security#Attac...