
RFC7457[1] and Wikipedia[2] offer an overview of many of the attacks on older versions of TLS. Some of those attacks have been mitigated to varying extents in implementations of the affected versions. TLSv1.3 is meant to completely resolve as many of these issues as possible.

When using older protocol versions, it can be complicated to validate that the TLS implementation you are using has the necessary mitigations in place. It can be complicated to correctly configure TLS to minimize the effects of known attacks. Doing that properly requires a fair amount of research, threat modelling, and risk assessment both for yourself and on behalf of anyone accessing your website or service.
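
For example, in Go this mostly comes down to pinning the protocol floor and cipher suites in crypto/tls; a minimal sketch (the version floor and suite list here are illustrative, not a recommendation):

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
    )

    func main() {
        srv := &http.Server{
            Addr: ":443",
            TLSConfig: &tls.Config{
                MinVersion: tls.VersionTLS12, // refuse TLSv1.0/1.1 outright
                CipherSuites: []uint16{
                    // TLSv1.2 AEAD suites only; TLSv1.3 suites aren't configurable here
                    tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
                    tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
                    tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
                    tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
                },
            },
        }
        log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
    }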

IME, TLSv1.2 is still a big chunk of legitimate web traffic. Its share has been steadily dropping since TLSv1.3 was standardized, and TLSv1.3 is the majority by a wide margin from what I can see. I wouldn't be surprised to see some websites and services still needing to support TLSv1.2 for a couple more years at least, depending on their target audience.

[1] https://tools.ietf.org/html/rfc7457

[2] https://en.wikipedia.org/wiki/Transport_Layer_Security#Attac...


Most of those attacks require SSLv2 or cooperation from the client.


Not GP, but I've sometimes found that libraries implementing similar concepts in different ways can cause issues.

E.g.

    // package libraryA
    type Result struct {
        Err  error
        Data SomeDataType
    }

    // package libraryB
    type Result struct {
        err  string
        Data SomeDataType
    }

    func (r Result) Error() string {
        return r.err
    }
Now you have two different implementations of the same fundamental idea, but they each require different handling. In Go, where most functions simply return an error value alongside their result, you'd now be juggling three different approaches to error handling instead of just the one the language prescribes as best practice.


This is what interfaces are for.

Let your caller bring their own error type and instantiate your library code over that.
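
One hedged reading of that, sketched with Go generics (all names here are illustrative, not from the comment above):

    package main

    import "fmt"

    // The library exposes a result parametrized over the caller's error type,
    // instead of hard-coding its own concrete Result struct.
    type Result[E error, T any] struct {
        Err  E
        Data T
    }

    func Wrap[E error, T any](data T, err E) Result[E, T] {
        return Result[E, T]{Err: err, Data: data}
    }

    // The caller brings its own error type...
    type myErr struct{ msg string }

    func (e *myErr) Error() string { return e.msg }

    func main() {
        // ...and instantiates the library code over it.
        r := Wrap[*myErr, string]("hello", nil)
        fmt.Println(r.Data, r.Err)
    }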


I'm not sure why you'd use a class like this in Go when you have multiple returns and an error interface that already handles this exact use case.


Because using multiple return values to handle errors is a strictly inferior and error-prone way of dealing with the matter.


    func foo() (*SomeType, error) {
        ...
        return nil, someErr
    }

    ...
    result, err := foo()
    if err != nil {
        // handle err
    }
    // handle result
vs

    type Result struct {
        Err error
        Data SomeType
    }

    func (r *Result) HasError() bool {
        return r.Err != nil
    }

    func bar() *Result {
        ...
        return &Result { ... }
    }

    ...
    result := bar()
    if result.HasError() {
       // handle result.Err
    }
    // handle result

I'm not really sure I see the benefit to the latter. In a language with special operators and built-in types it may be easier (e.g. foo()?.bar()?.commit()), but without these language features I don't see how the Result<T> approach is better.


Go can't really express the Result<T> approach. In Go, it's up to you to remember to check result.HasError(), just like it's up to you to check if err != nil. If you forget that check, you'll try to access the Data and get a nil pointer panic.

The Result<T> approach prevents you from accessing Data if you haven't handled the error, and it does so with a compile-time error.

Even with Go's draconian unused variable rules, I and my colleagues have been burned more than once by forgotten error checks.


there are linters that will help you with that.

https://github.com/kisielk/errcheck

https://golangci-lint.run/usage/linters/ has a solid set of options.


I just wish the linter were integrated into the compiler, and that code that didn't check errors simply wouldn't compile.


> without these language features I don't see how the Result<T> approach is better.

That's the point! I want language features!

I don't want to wait 6 years for the designers to bake some new operator into the language. I want rich enough expression so that if '?.' is missing I just throw it in as a one-liner.

Generics is one such source of richness.


A language with sum types will express Result as Success XOR Failure. Then, to access the Success, the compiler will force you to go through a switch statement that handles each case.


The alternative is not the Result type you defined, but something along the lines of what languages like Rust or Haskell define: https://doc.rust-lang.org/std/result/


It's interesting that you say this, because I've had the opposite experience. I wouldn't say it's strictly inferior, because there are definitely upsides. If it was strictly inferior, why would a modern language be designed that way? There must be some debate, right?

I love multiple returns/errors. I find that I never mistakenly forget to handle an error when the program won't compile because I forgot about the second return value.

I don't use Go at work, though; I use a language with lots of thrown exceptions, and I regularly miss handling exceptions that are hidden in dependencies. This isn't the end of the world in our case, but I prefer to be more explicit.


> If it was strictly inferior, why would a modern language be designed that way

golang is not a modern language (how old it is is irrelevant), and the people who designed it did not have a proper language design background (their other accomplishments are a different matter).

Having worked on larger golang code bases, I've seen several cases where errors are either ignored or accidentally overwritten. It's just bad language design.


I cannot think of a language where errors cannot be ignored. In go it is easy to ignore them, but they stick out and can be marked by static analysis. The problems you describe are not solved at the language level, but by giving programmers enough time and incentives to write durable code.


The following line in golang ignores the error:

    fmt.Println("foo")
Compare that to a language with exception handling, where an exception gets thrown and bubbles up the stack until it either hits a handler or crashes the program with a stack trace.

And I was referring to accidental ignoring. I've seen variations of the following several times now:

    res, err := foo("foo")
    if err != nil { ... }
    if res != nil { ... }
    res, err = foo("bar")
    if res != nil { ... }
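    // note: err from foo("bar") above is assigned but never checked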


Usage of linters fixes this:

> The following line in golang ignores the error:

    fmt.Println("foo")
fmt.Println() is blacklisted for obvious reasons, but this:

    a := func() error {
        return nil 
    }
    a()
results in:

    go-lint: Error return value of 'a' is not checked (errcheck)
> And I was referring to accidental ignoring. I've seen variations of the following several times now:

    res, err := foo("foo")
    if err != nil { ... }
    if res != nil { ... }
    res, err = foo("bar")
    if res != nil { ... }
results in:

    go-lint: ineffectual assignment to 'err' (ineffassign)


> fmt.Println() is blacklisted for obvious reasons

That's the issue with the language: there are so many special cases for convenience's sake, not for correctness' sake. It's obvious why it's excluded, but that doesn't make it correct. Do you want critical software written in such a language?

Furthermore, does that linter work with something like gorm (https://gorm.io/) and its way of handling errors? It's extremely easy to mis-handle errors with it. It's even a widely used library.


Huh, I have seen enough catch blocks in Java code at work that are totally empty. How is that better than ignoring an error?


Because it's an explicit opt-in, as opposed to an accidental opt-out. And static checking can warn you about empty catch blocks.


In rust, errors are difficult to ignore (you need to either allow compiler warnings, which AFAICT nobody sane does, or write something like `let _ = my_fallible_function();` which makes the intent to ignore the error explicit).

Perhaps more fundamental: it’s impossible to accidentally use an uninitialized “success” return value when the function actually failed, which is easy to do in C, C++, Go, etc.


Or .unwrap(), which I see relatively often.


That’s not ignoring errors, it’s explicitly choosing what to do in case of one (crash).


Error handling is hard, period. Error handling in Go is no worse than in any other language, and in most ways it is better, being explicit and non-magical.

> people who designed it did not have a proper language design background

Irrelevant.

> It's just bad language design.

try { ... } catch(Exception ex) { ... }


Exceptions don't lead to silent but dangerous and hard-to-debug errors. The program fails if an exception is not handled.


> try { ... } catch(Exception ex) { ... }

The error here is explicitly handled and cannot be accidentally ignored, unlike golang, where it's quite easy for errors to be ignored by accident.


Nevertheless, this is how it is mostly done in Java. I haven't used Eclipse in eons, but the last time I did, it even generated this code.

If you care about this in Go, use errcheck.


Does errcheck work well with gorm (https://gorm.io/) and its way of returning errors? This is not an obscure library; it's quite widely used.


Does any language save you from explicitly screwing up error handling? Gorm is doing the Go equivalent of:

     class Query {
         class QueryResult {
             Exception error;
             Value result;
             QueryResult(Exception error, Value result) {
                 this.error = error;
                 this.result = result;
             }
         }
         public QueryResult query() {
             try {
                 return new QueryResult(null, doThing());
             } catch (Exception e) {
                 return new QueryResult(e, null);
             }
         }
     }
Gorm is going out of its way to make error handling suck.
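
For anyone who hasn't used it, a hedged sketch of the chained style gorm uses (gorm v2 API; the model and sqlite file here are illustrative), which is also why error-return linters have nothing to flag:

    package main

    import (
        "errors"
        "fmt"

        "gorm.io/driver/sqlite"
        "gorm.io/gorm"
    )

    type User struct {
        ID   uint
        Name string
    }

    func main() {
        db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
        if err != nil {
            panic(err)
        }
        if err := db.AutoMigrate(&User{}); err != nil {
            panic(err)
        }

        var user User
        result := db.First(&user) // returns *gorm.DB, not (User, error)
        if errors.Is(result.Error, gorm.ErrRecordNotFound) {
            fmt.Println("no users yet")
        }
        // Forgetting to look at result.Error is invisible to errcheck,
        // because no error value was returned and left unchecked.
    }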


> Does any language save you from explicitly screwing up error handling?

It's about the default error handling method being sane. In exception based languages, an unhandled error bubbles up until it reaches a handler, or it crashes the program with a stacktrace.

Compare that to what golang does: it's somewhat easy to accidentally ignore or overwrite errors. This leads to silent corruption of state, which is much worse than crashing the program outright.


> It's about the default error handling method being sane.

Gorm isn't using the default error handling.


That's one point in this discussion. The language allows error handling that way. Compared to a language with proper sum types or exceptions, where one would have to actively work against the language to end up with that mess.


> That's one point in this discussion. The language allows error handling that way. Compared to a language with proper sum types or exceptions, where one would have to actively work against the language to end up with that mess.

I've seen a bunch of code that does the equivalent of the Java I posted above. Mostly when sending errors across the network.


Because it has try/catch. Without that (which would be similar to not checking the err in go) it explodes or throws to a layer up that may not expect it.

Each language has its quirks.


> Without that (which would be similar to not checking the err in go) it explodes or throws to a layer up that may not expect it.

It's not similar to that at all. Without it, the exception bubbles up until it gets caught somewhere, or crashes the program with a useful stacktrace.

With golang, it just goes undetected, and the code keeps running with corrupt state, without anyone knowing any better.


I would say it is a very ergonomic way of doing this. It allows for writing in a more exploratory way until you know what your error handling story is. Then, even if you choose to propagate it later, you just add it to your signature. It is also very clear and easy to grok. Definitely not strictly inferior.


It's a lot cleaner to pass a Result<T> through a channel or a slice than to create two channels or slices and confirm everyone's following the same convention when using them.
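
A minimal sketch of what that looks like in Go (names here are illustrative):

    package main

    import (
        "errors"
        "fmt"
    )

    // fetchResult bundles a value and its error so both travel through
    // one channel, instead of a data channel plus an error channel that
    // callers must keep in sync by convention.
    type fetchResult struct {
        Data string
        Err  error
    }

    func fetch(i int) (string, error) {
        if i%2 != 0 {
            return "", errors.New("fetch failed")
        }
        return fmt.Sprintf("item-%d", i), nil
    }

    func main() {
        results := make(chan fetchResult)
        go func() {
            defer close(results)
            for i := 0; i < 4; i++ {
                data, err := fetch(i)
                results <- fetchResult{Data: data, Err: err}
            }
        }()

        for r := range results {
            if r.Err != nil {
                fmt.Println("error:", r.Err)
                continue
            }
            fmt.Println("got:", r.Data)
        }
    }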


I concede that there are probably scenarios where this design makes sense within that context. I typically find that either I care about a single error and terminating the computation, or I don't care about errors at all. In the former case, the primitives in the sync package (or just an error channel which we send to once and close) are adequate. The latter case presents no issues, of course.

At $work we definitely have examples where we care about preserving errors, and if that tool were implemented in Go a solution like a Result struct containing an error instance and a data type instance could make sense.



That's still not the problem considered here. You're not asking "does anyone have the key I'm seeing here", you're asking "does this person next to me have the key I'm seeing here". No birthday paradoxes of any kind involved.


Forgive me, as I haven't used Signal, but I don't see how whether they are sitting next to you or not changes the problem.

If I can generate a key that hashes to the same value as your key, I can convince anyone I am you. If I can generate a second collision for a third party's key, I can convince you you are talking to that third party, as well. Generating hash collisions is, as I understand it, pretty well modelled with the birthday paradox (and variations like the one I linked). Physical proximity seems entirely unrelated.


Right, sorry, I misunderstood. A preimage attack (that's the technical term for this) could indeed be modeled as a birthday problem with a fixed day ("someone with the same birthday as me"). This is much harder than finding a normal collision (two objects with the same hash, two people with the same birthday), though.


> reducing it to 98 bits?

Did you mean 198?

198 bits is entirely reasonable assuming a brute-force attack is the only option. Were it not, we'd be in a panic over AES-128 and AES-192. :)


No, hashes generally require twice as many bits in order to avoid birthday attacks — that's why one uses SHA-256 for 128-bit security.

98 bits is still plenty, of course, but it's not 128 bits.
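
Roughly, for an n-bit digest (generic bounds, nothing Signal-specific here):

    finding any collision (birthday bound): ~2^(n/2) work
    finding a preimage of a given value:    ~2^n work

That halving is why the displayed fingerprint length only buys about half as many bits against collisions, while its preimage resistance stays near the full length.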


If we're talking about the actual hash Signal uses for this value, then sure, but the number of digits displayed isn't even the right thing to care about, since they're using SHA1 for the hash AFAICT: https://github.com/WhisperSystems/Signal-Android/blob/3.0.0/...


SHA1? SHA1 SHA1‽ I'd always thought that OWS had incredibly good crypto — why are they using SHA1? If it's to support relatively short hashes … I just can't even.

There's simply no excuse to choose to use SHA1 in 2016. It's not completely broken, it's probably good enough, but why not just truncate SHA2?
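
For what it's worth, a minimal sketch of the "truncate SHA2" idea (the input and the truncation length here are illustrative, not Signal's actual scheme):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
    )

    func main() {
        // identityKey stands in for whatever public-key material gets fingerprinted
        identityKey := []byte("illustrative identity key bytes")

        digest := sha256.Sum256(identityKey)

        // keep the first 25 hex chars (~100 bits) as a short, displayable fingerprint
        fingerprint := hex.EncodeToString(digest[:])[:25]
        fmt.Println(fingerprint)
    }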


SHA-1 is fine in this context. SHA-1 isn't as collision-resistant as it was once thought to be, but that's not a property that you care about for this use-case.

The same principle applies to checksums that are sometimes published for binaries - many still use MD5 or SHA-1 - and that's fine too, as (second) preimage resistance is what counts here, rather than collision-resistance.


A simple-ish way of subsidizing some of that effort is to just make a subreddit for arxiv submissions and link to the comments section from arxiv-sanity for a given paper. You still don't tie into other communities, but if someone has something to say about a particular paper it provides a straightforward mechanism (until, what, 6 months later, when the submission is archived and can't be voted on or commented on any further). You only need a couple of moderators and some strict rules (an automoderator rule to only allow submissions from the arxiv-sanity user, etc.).


> opt-in home pages that are tailored at specific audiences. The standard one is pretty low quality.

How is that distinct from multireddits?

> more detection/policing of voting rings and vote fraud in general

One thing that'd help with this is better mod tooling for detecting when it's happening on a reddit you mod.


it'd probably have multireddits underneath, but multireddits don't currently play a part in onboarding

as far as detection tools for mods, https://www.reddit.com/r/ModSupport/comments/4tpla8/_/d5j7uo...


Yeah, I read that comment. I'm hoping they can give us at least some form of tooling around this, though. The inability to do even basic things as a mod seriously sucks.


It's referring to a kind of display driver in which frames are rendered as bitmaps into memory, and then that bitmap is drawn to the screen. Fonts for such drivers are, AFAIK, always bitmap fonts.

https://en.wikipedia.org/wiki/Linux_framebuffer
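
A minimal sketch of writing to it directly in Go (assumes a Linux console with fbdev at /dev/fb0 and a 32bpp mode; a real program would query the geometry with the FBIOGET_VSCREENINFO ioctl rather than hard-coding it):

    package main

    import (
        "log"
        "os"
    )

    func main() {
        fb, err := os.OpenFile("/dev/fb0", os.O_WRONLY, 0)
        if err != nil {
            log.Fatal(err)
        }
        defer fb.Close()

        // Write a screenful of 0xFF bytes; on a typical 32bpp mode this
        // paints (part of) the screen white. Resolution is assumed, not queried.
        buf := make([]byte, 1920*1080*4)
        for i := range buf {
            buf[i] = 0xFF
        }
        if _, err := fb.Write(buf); err != nil {
            log.Fatal(err)
        }
    }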


>What problem is this trying to solve? Having a single webpage with full control of the device, but limiting how many resources the ads on it can take?

Restricting ad resource usage is the only non-niche answer I've imagined. You could make arguments that games could prioritize input/networking over refresh rate using something of this sort, but it seems like there should be a better mechanism for those kinds of applications -- especially since this proposal would require you to separate your JS into multiple subresources to take advantage of it AFAICT.

Assuming their motivation was in fact to restrict resource usage by ads, I'm not sure this is the right way to go about it. Perhaps if your ad network can't serve resource efficient ads you should change ad networks (or pressure your current ad network to improve).


>Banning a single service such as WhatsApp is not a solution to this problem.

Generalizing this argument a bit, banning encryption is also not a solution to this problem. The cat is, as they say, out of the bag, and unless we're going to burn every cryptography book and remove every website documenting cryptographic methods or hosting cryptography code, there's no putting it back [1].

1: Presuming the development and distribution of effective post-quantum cryptography cannot be prevented, which, considering the current state of PQC, seems unlikely.


I think this is worth taking a step further and asking for a definition of cryptography...what is cryptography?

Obviously here we are speaking in a mathematical sense, but encryption of information predates the internet. Hell, it predates electricity. Where do you draw the line? Can I not encrypt my conversation with a friend by referencing shared unique experiences?


I was talking to a young woman a bunch of us helped get into drug rehab recently, and she said that when she first moved here she used dating apps to find people who supply drugs. All of a sudden, her best friend's name is Molly and going out line dancing means something completely different on dating sites. I forget what she was calling the different drugs, but like a secret crypto key, they shared a common language.


You should read The Code Book sometime


Darmok and Jalad at Tanagra.

