"The problem solver": asks how to solve a specific error while also posting the stacktrace/error which already contains a message about how to solve the error
I love this one. We’ve all missed something plain as day at some point. I pride myself on RTFM before I start working with something, but I’ve earned this medal a few times.
Can we also start handing out achievements to repos and/or orgs?
I have a few in mind; “Clear As Molasses” would be a good one for the README: “Documents something useful in an entirely opaque manner.” (Edit: Maybe requiring you to read the code to understand what the README failed to articulate about something basic?)
> [An] U-4701, nachrichtlich [an] U-Stützpunkt Lübeck von Chef 4. U-Flottille: Mit U-4702 und U-4703 zur Flender Werft Lübeck gehen. Von dort folgt Weiteres.
> Translation (preliminary):
> [To] U-4701, for information [to] Submarine Base Lübeck from Chief of 4th Submarine Flotilla: With U-4702 and U-4703 go to Flender Dockyard at Lübeck. From there more follows.
There are about 2^67 different Enigma machine initial settings. The inverse probability of the appearance of a real seven-letter German word (LUEBECK) twice in a random string of similar length to this message is a number that's pretty close to 2^67. So if you decrypted one ciphertext message with all the different incorrect settings, you might expect to see one purported plaintext which isn't correct but which has two appearances of LUEBECK, or a similarly misleading occurrence. Since there's also one correct plaintext, seeing LUEBECK twice already puts you at roughly 50/50 that it's the real message versus the most convincing wrong plaintext (if you had no prior knowledge of what the settings might be). The additional presence of even a few of the other recognizable German words (or common abbreviations such as triple letters and the shortened names for the numbers) makes it overwhelmingly likely that this is the correct plaintext. LUEBECK + LUEBECK + STUETZPUNKT in one message makes the chance that it's not the real message on the order of winning a jackpot in a state lottery two weeks running, even if the rest of the message were gibberish.

In practice, much shorter pieces of plaintext than the double LUEBECK (like the presence of a single triple U, one spelled-out number, or highly abbreviated weather info) were used to validate guessed settings with a high degree of confidence.
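A rough sanity check of that arithmetic in Python, under strong simplifying assumptions (uniformly random letters, LUEBECK required at two fixed positions; the figures are my own ballpark, not from actual Enigma cryptanalysis):

    import math

    settings = 2**67     # approximate Enigma key-space size
    p_word = 26**-7      # chance of LUEBECK at one fixed position
    p_twice = p_word**2  # chance of it at two fixed positions

    print(f"1/p_twice = 2^{math.log2(1 / p_twice):.1f}")  # 2^65.8, near 2^67
    print(f"expected false positives: {settings * p_twice:.1f}")  # ~2.3

Letting the word land anywhere in the message adds a combinatorial factor, which is why the 50/50 is rough rather than exact.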
That would be true if it actually did the decryption, but my point was that an LLM doesn't decrypt. It just has the encrypted string followed by the decrypted string in its training data, and so it outputs something that's almost correct (the numbers being wrong: 4501, 4502, 4503 instead of 4701, 4702, 4703; maybe some bugged training data, maybe hallucination).
I find it interesting that it pulled some kind of interpretation from the string; far more than I would have. I asked it to translate the English data back into a similar plaintext string and then asked a second instance to decode it, and it came back with a similar, slightly distorted response.
The point is more to say that a language model is exactly the sort of thing that would be used to determine whether a given potentially decoded plaintext string is actually decoded, and given the various anachronisms and shorthands involved, our personal language models may not be adequate.
But a giant one that's been fed all sorts of data, including examples of text of similar usage, actually sounds like it might be exactly the tool for this problem.
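As a toy illustration of that idea, here is a character bigram model standing in for the giant one (the corpus string and candidates are made up for the example):

    from collections import Counter
    from math import log

    # Stand-in training data; a real attempt would use a large German corpus.
    CORPUS = "ANUBOOTSSTUETZPUNKTLUEBECKVONCHEFDERVIERTENUFLOTTILLE"
    counts = Counter(CORPUS[i:i + 2] for i in range(len(CORPUS) - 1))
    total = sum(counts.values())

    def score(text):
        # Log-likelihood under the bigram model, add-one smoothed so
        # unseen bigrams don't send the score to minus infinity.
        return sum(log((counts[text[i:i + 2]] + 1) / (total + 26**2))
                   for i in range(len(text) - 1))

    candidates = ["MITUVIERSIEBENNULLEINSZURFLENDERWERFT",  # German-like
                  "QXKZJVPWQLMNBXZKQWJXRTYVQZLKPWMNBXZQK"]  # gibberish
    print(max(candidates, key=score))  # the German-looking one wins

A bigram model is laughably weak next to an LLM, but the principle is the one classical crypto tooling already uses with quadgram statistics: rank candidate plaintexts by how probable a language model finds them.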
I think the only problem is that they're proprietary. If they were free software that everyone could use then we could compensate by making the problems harder.
It's not really any different to using high-level programming languages with extensive standard libraries versus doing everything in assembly language.
I think it's pretty different from using high-level languages. I'm not interested in a competition that would be decided before even starting by who has the best AI program.
I'm not interested in a competition that is decided by who has the best Python interpreter, but since we all have the same Python interpreter that isn't a problem.
Even if it is free, I have no interest in playing chess against a superhuman chess bot. You don’t even have to know how to play chess to use the moves the bots recommend and win against a grandmaster.
The line is blurry today, but we are moving into territory where humans will not be able to solve programming challenges that require under 200 lines of code faster than an AI; we are simply slower to read and type. The AIs will likely get better at understanding the problems, requiring less help from humans and fewer attempts to find a solution.
At some point using a language model to compete in these kinds of programming contests will absolutely be like using a poker or chess bot to compete in those games.
I mean, they basically have a single-purpose programming language (AFAIK Flutter is the only relevant use case), and it is taking them years to add ergonomic improvements to the language that would make day-to-day dev much easier.
Meanwhile they spent colossal effort on null safety and broke half the community's packages (including a bunch of first-party ones) in the process. I'd say go for the low-hanging fruit first.
Because the language is so limited, it's impossible to solve this via a library, unlike in, say, JavaScript.
This reminds me so much of the ClojureScript "win". I love that JS is rounding into shape as a serious language, but the version turmoil! CLJS was "done" when it came out, so the CLJS developer is insulated from any churn.
I know little about Dart/Flutter history, but it sounds like ClojureDart might do a lot for Flutter development if only for CLJD's stability.
I'd rather take sound null safety over records. To say it's taking them years is disingenuous when they've mainly been working on null safety all this time, not twiddling their thumbs.
By that logic, every employee of Apple/Google/Meta, or at least the ones working on the related projects, should be handed a fine every time Apple/Google/Meta gets fined by the EU for breaking the law/abusing its position? They are violating the law, after all.
Wait a few days. It may be possible at a later date.
I'm not storing cookies when using GMail, and at one time I regularly got those suspicious-login-type messages whenever the browser updated to a new version. At one point I had to click a link in the recovery email and enter the month when the account was created. I pretty much guessed several times until nothing worked, then tried again a day or a few later and got in again.
> At one point I had to click a link in the recovery email and enter the month when the account was created.
Seriously?! Who comes up with these security questions? This is such a useless question: on the one hand it's insecure, because there's a 1/12 chance of guessing right; on the other, who remembers what month they created an email account? I would venture a guess that most people here couldn't even get the year right (I certainly couldn't). It seems the question is only useful for locking out the legitimate owner.
> On the one hand it's insecure, because there's a 1/12 chance of guessing right; on the other, who remembers what month they created an email account?
IIRC, they require both month and year, so there'd be a bit more guesswork involved. I added the exact creation date for all my Google accounts to my password manager when I learned about this verification method.
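Back-of-the-envelope on how much the extra field buys (the 15-year window below is my assumption, not anything Google documents):

    # Month alone: 12 possibilities. Month plus year, if an attacker can
    # bound the account's creation year to a ~15-year window:
    print(12 * 15)  # 180 -> still trivial to guess without rate limiting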