the link in the email went to an obviously invalid domain; hovering the mouse cursor over it would have made this immediately clear, so even clicking that link should never have happened in the first place. red flag 1
but, ok, you click the link, you get a new tab, and you're asked to fill in your auth credentials. but why? you should already be logged in to that service in your default browser, no? red flag 2
ok, maybe there is some browser cache issue, whatever. so you trigger your password manager to provide your auth to the website -- but here, every single password manager would immediately notice that the domain in the browser does not match the domain associated with the auth creds, and either refuse to fill the creds in, or at an absolute minimum throw up a big honkin' alert that something is amiss, which you'd need to explicitly click an "ignore" button to get past. red flag 3
nobody should be able to publish new versions of widely-used software without some kind of manual review/oversight in the first place. but even ignoring that: if someone does have that power, and they get pwned by an attack like this, with at least 3 clear red flags that they would need to have explicitly ignored/bypassed, then CLEARLY this person cannot keep their current position of authority
> the link in the email went to an obviously invalid domain; hovering the mouse cursor over it would have made this immediately clear, so even clicking that link should never have happened in the first place. red flag 1
The link went to the same domain as the From address. The URL scheme was 1:1 identical to the real npm's.
> but, ok, you click the link, you get a new tab, and you're asked to fill in your auth credentials. but why? you should already be logged in to that service in your default browser, no? red flag 2
Why wouldn't I be? I don't stay logged into npm at all.
the from: address in every email is an arbitrary and unverified text string that the sender provides; anyone can send an email to anyone else and specify a from: of president@whitehouse.gov, and that's how it will show up to the recipient
what do you mean by the URL scheme? the scheme is the http or https part of a URL. and surely the host part of the URL was not the same as the real npm's host?
i'm not sure what this comment is trying to accomplish, it parses as FUD
To be fair, Rust could have done checked exceptions like Java has (but nobody uses); the problematic variant is RuntimeException. I think the real problem is that when Rust conceived of Result, it didn't constrain the type to just error handling and made it a little bit too much "anything goes". Which means that trying to shoehorn backtraces in after the fact with `?` and `try_into` is now hard. There could have been a world where `Result::Err` was actually a wrapper type that carried an optional source error for backtracing, with the generic error type embedded inside it instead. It would have been less flexible, but it would have made proper backtraces more tractable.
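As a rough sketch of what that alternate world might look like (everything here is invented for illustration; `WrappedErr` and its fields are not anything Rust actually ships):

```rust
use std::backtrace::Backtrace;
use std::error::Error;

// Hypothetical wrapper that every Err would carry, so a backtrace and an
// optional source chain come for free instead of being bolted on later.
pub struct WrappedErr<E> {
    pub error: E,                                 // the domain error itself
    pub source: Option<Box<dyn Error + 'static>>, // optional causal chain
    pub backtrace: Backtrace,                     // captured at creation
}

impl<E> WrappedErr<E> {
    pub fn new(error: E) -> Self {
        WrappedErr {
            error,
            source: None,
            backtrace: Backtrace::capture(), // respects RUST_BACKTRACE
        }
    }
}

// In this alternate world, the Err arm would always be the wrapper:
pub type TracedResult<T, E> = Result<T, WrappedErr<E>>;
```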
Is there a language that has proper exclusively checked exceptions? That is, not just syntax sugar around checking an error value (à la Swift), but actual “the processor signals an exception” semantics, but all exceptions are still enforced to be handled-or-passed by the compiler?
Honest question, because I can’t think of any. I can see it being advantageous to have checked-only exceptions but there has to be a good reason why it’s so rare-to-never that we see it.
I’m not sure how else you’d get the holy grail, which I’d define as:
1. The compiler enforces that you either handle an exception or pass it to the caller
2. Accurate and fine-grained stack traces on an error (built-in, not opt-in from some error library du jour)
3. (ideally) no runtime cost for non-exception paths (no branches checking for errors, exceptions are real hardware traps)
C++ has 2 and 3, Java has only 2 (because RuntimeException exists), Rust has only 1. I’d love a language with 1 and 2, but all 3 would be great.
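For concreteness, here is property 1 as Rust expresses it (strictly a must_use lint rather than a hard error); a minimal sketch with an invented config-reading function:

```rust
use std::fs;
use std::io;

// A Result must be consumed: you either handle the Err arm
// or pass it to the caller with `?`.
fn read_config(path: &str) -> Result<String, io::Error> {
    let text = fs::read_to_string(path)?; // pass the error up
    Ok(text)
}

fn main() {
    match read_config("app.toml") {
        Ok(text) => println!("read {} bytes", text.len()),
        Err(e) => eprintln!("couldn't read config: {e}"), // or handle it here
    }
}
```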
I can't think of any either. A sibling commenter suggests maybe Eiffel but I haven't really tried or looked at that language so I don't know if it's true. I think having all 3 would be great but if I can only choose one of them I personally prefer #1.
This is the "I don't care what fails nor do I wish to handle them" option. Which for some use cases may be fine. It does mean that you don't know what kinds of failures are happening nor what the proper response to them is, though. Like it or not errors are part of your domain and properly modeling them as best you can is a part of the job. Catching at the top level still means some percentage of you users are experiencing a really bad day because you didn't know that error could happen. Error modeling reduces that at the expense of developer time.
Top-level error handling doesn't mean losing error details. When done well, it uses specialized exceptions and a catch–wrap–rethrow strategy to preserve stack traces and add context. Centralizing errors provides consistency, ensures all failures pass through a common pipeline for logging or user messaging, and makes policies easier to evolve without scattering handling logic across the codebase. Domain-level error modeling is still valuable where precision matters, but robust top-level handling complements it by catching the unexpected and reducing unhandled failures, striking a balance between developer effort and user experience.
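A minimal sketch of the catch-wrap-rethrow idea in Rust terms (names here are illustrative, not from any particular library): a wrapper error that adds context while keeping the original reachable via source(), plus one top-level reporter that walks the whole chain:

```rust
use std::error::Error;
use std::fmt;

// Wrapper that adds context but preserves the underlying error as source().
#[derive(Debug)]
struct Contextual {
    context: String,
    source: Box<dyn Error>,
}

impl fmt::Display for Contextual {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.context)
    }
}

impl Error for Contextual {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(self.source.as_ref())
    }
}

// Top-level handler: one place that logs the full error chain consistently.
fn report(err: &dyn Error) {
    eprintln!("error: {err}");
    let mut cause = err.source();
    while let Some(c) = cause {
        eprintln!("caused by: {c}");
        cause = c.source();
    }
}
```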
If you are actually using specialized exceptions and a catch-wrap-rethrow strategy, then you are doing error modeling and you aren't "just letting them bubble up to the top", which is basically making my point for me.
"I don't care what fails" means not catching any exception/error. My comment was the exact opposite of the idea. Top level function will bubble up every exception, no matter how deep or from which module.
But often the way you actually learn what errors can happen is when your users start complaining about them, not because you somehow knew about them beforehand.
Or maybe you have 100% path coverage in your tests..
So you are talking about bugs that don't get caught in development? That happens in Rust as well. The borrow checker does not catch every bug or error. A random module you are using could panic, and you would not know, with Rust (or any language for that matter), until your users trigger those bugs.
The problem, I think, is that all the easy infrastructure problems have been solved and the market is crowded with those solutions. Solving the hard problems is probably where you could have a viable business, but I don't really see that many companies trying to solve those:
* Making monorepos work for large companies.
* Mixed-language builds, which are still an unsolved CI/CD problem for most companies.
It was my understanding that battery fires don't go out because they are basically self-fueling. You would be better served having a way to contain them until they burn themselves out.
Sometimes I think sockets with a spec for what's on the "wire" are about as good an abstraction as you can get for arbitrary cross-language calling. If you could have your perfect abstraction for cross-language calling, what would it be?
Not sure, but it should be message-oriented, rather than stream-oriented. You have to put a framing protocol on top before you can do anything else. Then you have to check that framing is in sync and have some recovery.
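For example, the simplest version of that framing layer is a length prefix. A minimal Rust sketch (4-byte big-endian length followed by the payload; the exact prefix size is an arbitrary choice here):

```rust
use std::io::{self, Read, Write};

// Write one message: length prefix, then payload.
fn write_frame<W: Write>(w: &mut W, msg: &[u8]) -> io::Result<()> {
    w.write_all(&(msg.len() as u32).to_be_bytes())?;
    w.write_all(msg)
}

// Read one message back. An error or garbage length here is the
// "framing out of sync" case the comment mentions, where you need
// some recovery strategy.
fn read_frame<R: Read>(r: &mut R) -> io::Result<Vec<u8>> {
    let mut len = [0u8; 4];
    r.read_exact(&mut len)?;
    let mut buf = vec![0u8; u32::from_be_bytes(len) as usize];
    r.read_exact(&mut buf)?;
    Ok(buf)
}
```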
I'm currently struggling with the connection between Apache mod_fcgid (retro) and a Rust program (modern). Apache launches FCGI programs as subprocesses, with stdin and stdout connected to the parent via either pipes or UNIX local sockets. There's a binary framing protocol, an 8-byte header with a length. You can't transmit arbitrarily large messages; those have to be "chunked". There's a protocol for that. You can have multiple transactions in progress. The parent can make out-of-band queries of the child. There's a risk of deadlock if you write too much and fill the pipe when the other end is also writing. All that plumbing is specific to this application.
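For reference, that 8-byte record header looks like this per the FastCGI spec; the u16 content length is also why large messages have to be chunked (65535 bytes max per record). A minimal parsing sketch:

```rust
// FastCGI record header, per the spec: version, record type,
// request id (big-endian u16), content length (big-endian u16),
// padding length, and one reserved byte.
struct FcgiHeader {
    version: u8,
    record_type: u8,
    request_id: u16,
    content_length: u16,
    padding_length: u8,
}

fn parse_header(buf: &[u8; 8]) -> FcgiHeader {
    FcgiHeader {
        version: buf[0],
        record_type: buf[1],
        request_id: u16::from_be_bytes([buf[2], buf[3]]),
        content_length: u16::from_be_bytes([buf[4], buf[5]]),
        padding_length: buf[6],
        // buf[7] is reserved
    }
}
```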
(Current problem: Rust std::io appears to not like stdin being a UNIX socket. Trying to fix that.)
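One possible workaround, assuming fd 0 really is a connected UNIX socket handed down by the parent, is to bypass std::io::stdin() and wrap the raw descriptor directly:

```rust
use std::os::fd::FromRawFd;
use std::os::unix::net::UnixStream;

fn stdin_as_unix_stream() -> UnixStream {
    // SAFETY: assumes fd 0 is a connected UNIX socket (as when the FCGI
    // parent passes one) and takes ownership of it, so nothing else should
    // use std::io::stdin() afterwards.
    unsafe { UnixStream::from_raw_fd(0) }
}
```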
You could definitely do JSON / msgpack and have like 5 C API functions (read, write, wake_on_readable, ...), and it wouldn't be the worst thing, and it wouldn't incur any IPC overhead.
The distinction between them and religion is that religion is free to say that those axioms are a matter of faith and treat them as such. Rationalists are not as free to do so.
> WSDL being language-agnostic ensures that bindings in different languages, and on the client vs the server side, are consistent with each other.
In theory. In reality, Java could talk to Java, M$ stuff could talk to other M$ stuff, and pretty much everyone else was left out in the cold. Consistent cross-language interop never actually happened, despite the claims that it would.
To answer your serious questions: gRPC is actually not a bad choice if you are at the beginning of your project. Migrating over to it is going to be a challenge if you were using something else, because it's pretty opinionated, but if you have a clean sheet, that's what I would use. Cap'n Proto or Thrift are also probably good choices. These are all solid RPC frameworks that give you everything you need out of the box, at the expense of more complicated builds.
So far, every single use of MCPs I've seen in the wild sends the response straight to the LLM without doing any validation. Seems reasonable for the author to expect that when it's exactly what is happening.