
Yup, we've all been cornered at a party by that dude, the one who has all these great contrarian ideas he got from reading books. And listening to podcasts. And reading blog posts like this one.

Reading books is an excellent idea. Reading books for the reasons in this blog post is a terrible idea.

Yeah, MPESA was and is still an incredible product. A real lightning-in-a-bottle moment. Even if they've struggled to replicate that success in other geos, the original vision and execution back in the 2000s is a textbook case that bears study.

The npm ecosystem's approach to supply chain security is criminally negligent. For the critical infrastructure that underpins the largest attack surface on the Internet you would think that this stuff would be priority zero. But nope, it's failing in ways that are predictable and were indeed predicted years ago. I'm not closely involved enough with the npm community to suggest what the next steps should be but something has to change, and soon.


> we’ve eliminated resilience wherever we could, because it’s more cost-effective. Resilience is expensive.

You are right. But alas, a peek at the AMZN stock ticker suggests that the market doesn't really value resilience that much.


Stocks stopped being indicative of anything decades ago though.


Echoing a lot of users ITT, Windows has been good to me but the enshittification has reached what feels like the end point.

Windows' value to me was "everything just worked". But that's no longer the case, unless you are willing to walk down Microsoft's centralized rails. Need an MS Account and OneDrive... need expensive modern hardware... get ads and crapware... get telemetry and data exfiltration. The effort of working around all that is non-trivial. EDIT: and if I were OK with all that stuff I'd already be captured by Apple.

If I have to fuck around with something in my home OS, that OS might as well be Linux. So now I am compiling wifi and printer drivers from GitHub (FFS Linux!) instead of disabling telemetry and hacking an install with local accounts only.

The challenge, as always, is going to be taking the family with me.


Sounds about right.

These kids have been on camera since they were in the womb. The delivery had a pro videographer. Parents had baby monitors with a video feed, later a nanny cam. Schools had cameras in the classrooms and buses from before first grade. Higher grades onwards all their peers had smartphones and social media accounts.

Some middle aged dude who doesn't want to be on video makes no sense to them, like that weird uncle of yours who in 2010 had no phone or email address.


TFA doesn't make it clear to me how the line is drawn from "a bunch of mates playing a pro-sports seasonal drafting game, spiced up with a buy-in pot" to "you sit alone in front of an online Skinner box and lose money on opaque spot wagers on ongoing pro sports events".

One thing is very different from the other.


> ...summarize these rules into one sentence, it’d be:
> Prefer global UI consistency over local optimizations.

I weep for our field, because this should be true beyond the confines of each app.

Instead even on the desktop we have hordes of local "optimisations" - one in each bloated Electron POS.

There's a place for novelty UI - entertainment.

Productivity software should have no place for it.


I think the only places left for consistency are Emacs, TUIs (though the 256-color and 24-bit enthusiasts are encroaching on that), and desktop environments like GNOME and KDE.

The most sensible approach is the one mpv takes: ship the core logic of your app as a bundle/library, and let everyone build an environment-specific UI on top of it.
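Something like this purely hypothetical sketch (invented names, not mpv's actual libmpv API, which is C):

    // Core logic ships as a library with no UI assumptions baked in.
    interface PlayerCore {
      load(uri: string): void;
      play(): void;
      pause(): void;
      seek(seconds: number): void;
      // UIs subscribe to state changes instead of owning the state.
      on(event: "position" | "ended", cb: (value?: number) => void): void;
    }

    // Each environment wraps the same core in its own native shell.
    function attachDesktopUi(core: PlayerCore): void { /* native widgets drive core */ }
    function attachTerminalUi(core: PlayerCore): void { /* keybindings drive core */ }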


It's a stretch to pin blame on Microsoft. They're probably the reason the service is still up at all (TFA admits as much). In hindsight it's likely that all they wanted from the purchase was AI training material. At worst they're guilty of apathy, but that's no worse than the majority of npm ecosystem participants.


"In hindsight it's likely that all they wanted from the purchase was AI training material."

Microsoft already owned GitHub. I don't see how acquiring npm would make a meaningful difference with respect to training material, especially since npm was already an open package repository which anyone could download without first buying the company.


Not all npm packages are hosted on GitHub. I don't know what the number is, but I know I don't have my npm packages on GitHub (they're on Bitbucket instead).


It’s NOT a stretch to blame Microsoft. How many billions have we spent chasing “AI”? These issues could have been easily solved if a fraction of that attention had been spent on them. This has been going on for well over a decade.

Microsoft isn’t any better steward than the original teams.

This issue has happened plenty of times under Microsoft’s ownership.


Yeah, easily solved.

Would love to hear your genius solutions right here that Microsoft is too dumb to come up with and implement.


Well, from recent experience: they could make “npm audit” usable without needing a third-party tool like “better-npm-audit”. There’s no filtering or configuration at all. So many unimportant or irrelevant vulnerabilities get reported that I have no doubt people just ignore auditing entirely, because the 1000 high-severity DoS vulnerabilities they can’t silence aren’t relevant to their CLI app. =/

The tradeoff for security is usability, and the worse the usability gets, the more people fight back against it.

https://www.npmjs.com/package/better-npm-audit
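For illustration, the kind of filtering being asked for can be approximated by post-processing `npm audit --json` yourself - a minimal sketch (assumes the npm v7+ JSON report shape; the accepted-list contents are invented):

    import { execSync } from "node:child_process";

    // Packages whose advisories we've triaged and accepted, e.g. DoS
    // findings that don't matter for a local CLI tool.
    const accepted = new Set(["some-transitive-dep"]);

    let out: string;
    try {
      out = execSync("npm audit --json", { encoding: "utf8" });
    } catch (e: any) {
      // npm audit exits non-zero when it finds anything; JSON is still on stdout.
      out = e.stdout;
    }

    const report = JSON.parse(out);
    const relevant = Object.values<any>(report.vulnerabilities ?? {}).filter(
      (v) => !accepted.has(v.name) && v.severity !== "low"
    );

    if (relevant.length > 0) {
      console.error(`${relevant.length} unreviewed advisories remain`);
      process.exit(1);
    }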


Hate to tell ya, but package signing is not a new problem, and they could have made it opt-in. There has been a GitHub issue and a merge request submitted to enable it, but they were closed and denied. Malice or incompetence?

Hilarious that you think this is a some sort of impossible feat for a trillion dollar company.


Seriously? This is extremely low-hanging fruit that's not being taken care of. You shouldn't be able to take over a software dependency with a phishing email. Requiring simple PGP code signing, or even just passkey authentication, would eradicate that entire attack vector.

Future attacks would then require a level of access that's already synonymous with "game over" for all intents and purposes (e.g. physical access, malware, or an inside job). It's not bulletproof, but it would be many orders of magnitude better than the current situation.
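As a toy sketch of the idea, using Node's built-in Ed25519 support (this is not npm's actual mechanism, and the hard part - key distribution and trust - is omitted; file name is hypothetical):

    import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";
    import { readFileSync } from "node:fs";

    // In reality the maintainer's key would live on a hardware token or
    // passkey; generated here only to keep the sketch self-contained.
    const { publicKey, privateKey } = generateKeyPairSync("ed25519");

    const tarball = readFileSync("pkg-1.0.0.tgz"); // hypothetical artifact
    const digest = createHash("sha512").update(tarball).digest();

    const signature = sign(null, digest, privateKey);      // at publish time
    const ok = verify(null, digest, publicKey, signature); // at install time

    console.log(ok ? "signature valid" : "REJECT: not signed by the maintainer key");

The point being that a stolen npm token alone could no longer push an installable release; the attacker would also need the signing key.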


As long as you can publish a package in a CI environment (which is essential), none of what you mentioned matters. And that's not even the point.

That phishing email is just one of the ways attackers infiltrate, and it's not Microsoft's problem to begin with. Next time, the attacker could install malware on your machine that silently runs code and publishes a package on your behalf using your own locally stored credentials while you think everything is fine, and you'd still blame Microsoft for not doing enough.


> Next time, the attacker could install malware in your machine

I already addressed this in the previous comment, but I hope you realize the absurdity of this statement. If the attacker can corner you in a dark alley, they can steal your YubiKey and beat the PIN out of you, too. By that logic, is 2FA futile and should we all stop using it?

Security isn't binary; simply raising the bar from "falling for a phishing email" to "gaining access to someone's machine" will probably eliminate 99% of all compromises.

> and you'd still blame Microsoft for not doing enough

Gaining access to someone's machine is a definitive "game over" scenario; using that as an excuse not to harden security to the point where it's the only option left is lazy and irresponsible. Even with that kind of access, code signing would slow the viral spread way down, which would make a difference.

Once you make it hard to hijack packages, time will be better spent investing in sandboxing, which also protects people from insider threats.


You should indicate whether or not you work for or somehow are affiliated with Microsoft.


I would contend that they are no worse than the original teams, who also clearly didn't care. Their motivations may have been growth rather than AI training data, but the outcomes were the same.


The difference is that Microsoft has the money to fix the problem. But none of the parties (the OG team, Microsoft, npm consumers) has any capital interest in the matter.


> It's a stretch to pin blame on Microsoft. They're probably the reason the service is still up at all.

I reckon that the ecosystem would have been much healthier if NPM had not been kept running without the care it requires.


I did wonder about that. Maybe yeah. It's likely that several no-better forks would have sprung up right away.


Maybe it's just me but I find the complaint confusing and the suggested remedy absent in TFA, despite reading it twice.

Data comes from outside your application code. Your algorithms operate on the data. A complaint like "There isn’t (yet?) a format for just any kind of data in .class files" is bizarre. Maybe my problem is with his hijacking of the terms 'data' and 'object' to mean specific types of data structures that he wants to discuss.

"There is no sensible way to represent tree-like data in that [RDBMS] environment" - there is endless literature covering storing data structures in relational schemas. The complaint seems to be to just be "it's complicated".

Calling a JSON payload "actual data" but a SOAP payload somehow not is odd. Again the complaint seems to be "SOAP is hard because schemas and ws-security".

Statements like "I don’t think we have any actually good programming languages" don't lend much credibility and are the sort of thing I last heard in first year programming labs.

I'm very much about "Smart data structures and dumb code works a lot better than the other way around" and I think the author is starting there too, but I guess he's just gone off in a different direction to me.


> "There is no sensible way to represent tree-like data in that [RDBMS] environment" - there is endless literature covering storing data structures in relational schemas. The complaint seems to be to just be "it's complicated".

Ya, this one really confused me. Tree-like data is very easy to model in an RDBMS in the same way it is in memory, with parent+child node pointers/links/keys.

It's possible he was thinking of one of these challenges:

1. SQL doesn't easily handle the tree structure. You can make it work, but it's a square peg in a round hole. (Recursive CTEs are the standard workaround; see the sketch below.)

2. He mentioned JSON, so maybe he was really thinking in terms of different types of data for each node, meaning one node might have some text data and the next might have a list or dictionary of values. That is tougher in an RDBMS.
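A minimal sketch of the adjacency-list approach, here using SQLite via better-sqlite3 for brevity (any RDBMS with recursive CTEs works the same way; schema and data are invented for illustration):

    import Database from "better-sqlite3";

    const db = new Database(":memory:");

    // Adjacency list: each row carries a pointer to its parent, exactly
    // like an in-memory node holding a parent reference.
    db.exec(`
      CREATE TABLE node (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES node(id),
        label     TEXT NOT NULL
      );
      INSERT INTO node VALUES
        (1, NULL, 'root'),
        (2, 1,    'child-a'),
        (3, 1,    'child-b'),
        (4, 2,    'grandchild');
    `);

    // A recursive CTE walks the tree top-down; this is the part plain
    // SELECT/JOIN SQL can't express, hence the "square peg" feeling.
    const rows = db.prepare(`
      WITH RECURSIVE subtree(id, label, depth) AS (
        SELECT id, label, 0 FROM node WHERE parent_id IS NULL
        UNION ALL
        SELECT n.id, n.label, s.depth + 1
        FROM node n JOIN subtree s ON n.parent_id = s.id
      )
      SELECT * FROM subtree
    `).all();

    console.log(rows); // root (0), child-a (1), child-b (1), grandchild (2)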


My main complaint with SOAP was how leaky an abstraction it inevitably was - it could look seductively easy to use, but then something would go wrong, or you'd try to use it across different tech stacks, and debugging would become a nightmare.

When I first encountered RESTful web services using JSON, the ability to easily invoke them with curl was such a relief... (and yes, like lots of people, I went through a phase of being dogmatic about what REST actually is, HATEOAS and all the rest - but I got over that years ago).

NB I also am puzzled as to the definition of "data" used in the article.


Sure, SOAP was often awful, I agree with that. But I can't see any angle where one can credibly assert that a SOAP XML payload isn't equivalent to a REST JSON payload in terms of the operation of a receiving application. Both are a chunk of structured information: your application parses it and operates on the resulting data structures.
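To make the equivalence concrete, here's the same record in both encodings (hand-written, illustrative payloads; the envelope namespace is the real SOAP 1.1 one):

    // The same logical record as a SOAP-style XML body and as JSON.
    const soapBody = `
      <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
        <soap:Body>
          <GetPriceResponse><sku>A-42</sku><price>9.99</price></GetPriceResponse>
        </soap:Body>
      </soap:Envelope>`;

    const jsonBody = `{ "sku": "A-42", "price": 9.99 }`;

    // Either way the receiver ends up with the same structure to operate on.
    type PriceQuote = { sku: string; price: number };
    const fromJson: PriceQuote = JSON.parse(jsonBody);
    console.log(fromJson.sku); // "A-42" - an XML parser over soapBody
    // yields the equivalent { sku, price } record, just in a heavier envelope.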


>But I can't see any angle where one can credibly assert that a SOAP XML payload isn't equivalent to a REST JSON payload in terms of the operation of a receiving application.

I guess the angle is that there was a style of SOAP where the payload was interpreted as a remote procedure call conforming to an interface described in a WSDL document.

So there would be a SOAP library or server infrastructure (BizTalk) on the receiving end that would decode the message and turn it into a function call in some programming language.

In that sense, the payload would be "data" only for the infrastructure code but not on the application level.

And now I'm going to have to spend the rest of the day trying to forget this again :)


SOAP is the 9th circle of hell.

Most implementations don't retrieve parameters by tag; they retrieve them by order, even if that means the tags don't match at all. This is completely unlike JSON.

Also, almost nobody uses REST, so stop calling things REST when you are talking about HTTP APIs.


Sorry about getting sidetracked there by my SOAP rant - I completely agree with your point.


>When I first encountered RESTful web services using JSON, the ability to easily invoke them with curl was such a relief... (and yes, like lots of people, I went through a phase of being dogmatic about what REST actually is, HATEOAS and all the rest - but I got over that years ago).

I'm not dogmatic about this. People don't understand what REST is. REST is a completely useless technology that absolutely nobody needs. Using the right words for things isn't dogmatism, it's being truthful. The backlash from most people comes from Fielding's superiority complex: he presents REST as superior when it is merely different, different in ways that aren't actually useful to most people. And yet people constantly give this guy a platform by referring to his obsolete architecture, earning them the "well actually"s they deserve.


Yeah, a leaky abstraction with abstraction inversion on top of it! Within the actual payload there was a method identifier, so you had sub-resource routing on top of the URL routing, just so middleware could handle things in the payload instead of in the headers... You had an application protocol (SOAP) on top of an application protocol (HTTP).

