Hacker News | caleblloyd's comments

Flash removal broke multiple government sites. I couldn't take a required training course for a few months after flash support was removed and the site was taken offline for an upgrade.

I’m sure ActiveX and Silverlight removal did too. And iframes not sharing cross domain cookies. And HTTP mixed content warnings. I get it, some of these are not web specs, but some were much more popular than XSLT is now.

The government will do what they do best, hire a contractor to update the site to something more modern. Where it will sit unchanged until that spec too is removed, some years from now.


Flash was never a web standard. XSLT is.

What's the practical difference to users and site maintainers?

Flash was dependent on a proprietary plugin from a single vendor. XSLT styled documents are compatible out of the box in any web browser from multiple competing vendors, even old Internet Explorer.

The iPhone never supported Flash. But thanks to web standards it supports viewing RSS feeds and other weird XML/XSLT artifacts from the past to this day.


Maybe I'm missing something here, but can't XSLT be processed server side instead of browser side?

It seems like a very easy fix for the handful of websites that still use it.


XSLT is often used on low-power IoT devices which don't have the resources to render server-side.

What are those low-power devices (can you identify any?) doing with XSLT, then? If they don't have the power to do the transformation, it seems pointless for them to possess the template needed to perform the process.

that's why they use XSLT. the whole point is that rendering happens on the client.

you can find discussion in the several other recent XSLT threads


RSS/Atom feeds can use them. How does it make sense to maintain two versions of the same data on the server?

Exactly. The Atom feed of my website declares an XSLT stylesheet which transforms it to HTML. That way it can be served directly to, and rendered prettily by, a web browser (see https://paul.fragara.com/feed.xml). For the curious, the XSLT can be found here: https://gitlab.com/PaulCapron/paul.fragara.com/-/blob/master...
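The general mechanism is a single xml-stylesheet processing instruction at the top of the feed; a minimal sketch (the href below is a placeholder, not the actual path from my site):

    <?xml version="1.0" encoding="utf-8"?>
    <?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
    <feed xmlns="http://www.w3.org/2005/Atom">
      <title>Example feed</title>
      <!-- entries elided; the browser applies /feed.xsl before rendering -->
    </feed>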

Btw, you can also apply an XSLT sheet to an XML document using standard JavaScript: https://developer.mozilla.org/en-US/docs/Web/API/XSLTProcess...
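A minimal sketch of that API, assuming placeholder paths for the feed and stylesheet (run inside a module or async function):

    // Fetch and parse the XML document and the XSLT stylesheet.
    const parser = new DOMParser();
    const [xmlText, xslText] = await Promise.all([
      fetch("/feed.xml").then((r) => r.text()),
      fetch("/feed.xsl").then((r) => r.text()),
    ]);
    const xmlDoc = parser.parseFromString(xmlText, "application/xml");
    const xslDoc = parser.parseFromString(xslText, "application/xml");

    // Apply the stylesheet and insert the result into the page.
    const processor = new XSLTProcessor();
    processor.importStylesheet(xslDoc);
    const fragment = processor.transformToFragment(xmlDoc, document);
    document.body.appendChild(fragment);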


There would be no reason to fix this if the chrome people had kept up their end of the bargain by supporting the standard. We can quibble as to whether or not XSLT should have been part of the standard to begin with but it IS part of the standard.

Google says it's "too difficult" and "resource intensive" to maintain... but they've deliberately left that part of the browser to rot instead of incrementally upgrading it to a modern XSLT standard as new revisions were released, so it seems like a problem of their own making.

Given their penchant for user-hostile decisions it's hard to give the chrome team the benefit of the doubt here that this is being done purely for maintainability and for better security (especially given their proposal of just offloading it to a js polyfill).


Commercial enterprises can only support standards if doing so is commercially viable.

It's commercially beneficial to make the web standard so complex that it's more or less impossible to implement, since it lets you monopolise the browser market. However complexity only protects incumbents if you can persuade enough people to use the overcomplicated bits. If hardly anyone uses it, like xslt, then it's a cost for the incumbent which new entrants might get away without paying. So there's no real upside for Google in supporting it. And you can't expect commercial enterprises to do something without any upside.


I expect commercial enterprises not to be allowed to engage in anti-competitive and consumer-hostile behavior. Like it or not, and regardless of their contributions to tech/the web, Google is notorious for pulling the rug out from under open industry standards only to replace them with their own proprietary alternatives or, as you described, "standards" so complex they're more or less impossible to implement, so you're "forced" to use/buy their product.

They will be as anti-competitive and as consumer hostile as they can get away with. Adding and removing features from the standard is so ambiguously motivated that I almost can't imagine them being successfully prosecuted for it. In a way it's pretty clever.

Nobody is going to do things you agree with all the time. That doesn't mean everything they do should be condemned by default, without thorough investigation into their motives.

There are no easy fixes for government sites.

I don’t quite understand the part of the article that claims you can skip all the checks under the assumption that this is an older browser and that there is no CSRF vulnerability.

The algorithm seems sane for modern browsers. But you could probably find an outdated browser (an older Android device's WebView would be common) where the whole thing breaks down.

So I think tokens can be a thing of the past for modern browsers. I like the middleware, and I hope it shows up in ASP.NET proper soon. My guess is they’ll keep the token middleware around alongside it for some time once it does though, and the decision on which to use will come down to whether or not you want to make sure older browsers are secure.
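For reference, the modern-browser check under discussion boils down to something like this hand-rolled sketch (plain JavaScript, not the actual ASP.NET middleware; header semantics are per the Fetch Metadata and Origin specs):

    // Decide whether a state-changing request can skip the CSRF token.
    // Relies on headers only modern browsers send, which is exactly why
    // it breaks down on outdated browsers and old Android WebViews.
    function isSameOriginRequest(headers, expectedOrigin) {
      // Modern browsers attach Sec-Fetch-Site to every request.
      const site = headers["sec-fetch-site"];
      if (site === "same-origin" || site === "none") return true;
      if (site === "same-site" || site === "cross-site") return false;

      // Fallback: compare the Origin header when present.
      const origin = headers["origin"];
      if (origin) return origin === expectedOrigin;

      // No signal at all: likely an old browser or a non-browser client.
      // This is the ambiguous case where a token is still the safe answer.
      return false;
    }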


I am the Product/Eng Lead and a Co-founder of a company formed ~1 year ago building AI-native developer tooling for Platform Engineers. We have been able to iterate quickly through PoC phases and get initial feedback on ideas sooner. For features that make it into production code, we do have to spend some time re-working them with more formal architectures to remove "AI slop", but we are also able to try more things out to figure out which to move forward with, so I feel like it is a net gain.

Part of "AI-native" means being able to really focus on how we can improve our Product to lessen the upfront burden on users and shorten time-to-value. For the first time in a while, I feel like there is more skill needed in building an app than just doing MVC + REST + Validation + Form Building. We collect only the minimum data needed for each form upfront from our users, then stream things like Titles, Icons, Descriptions, etc. in a progressive manner to reduce the form-filling burden on our users.

I've been able to hire and mentor Engineers at a quicker pace than in the past. We have a mix of newer and seasoned Engineers. The newer Engineers seem to be learning far quicker with focused mentoring on how to effectively prompt AI for code discovery, scaffolding, and writing tests. Seasoned Engineers are able to work across the stack to understand and contribute to dependencies outside of their main focus because it's easier to understand the codebase and work across languages/frameworks.

AI in development has proven useful for some things, but thoughtful architecture with skilled personnel driving always seems to get the best results. Our vision for our product is the same: we want it to be a force multiplier for skilled Platform Engineers.


We may just get this, along with a $7.25 per hour base wage!


It's $2.13 for tipped employees.


Not in all states. California does not have a lower tipped minimum wage. It's at least $16 here last I checked (except $20 for fast food because "reasons")


Ah yes; for a bit my wife was making less as a preschool teacher than the minimum wage at McDonald's. I understand it caused a bit of turnover at the local public schools, since cafeteria workers and aides were making less than $20/hr in 2024 as well (I don't know if they still are).


Reminds me of when I was visiting family a few years ago in Kentucky. I kept seeing tons of ads everywhere about hiring for plumbers, and some warehouse roles.

The listed salaries were not that far off from what even the local McDonald’s was paying


There was an (at least local) McDonald's wage inversion just after Covid; they were paying $20/hr, which was competitive with or better than the local factories.

It's one of those "reset" things you need to do now and then, because it's really easy for a field like CNA work or similar to end up paying less than the gas station for more annoying work.


So it does make good sense for a CNA or a preschool teacher to be paid quite a bit more than a gas station attendant or fry cook, due to the much higher responsibility level and, like you said, the annoyance.

However, I don't think anything that's happened in the last 5 years has helped that. If anything, the inflation has cost everyone dearly, but if I put 20% of my income into stocks I am less impacted than poor people who put 100% of their income into goods and services whose prices have gone up as a result of everyone's wages rising.


The 2.13 tipped wage is a great way to know if you're in a "shithole" state or not. Only shithole states keep that.


Same goes for the regular $7.25 federal minimum.


let’s not get too lofty, the federal base tipped wage is $2.13


gotta charge less than the hourly cost of a B200 to remain employed


The reason we are not seeing this in mainstream software may also be due to cost. Paying for tokens on every interaction means paying to use the app. Upfront development may actually be cheaper, but the incremental cost per interaction could add up to much more in the long term, especially if the software is used frequently and has a long lifetime.

As the cost of tokens goes down, or commodity hardware can handle running models capable of driving these interactions, we may start to see these UIs emerge.


Oh yeah, I was 100% thinking in terms of local models.


Been using gRPC with JSON transcoding to REST on a greenfield project. All auto-generated clients across 3 languages. Added a frontend wrapper to pre-flight auth requests so it can dynamically display what users are allowed to do.
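For context, the transcoding is driven by google.api.http annotations on the RPCs, roughly like this (service and route names invented for illustration):

    syntax = "proto3";

    import "google/api/annotations.proto";

    service UserService {
      // gRPC clients call GetUser natively; the transcoder also exposes
      // it as GET /v1/users/{id} for plain REST/JSON callers.
      rpc GetUser(GetUserRequest) returns (User) {
        option (google.api.http) = {
          get: "/v1/users/{id}"
        };
      }
    }

    message GetUserRequest {
      string id = 1;
    }

    message User {
      string id = 1;
      string name = 2;
    }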

Claude Code has been an absolute beast when I tell it to study examples of existing APIs and create new ones, without bringing any of the generated code into context.


I agree. My experience is that regularly scheduled 1:1s without an agenda seem to turn into therapy sessions for a surprising number of people. I like doing ad-hoc 1:1s with specific agendas though, such as pair programming or an architecture session for an issue an Engineer is starting to work on.


Yes. Any language that dynamically links to the OS crypto library (like OpenSSL) is more attractive because your Government customer can install your software on their OS with their FIPS compliant OS crypto library.

This moves the needle for Go but you still need to cut a FIPS version of your software since this crypto is still statically linked. I like this option quite a bit if the Government customers get on board with it.

There are some Go forks, maintained by Microsoft and Red Hat I believe, that do dynamic linking for crypto, which requires cgo.


> This moves the needle for Go

To clarify, you could previously link to an external SSL library (typically something like BoringSSL) and people did do that. However, it makes cross compilation a pain compared to native Go.


The same way they handle it when a vandal does it to a random car at a Walmart parking lot. Nothing at all until one day the manufacturers put sentry mode on every car instead of just Teslas.


Amen! NATS is how we do AI streaming! JetStream subject per thread with an ordered consumer on the client.
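For the curious, a rough sketch of the client side using the nats npm package (the subject name and render() are invented for illustration; assumes a JetStream stream already captures threads.>):

    import { connect, consumerOpts, StringCodec } from "nats";

    const nc = await connect({ servers: "localhost:4222" });
    const js = nc.jetstream();
    const sc = StringCodec();

    // Each AI conversation gets its own subject, e.g. threads.<threadId>;
    // the server publishes every streamed chunk as a JetStream message.
    // The client replays the thread with an ordered consumer: ephemeral,
    // in-order, and resumable, so reconnects pick up where they left off.
    const opts = consumerOpts();
    opts.orderedConsumer();
    const sub = await js.subscribe("threads.abc123", opts);
    for await (const m of sub) {
      render(sc.decode(m.data)); // render() = whatever appends to the UI
    }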

