I agree with the sentiment in theory but disagree in practice.
The way I typically look to segment and price things is by billing based on organizational complexity rather than gating end-user features whenever possible. If something is a specific need for a large org, it should be a higher tier, since those organizations typically have a larger ability to pay. If it's something that a single seat user would want if they were an expert, I'd rather not tier on that - it basically would be shitting on your largest segment of superusers / fans / influencers for most B2C apps.
Put a different way, if I were subscribing to MS Paint, I'd rather have to pay more for SAML/SCIM provisioning than to pay for the number of particles the spray paint tool can output at once. One limits orgs, the other limits users. You should never limit users without reason.
> Single sign-on (SSO) is a mechanism for outsourcing the authentication for your website (or other product) to a third party identity provider, such as Google, Okta, Entra ID (Azure AD), PingFederate, etc.
Or the IdP is administered by the enterprise's own IT operation.
The outsourcing of your security to (and also consequently leaking information to) a third party IdP is a fairly new phenomenon in 'security'.
Someone must have paid a lot of money to promote that idea.
Yeah, that's a nice touch. Though the reading experience is also important. The tool is obviously super powerful, so a somewhat cluttered UI is probably unavoidable.
Just because people use it doesn't mean they want to use it. We're in a bubble here and most people are pretty tech illiterate. Most people don't even know there are other options.
Besides, it also misses a lot, like how many people use Google Docs, probably the only alternative the average person is aware of. But in the scientific/academic community nearly everyone uses LaTeX. They might moan and complain, but there's a reason they use it over Word, and besides, TeX isn't 2.5GB...
Cybersecurity researcher Jeremiah Fowler discovered and reported to Website Planet an unencrypted and non-password-protected database that contained 957,434 records. The database belongs to an Ohio-based organization that helps individuals obtain physician-certified medical marijuana cards. The database held PII, driver's licenses, medical records, documents containing SSNs, and other potentially sensitive internal information.
So, the absolute bare minimum was not followed. Just a wide-open database containing medical information.
Now I'm more confused. An infinitely efficient system would saturate the network. An infinitely inefficient system would saturate the CPU. Your bullet point is valid for an infinitely inefficient system.
The metric that actually matters is efficiency at the task, given a hardware constraint. In this context, that's entirely network throughput (streaming ability per unit of hardware; with hardware held constant, you can just compare streaming ability directly).
For a litmus test of the concept, if you rewrote this in C or Rust, would the CPU bottleneck earlier or later? Would the network throughput go up or down?
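To make that concrete, here's a back-of-the-envelope sketch (all numbers are my own assumptions for illustration, not measurements from the system under discussion):

```typescript
// Assumed hardware: a 10 Gbit/s NIC, streaming in 1 MiB chunks.
const nicGbps = 10;
const wireBytesPerSec = (nicGbps / 8) * 1e9; // 1.25 GB/s on the wire
const chunkBytes = 1024 * 1024;

// To saturate the network, the CPU must prepare this many chunks per second:
const chunksPerSec = wireBytesPerSec / chunkBytes; // ~1192

// If the current implementation already hits this rate with CPU to spare,
// it is network-bound: a C/Rust rewrite would lower CPU usage but could not
// raise throughput. If it can't hit this rate, it is CPU-bound and a rewrite
// would move the bottleneck toward the network.
console.log(`need ~${chunksPerSec.toFixed(0)} chunks/s to saturate the NIC`);
```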
Loading one page is probably faster than loading a template and only then loading the data with a second request, given that network latency can be pretty high. That's why Google serves (served?) its main page as a single file and not as multiple HTML/CSS/JS files.
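A sketch of the difference (the endpoint and element names are invented for illustration): with a 200 ms RTT, the template-then-fetch approach pays the round trip twice before anything useful renders, while inlining pays it once.

```typescript
// Approach A: load the template first, then fetch data in a second request.
async function renderWithSecondRequest(): Promise<void> {
  const res = await fetch("/api/data"); // a second full round trip
  const data: { message: string } = await res.json();
  document.querySelector("#content")!.textContent = data.message;
}

// Approach B: the server inlines the data into the single HTML response,
// e.g. <script id="data" type="application/json">{"message":"hi"}</script>
function renderFromInlinedData(): void {
  const el = document.querySelector("#data")!;
  const data: { message: string } = JSON.parse(el.textContent ?? "{}");
  document.querySelector("#content")!.textContent = data.message;
}
```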
Yep, I used to deal with this at $LastJob, and the support burden was terrible.
Azure AD/Entra ID (Microsoft's IdP) was the most common, and the number of IT folks who don't have a clue about it is staggering.
Companies kicking issues over to us when it's their problem. "Hey, we have a ticket saying MFA Required but the account shows as Entra ID." "Send it back and tell them to contact their IT team." "Their IT team opened the ticket." *rage screaming*
Companies not following setup instructions. I used to provide Terraform, PowerShell, and graphical setup instructions. I can count on one hand how many people used Terraform/PowerShell. This was always dicey, because I got familiar with the error messages and could tell at a glance: "Yep, this was not set up right on their end." I had 4 phone calls with $CustomerIT swearing it was set up properly, and stopped attending after that. Finally they got someone with a brain to review it and finish the setup.
Documentation would fall out of date because of some UI change and I'd spend a day reviewing it and updating it.
I would couple the experiment and the theory together and treat them both as deserving of the prize, but I'm not sure how that works in practice. As for the general technique of ML, sure, it's important, but it seems to me that it's a tool that can be used in physics, and the specific implementation/use case is the actual thing that's noteworthy, not the general tool. I wouldn't consider a new mathematical theorem by itself to be physics and deserving of a physics prize; I view general ML the same way.
Pretty neat. I use Emacs a lot, and also do quite a bit of video trimming. For people wondering "why Emacs?", here’s the use case: trimming video is mostly about writing down start/end times, sometimes with a note. That’s all text.
If you can turn that text directly into clips without switching to a separate video editor, it’s surprisingly efficient. Of course, this only makes sense if you already live in Emacs; then it reduces context switching and helps keep you in the flow. If you don’t, it just looks odd. But it’s not about making a meme out of "doing everything in Emacs" - it’s just a small tool that fits the workflow of people who are already in that environment.
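The core of such a workflow doesn't even need Emacs; a minimal sketch (the file names and cut-list format are my own invention, and it assumes ffmpeg is on PATH) could look like:

```typescript
// cuts.txt, one clip per line: "START END NAME", e.g.
//   00:01:30 00:02:10 intro
//   00:05:00 00:06:45 demo
import { readFileSync } from "node:fs";
import { execFileSync } from "node:child_process";

const lines = readFileSync("cuts.txt", "utf8")
  .split("\n")
  .filter((l) => l.trim().length > 0);

for (const line of lines) {
  const [start, end, name] = line.trim().split(/\s+/);
  // Stream copy (-c copy) avoids re-encoding; cuts snap to keyframes,
  // which is usually fine for rough trimming.
  execFileSync("ffmpeg", [
    "-i", "input.mp4",
    "-ss", start,
    "-to", end,
    "-c", "copy",
    `${name}.mp4`,
  ]);
}
```

Presumably the Emacs tool does something equivalent, with the cut list living in a buffer instead of a file.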
I'm an "enterprise user" of Obsidian, but all I use it for is work-related note taking. My company shows up on that page because I get them to pay for my commerical license. Outside of that it isn't an official internal tool. I don't use it to work on projects together with my teammates, for example.
A lot of very old, SPA-like heavy applications use XSLT: basically, enterprise web applications (not websites) that predate fetch and REST, and that targeted or still target Internet Explorer 5/6.
There was a time where the standard way to build a highly interactive SPA was using SOAP services on the backend combined with iframes on the front end that executed XSLT in the background to update the DOM.
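For those who never saw it, the client-side half of that pattern looked roughly like this (the markup and data are invented for illustration; XSLTProcessor is the real, still-shipping browser API):

```typescript
// Parse an XML payload (back then, typically a SOAP response)
// and an XSLT stylesheet, then transform XML -> DOM in the browser.
const xml = new DOMParser().parseFromString(
  `<orders><order id="1">Widget</order></orders>`,
  "application/xml",
);

const xsl = new DOMParser().parseFromString(
  `<xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns="http://www.w3.org/1999/xhtml">
    <xsl:template match="/orders">
      <ul><xsl:apply-templates/></ul>
    </xsl:template>
    <xsl:template match="order">
      <li><xsl:value-of select="."/></li>
    </xsl:template>
  </xsl:stylesheet>`,
  "application/xml",
);

const proc = new XSLTProcessor();
proc.importStylesheet(xsl);
// Append the transformed fragment to the page (a real app would
// replace the contents of some container element instead).
document.body.appendChild(proc.transformToFragment(xml, document));
```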
Obviously such an approach is extremely out of date and you won't find it on any websites you use. But, a lot of critical enterprise software was built this way and is kind of stuck like this.
Yes, it would be better to work on alternatives, and I have done some of these things. However, that won't fix the WWW (or Chrome, or Google); it just means there is an alternative (which is still a good thing to have, though).
> Too many domains which are incorrectly configured leading to non-existing domain errors.
That's an interesting and somewhat surprising data point given the use of DNSSEC validation at public resolvers (e.g., 1.1.1.1, 8.8.8.8, etc.). Might be something that would be useful to track by those following DNSSEC deployment.
For selectively disabling DNSSEC validation, I gather Pi-hole+dnsmasq doesn't support Negative Trust Anchors (NTAs). Unfortunate.
Besides the fact that this was clearly a security f*ckup, in my mind it's almost equivalent to running those third-party linters in our Internet-connection-enabled editors and IDEs. Other than on one banking project, I don't think I ever had to sandbox my editor in any way.
Wikipedia has a section on this, which I found interesting:
> The standard pluralised form of octopus in English is octopuses; the Ancient Greek plural ὀκτώποδες, octopodes, has also been used historically. The alternative plural octopi is usually considered etymologically incorrect because it wrongly assumes that octopus is a Latin second-declension -us noun or adjective when, in either Greek or Latin, it is a third-declension noun.
I actually found that particular response to be quite disappointing. It should give pause to those advocating removal of XSLT that these three totally disparate use cases could already be gracefully handled by a single technology which is:
* side effect free (a pure data to data transformation)
* stable, from a spec perspective, for decades
* completely client-side
Isn't this basically an A+ report card for any attempt at making a powerful general tool? The fact that the suggested solution in the absence of XSLT is to toil away at implementing application-specific solutions forever really feels like working in the wrong direction.
You're talking about one person. All the politicians I've seen are wicked. I've said often enough that they should be replaced with people of godly character. Until then, working with whoever is in the race (e.g., Harris vs Trump), you'd have to vote for nobody by that standard. If voting on policy, we look at the policies rather than the politician.
Progressives have been, at an institutional level, censoring the Gospel as hate speech or harassment, mocking God in media, promoting idolatry/universalism, pushing fornication, promoting child murder (abortion) even financially, pushing LGBT even in elementary school, and, recently, systematically discriminating against entire groups. They also defend Palestine over Israel when they have to pick a side. They also export sexual immorality to other countries via media and political deals, which is exactly what Revelation warns about in Rev. 17:2.
The Old Testament shows these same traits... especially idolatry, child murder, and ditching Biblical marriage for perversion... being common threads for the destruction of nations. That Progressives promote these on a policy level, but mock and fight God's design and the Gospel, means we have a clear choice. One party, who is merely pandering, will at least let us share Christ, protect babies from murder, and reverse other damaging trends. Those trends are happening now but didn't under Biden/Harris or Obama.
With server-side rendering you control the amount of compute you are providing; with client-side rendering you control nothing, and if the app is dog slow on some devices, there's nothing you can do about it.
The web has grown a thousandfold over those decades, in spite of no support for XSLT. No browser has failed (or gained market traction) by missing support for (or adding more support for) XSLT. It's an irrelevancy, even if you did like it once.
Lots of content was lost when Flash was removed as well - much, much more than the amount of content that will be lost if XSLT is removed. And yet the web continued.