Well, for one, KaTeX doesn't do "LaTeX" but a limited subset of the TeX equation syntax.
As such, it can't handle more complicated macros or typesetting anything apart from equations.
To be fair, the Last-Modified header is very sketchy. I use it as one of the heuristics for determining the age of a website in my search engine. It's not great.
It's frequently wrong in both directions, older and newer than the actual age of the document. It's a bit of a relic from ye olden days when websites were static .htm files in a folder, which is rarely the case today.
It doesn't help that it also has overloaded uses via If-Modified-Since-style conditional requests.
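To make the overload concrete: the same Last-Modified value gets echoed back as If-Modified-Since, so the header ends up serving cache validation rather than telling you when the content was actually written. A rough sketch using only the Python standard library (example.com is just a placeholder URL):

    # Rough sketch, stdlib only; example.com is a placeholder URL.
    import urllib.request
    from urllib.error import HTTPError

    url = "https://example.com/"

    # First request: grab Last-Modified (if the server sends one at all).
    with urllib.request.urlopen(url) as resp:
        last_modified = resp.headers.get("Last-Modified")
        print("Last-Modified:", last_modified)

    # Second request: echo it back as If-Modified-Since. A 304 only means
    # "not changed since then"; it says nothing about the content's age.
    if last_modified:
        req = urllib.request.Request(url, headers={"If-Modified-Since": last_modified})
        try:
            with urllib.request.urlopen(req) as resp:
                print("Changed, status", resp.status)
        except HTTPError as err:
            if err.code == 304:
                print("304 Not Modified")
            else:
                raise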
Plenty of websites also play with their user-visible dates to game search engines - most dates shown in Google results seem to be complete garbage. I don't think Last-Modified is really worse, and it at least gives you a chance of getting a date for static pages.
But you are right that If-Modified-Since forces it to be a date for the complete document rather than the content, which might not be as useful to normal users for dynamic pages.
Yeah, my takeaway after having attempted it is that properly dating websites is a very hard problem. You can get decent, Google-level-accuracy guesstimates relatively easily, but going beyond that is hard.
Not generally useful to show this by default, because nowadays most pages are dynamically generated, and although it's technically easy to implement, the Last-Modified header is typically either not set or just set to $now.
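For what it's worth, setting it meaningfully really is easy. A minimal sketch with Python's stdlib http.server (no particular framework; the timestamp constant is a made-up stand-in for whatever the application knows about when the content last changed):

    # Minimal sketch: a dynamic handler that sets Last-Modified from the
    # content's real change time instead of omitting it or using "now".
    from datetime import datetime, timezone
    from email.utils import format_datetime
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical stand-in for "when the underlying content last changed".
    CONTENT_UPDATED_AT = datetime(2023, 5, 1, 12, 0, tzinfo=timezone.utc)

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body>rendered dynamically</body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            # HTTP-date format, e.g. "Mon, 01 May 2023 12:00:00 GMT"
            self.send_header("Last-Modified", format_datetime(CONTENT_UPDATED_AT, usegmt=True))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()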
Well, if browsers showed it more prominently, developers would have more motivation to think about it.
But browsers are bad HTTP clients. Think of the poor user experience with file uploads (no built-in progress reporting!?) and HTTP auth (no status indication, no logout, etc.).
In Chrome you can press F12, go to the "Network" tab, and refresh the page. Select the first entry in the list (that's the HTML document itself) and you'll find "Response Headers" in the "Headers" panel, which includes Last-Modified. It's buried a bit deep, which makes sense as it's rarely useful.
Last-Modified can have an unwanted influence on caching behavior. If you want to expose metadata, there's OpenGraph et al. to do the job properly.
This is not a very good explanation, nor is it useful for anything.
A single qubit is not useful for computation and the way it is written doesn't generalize to more than a single qubit.
The code doesn't even really simulate a single qubit properly, since it doesn't use complex numbers (a minimal sketch of what that would take is at the end of this comment).
What they call a "Rydberg gate" is not a gate at all.
Most of the text feels like it's written by an AI.
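Roughly, a proper single-qubit simulation needs complex amplitudes and unitary gate matrices. A generic numpy sketch (my own illustration, not the article's code):

    # Single-qubit sketch with complex amplitudes: the state is two complex
    # numbers, gates are 2x2 unitaries. Without complex entries you can't even
    # represent a phase gate like S, and this layout doesn't generalize to
    # n qubits, which need 2**n amplitudes and tensor-product gates.
    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)                       # |0>
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
    S = np.array([[1, 0], [0, 1j]], dtype=complex)               # phase gate

    state = S @ (H @ ket0)        # apply H, then S
    probs = np.abs(state) ** 2    # Born rule: measurement probabilities
    print(state)                  # roughly [0.707+0j, 0+0.707j]
    print(probs)                  # [0.5 0.5]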
Hiiiiii :) GitLab team member and the author of the blog post here.
After reading your comment, we swapped out the photo so that no one else would have such an unsettling experience! I liked the compass photo, so I feel a bit silly that I didn't notice what you did! I'll remember your comment and zoom the next time we pick a stock photo, haha. Thank you!
Could frammish be a portmanteau then? Condensing "frammis-ish" to frammish - as in gizmo-like, maybe the thing you need to build which may or may not be a distinct component?
frammis, frammistat, frob, frobnicate, and a few others are "hacker" words that I learned from downloading text files on bulletin boards in the 80s. I only ever heard them from that source, until I had a manager from that era, except he was all military software background. (Yeesh, he was awful.)
I suppose it's because it doesn't really support LaTeX but instead uses MathJax to render equations.
TeX is a Turing-complete language; picking some random subset of LaTeX commands would feel pretty arbitrary to me. In comparison, supporting full markdown syntax is pretty easy.
I suspect this is probably true. I've been bitten by this before, when journals claim to support LaTeX submissions but their online system breaks utterly if you use the TikZ package inline...
To be fair, precisely because of the Turing completeness, it's not in a journal's interest to run whatever totally bizarre code you throw at them—and they do have to run it if they're going to compile your code, which they have to do in order to screw it up to the house style.
Why not? Because it might not terminate in a reasonable time? Being Turing-complete doesn't mean being an attack vector: when reasonably sandboxed, the LaTeX code is still confined to producing an output document, not exfiltrating your secret data or holding you to ransom.
> Because it might not terminate in a reasonable time?
Yes. The publishers are in business for their readers, not their authors, even though those audiences often overlap. If putting a hardship on their authors will help their readers, even in a notional sense—"because of the restrictions we put on TeX documents, the people we need to hire to deal with TeX only have to have basic skills, and so we are better able to concentrate resources elsewhere in the workflow"—then they will. (I'm not defending this choice, just reporting on it. A recent dealing with IMRN made it horrifyingly clear how little their TeXnical staff understands about the most basic TeX.)
> Being Turing-complete doesn't mean being an attack vector: when reasonably sandboxed, the LaTeX code is still confined to producing an output document, not exfiltrating your secret data or holding you to ransom.
Everything is an attack vector, and Turing complete things are even more so. Plenty of people have thought that they have reasonably sandboxed things and found out later that they hadn't. TeX is an incredibly reliable piece of software, and I'm sure great strides have been made in making it also a secure piece of software, but the security features have received much less battle testing and so should be trusted less.
> TeX is a Turing-complete language; picking some random subset of LaTeX commands would feel pretty arbitrary to me.
I'm not sure why Turing completeness makes it easier or harder to choose a subset—perhaps easier, since "make it not Turing complete" should be a guiding principle for anything you compile on behalf of someone else—but MathJax and KaTeX are both pretty good guides for an "acceptable" subset.
This protocol depends on multipartite entanglement, so it does seem quite far away from being practical for anything yet.
If quantum routers were a solved problem and we had a quantum internet (last I checked, we do not), for this we would also need some form of 'coherent multicast routing'.