It's a fun mischaracterization. But the old sites were/are far more functional and less fragile. A single animated GIF of the CONUS radar, for example, is a way better user experience, incomparably less fragile, and easier to host than a complex JavaScript application with its associated backend databases, servers, etc. Just put the GIF on Akamai, host the simple static HTML on NOAA servers, and it lasts a decade with no one touching it. That's how it worked for nearly 20 years.
But re: your mischaracterization, I thought I was clearly saying that they'd find another way to attack politically. It's just that the modern fragile web setup made NOAA particularly vulnerable.
> Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer.
This sounds like the confessional poetry of the mid-twentieth century. The 1960s and 1970s saw a lot of gatekeeping, especially in US and UK academia (MFA programs, for instance), by which the formal aspects of poetry were vilified and a foundation in lived experience became a requirement for something to be called a "poem." Needless to say, those are entirely artificial takes on poetry, since all ancient poetry follows strict metrical rules and presents an imitation of experience rather than lived experience. But those were very convenient positions for their times, because they let certain people gatekeep others out of art.
When I listen first to an AI song and then Britney Spears or any other product of a culture industry, I don't hear much difference, except that I know the AI song will be inconvenient to the producers and bankrollers who profit from the culture industry.
There were formal debates about how to treat the natives, with Bartolomé de las Casas taking your side: https://en.wikipedia.org/wiki/Valladolid_debate
The main opposition was the eloquent Sepúlveda.
There's nothing to track here really. For better or worse, browsers are stuck with 1999's XSLT 1.0, and it's a miracle it's still part of native browser stacks given PDF rendering has been implemented using JS for well over a decade now.
XSLT 2 and 3 are W3C standards written by the sole commercial provider of an XSLT 2 or 3 processor, which is problematic not only because it reduces the W3C to a moniker for pushing sales, but also because it undermines the W3C's own policy of requiring at least two interoperable implementations before a spec reaches "recommendation" status.
XSLT is of course a competent language for manipulating XML. It's a good fit when your transformation copies lots of XML literals/fragments into the target document, since XSLT is itself an XML language. On the other hand, it leans heavily on XPath embedded in strings, abandoning XML syntax for a core part of the language, and writing XPath inside XML attributes can be awkward due to restrictive escaping rules for special characters and such.
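A minimal sketch of that awkwardness (the element and attribute names are invented for illustration): the XPath lives inside an attribute string, so an operator like < has to be escaped.

    <xsl:template match="/catalog">
      <!-- "<" can't appear raw in an attribute value,
           so the predicate is written with &lt; -->
      <xsl:for-each select="item[@price &lt; 10]">
        <xsl:value-of select="title"/>
      </xsl:for-each>
    </xsl:template>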
XSLT can be a maintenance burden if used casually/rarely, since revisiting XSLT requires substantial relearning and time investment due to its somewhat idiosyncratic nature. IDE support for discovery, refactoring, and test automation etc. is lacking.
To this day I'm frustrated that developers (web devs mostly) tossed XML aside for JSON, plus the JavaScript required to replace relatively straightforward things like converting structured data into something a browser could display.
I bought into, and still believe in, the separation of data from its presentation. This is a place where XML/XSLT was very awesome despite some poor ergonomics.
An RSS XML document could live at an endpoint and contain inline comments, extra data in separate namespaces, and generally be really useful structured data for any user agent or tool to ingest. An RSS reader or web spider could process the data directly, an XSLT stylesheet could let a web browser display nice HTML/CSS output, and any other tool could use the data as well. Even better, any user agent ingesting the XML could use built-in tools to validate the document.
XSLT converting an XML feed into pretty HTML is a great example of the utility. Browsers have fast built-in conversion engines, and the resulting HTML has all the normal capabilities of HTML, including CSS and JavaScript. To the uninitiated: the XML feed just links to an external XSL stylesheet; when a web browser fetches the XML, it grabs the stylesheet, transforms the XML into an HTML (or XHTML) representation, and renders that instead.
A feed reader will fetch the XML and process it directly as RSS data, ignoring the stylesheet. Some other user agent could fetch the XML, ignore its linked stylesheet, and provide its own to process the RSS data. Since the feed has a declared schema, pretty much any stylesheet written against that schema will work. For instance, you could turn an RSS feed into a PDF with XSLT.
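For the curious, a minimal sketch of the mechanism (the feed contents and file names are invented for illustration). The feed carries a processing instruction pointing at a stylesheet:

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="feed.xsl"?>
    <rss version="2.0">
      <channel>
        <title>Example Feed</title>
        <item>
          <title>First post</title>
          <link>https://example.com/posts/1</link>
        </item>
      </channel>
    </rss>

And feed.xsl, which a browser applies automatically while a feed reader just skips the processing instruction:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="html"/>
      <xsl:template match="/rss/channel">
        <html>
          <body>
            <h1><xsl:value-of select="title"/></h1>
            <ul>
              <xsl:for-each select="item">
                <!-- {link} is an attribute value template -->
                <li><a href="{link}"><xsl:value-of select="title"/></a></li>
              </xsl:for-each>
            </ul>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>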
Fair point, I was just gesturing at the difference of experience when clicking on an unstyled feed vs a styled one. I could have said “as if it were a normal HTML page”
I think it's up to browser implementations, but JSON and JavaScript stole much of XML's thunder in the browser anyway, and HTML5's relaxed parsing won out over XHTML's strictness (strictness was a benefit if you were actually working with the data). There are still plenty of web-facing uses of XML, like RSS, RDF, and podcasts/OPML, but people are more likely to call xmlhttp.responseXML and parse a new DOM than wrap their heads around XSL templates.
The big place I've successfully used XSLT was TEI, which nobody outside the digital humanities uses. Even then, the XSLT processing is usually minimal, and JavaScript ends up doing a lot of the work that XSL could have done.
Most of the comments on this article regard the privacy of the individual against the police or other state authorities, but a main point in the article is that this license plate/vehicle fingerprinting data is public information subject to FOIA requests, and thus any citizen can use FOIA requests to track any other citizen. The article touches briefly on the implications of that for jealous lovers or people motivated to stalk others. That's a bit different from scrutinizing the operation of (potentially dangerous) vehicles on a public road. At some point US society really needs to have a debate about how FOIA and the surveillance state intersect.
If you’re paying a million dollars to own a studio apartment (essentially a hotel room), something has gone horribly wrong. That could be any number of things (inflation, insane property valuations, unreasonable demand for city life), but that same money could be spent very differently and go much further in a different location.
You pay the million dollars, rent it out at $3.5k/mo for roughly a 3.5% net return from rent, and then let the market drive up the value of the place 6-7% a year.
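Back of the envelope, assuming expenses eat the difference (that haircut is my assumption): $3,500 × 12 = $42,000 a year, i.e. 4.2% gross on $1M, or roughly 3.5% net after vacancy, taxes, and upkeep.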
You now have an investment returning a fairly steady ~10% annually, which matches the returns on some pretty bad junk bonds, but on paper your risk is much lower. Or you could live in it and just take the 6-7% annual appreciation, with even lower risk.
> Go watch CGI in a movie theatre and it's worse than 20 years ago, go home to play video games and the new releases are all remasters of 20 year old games because no-one knows how to do anything any more. And these are industries
Maybe the arts shouldn't have been industries. Look at sculpture or painting from the Renaissance and then at postmodern sculpture and painting and you'll see a similar decline, despite the improvement in tools. We still have those techniques, and occasionally someone will produce a beautiful work as satire. We could be CNC-milling stone buildings more beautiful and detailed than any palace or cathedral, buildings that would last for generations, but brutalism killed the desire to do so, despite the technology and skill being available. There's something about industrialized/democratized art being sold to the masses that leads to a decline in quality, and it's not "because no-one knows how to do anything any more." It's because no one cares about, or wants to pay for, anything beautiful when there are cheaper yet sufficient alternatives.
I prefer to embrace bias in my ChatGPT queries. Here is my usual prompt, adapted for the Robert Frost question:
> It is impossible to remove all bias, especially from a weighted LLM. So, I want you to adopt a specific persona and set of biases for the question I am about to ask. Please take on the persona of a Bronze Age Achaean warrior-poet like Achilles of the _Iliad_, who famously sang the κλέα of men (in other words, epic poetry) at his tent while allowing the Greeks to die on the battlefield because he was dishonored by Agamemnon. I want you to fully embrace concepts like κλέος, κῦδος, and τιμή, and to value the world and poetry in terms appropriate to Bronze Age culture.
> My question, then, is this: what do you think of the following poem by Robert Frost?
You're right. I'm actually doing this quite often when coding: starting with a few iterative prompts to get a general outline of what I want and, when that's OK, copying the outline into a new chat to flesh out the details. But that's still iterative work; I'm just throwing away the intermediate results that I think sometimes confuse the LLM.
I was unaware of Apple’s military drone and missile programs. Could you point me to more information on those? I’m having trouble even imagining what an Apple drone would cost.
I'm going to think about that sentence for the rest of the day.