I think DC has the start of a really good system here. There's universal Pre-K 3 and 4. Most elementary schools offer it, but you can also get a large subsidy to go towards a private daycare. I'd love to see that expanded to all ages. Day cares here are super heavily regulated (and thus expensive) and apparently the paperwork is a nightmare for the day care, but in practice it's super easy for parents.
I see all these comments in the vein of 'why should you force people to work in the mines and not get to love their child' and I wonder if any of these people have ever had toddlers. I love my kid, and love spending time with her. But she really likes daycare (and now school). Not only does she get better socialization than me taking her to the park for 2 hours, but she learns skills that I wouldn't be consistent about teaching. It turns out, being taught by people who have years of practice and degrees in childcare is a pretty good idea!
We did Pre-K 4 at our public school and you could immediately tell the difference between the daycare kids, the nanny kids, and the home-parent kids. The daycare kids were much more prepared and able to cope, and this is at a school where parental involvement was quite high. I don't think the different approaches are universally better or worse, but it's clear to me that the quality of the daycare and the parent matters a lot more than which one you choose.
Aside: Sandvik Coromant is a major industrial conglomerate with billions in revenue. Their cutting tools almost certainly made some things you use daily. Not that it has any bearing on them having a TLD, but they’re not some random local metal shop, heh.
I really vehemently disagree with the 'feedforward, tolerance, feedback' pattern.
Protocols and standards like HTML built around "be liberal with what you accept" have turned out to be a real nightmare. Best-guessing the intent of your caller is a path to subtle bugs and behavior that's difficult to reason about.
If the LLM isn't doing a good job calling your API, then make the LLM smarter or rebuild the API; don't make the API looser.
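A minimal sketch of that strict approach (the ToolCall shape and function name are made up for illustration): validate what the LLM sent, and if it's malformed, fail loudly so the caller has to correct itself instead of the API guessing.

  // Hypothetical TypeScript sketch: strict handling of a tool-call payload.
  type ToolCall = { name: string; args: Record<string, unknown> };

  function parseToolCall(raw: string): ToolCall {
    const data = JSON.parse(raw); // throws on malformed JSON -- no best-guessing
    if (typeof data.name !== "string" || typeof data.args !== "object" || data.args === null) {
      throw new Error("invalid tool call"); // force the caller (the LLM) to retry correctly
    }
    return { name: data.name, args: data.args };
  }

  // The lenient alternative would wrap bare strings, invent default args, etc.
  // Every such guess is a spot where behaviour silently diverges from intent.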
I'm not sure it's possible to have a technology that's user-facing with multiple competing implementations, and not also, in some way, "liberal in what it accepts".
Back when XHTML was somewhat hype and there were sites which actually used it, I recall being met with a big fat "XML parse error" page on occasion. If XHTML really took off (as in a significant majority of web pages were XHTML), those XML parse error pages would become way more common, simply because developers sometimes write bugs and many websites are server-generated with dynamic content. I'm 100% convinced that some browser would decide to implement special rules in their XML parser to try to recover from errors. And then, that browser would have a significant advantage in the market; users would start to notice, "sites which give me an XML Parse Error in Firefox work well in Chrome, so I'll switch to Chrome". And there you have the exact same problem as HTML, even though the standard itself is strict.
The magical thing about HTML is that they managed to make a standard, HTML 5, which incorporates most of the special case rules as implemented by browsers. As such, all browsers would be lenient, but they'd all be lenient in the same way. A strict standard which mandates e.g "the document MUST be valid XML" results in implementations which are lenient, but they're lenient in different ways.
HTML should arguably have been specified to be lenient from the start. Making a lenient standard from scratch is probably easier than trying to standardize commonalities between many differently-lenient implementations of a strict standard like what HTML had to do.
Are you aware of HTML 5? Fun fact about it: there's zero leniency in it. Instead, it specifies a precise semantics (in terms of parse tree) for every byte sequence. Your parser either produces correct output or is wrong. This is the logical end point of being lenient in what you accept - eventually you just standardize everything so there is no room for an implementation to differ on.
The only difference between that and not being lenient in the first place is a whole lot more complex logic in the specification.
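You can watch both behaviours with the standard DOMParser API in a browser (a small sketch; the output comments assume a spec-conformant browser):

  const broken = "<p>unclosed <b>tag";

  // HTML5 defines exactly which tree this byte sequence produces,
  // so every conforming browser builds the same one.
  const asHtml = new DOMParser().parseFromString(broken, "text/html");
  console.log(asHtml.body.innerHTML); // "<p>unclosed <b>tag</b></p>"

  // XML parsing is fatal: the result is a document describing the error.
  const asXml = new DOMParser().parseFromString(broken, "application/xhtml+xml");
  console.log(asXml.getElementsByTagName("parsererror").length > 0); // true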
History has gone the way it went & we have HTML now, there's not much point harking back, but I still find it very odd that people today - with the wisdom of hindsight - believe that the world opting for HTML & abandoning XHTML was the sensible choice. It seems odd to me that it's not seen as one of those "worse winning out" stories in the history of technology, like Betamax.
The main argument about XHTML not being "lenient" always centred around client UX of error display - Chrome even went on to actually implement a user-friendly partial-parse/partial-render handling of XHTML files that literally solved everyone's complaints via UI design, without any spec changes, but by that stage it was already too late.
The whole story of why we went with HTML is somewhat hilarious: one guy wrote an ill-informed blog post bitching about XHTML, generated a lot of hype, made zero concrete proposals to solve its problems, & then somehow convinced major browser makers (his current & former employers) to form an undemocratic rival group to the W3C, in which he was appointed dictator. An absolutely bizarre story for the ages, I do wish it was documented better but alas most of the resources around it were random dev blogs that link-rotted.
Is that really the story? I think it was more like "backward-compatible solution won out over the purer, theoretically better solution".
There's an enormous amount of non-XHTML legacy that nobody wanted to port. And tooling back in the day didn't make it easy to write correct XHTML.
Also, like it or not, HTML is still written by humans sometimes, and they don't like the parser blowing up because of a minor problem. Especially since such problems are often detected late, and a page which displays slightly wrong is a much better outcome than the page blowing up.
More or less?
https://en.m.wikipedia.org/wiki/WHATWG is fairly neutral. As someone in userland at the time on the other side of it, it was all a bit nuts.
I.e. we got new standards invented out of thin air - https://developer.mozilla.org/en-US/docs/Web/HTML/Guides/Mic... - which ignored what hundreds had worked on before, and which seemed to be driven by one person controlling the "standard", making it up as they went along.
Microformats and RDFa were the more widely adopted solutions at the time, had a lot of design and thought put into them, worked with HTML4 (but thrived if used with XHTML), etc etc.
JSON-LD/schema.org has now filled the niche and arguably it's a lot better for devs, but imagine how much better the "AI web UX" would be now if we'd just standardised earlier on one and stuck with it for those years?
This is the main area where I saw the behaviour on display, where I interacted most. So the original comment feels absolutely in line with my recollections.
I love bits of HTML5, but the way it congealed into reality isn't one of them.
> There's an enormous amount of non-XHTML legacy that nobody wanted to port.
This would be a fair argument if content types were being enforced, but XML parsing was opt-in (for precisely this reason).
> And tooling back in the day didn't make it easy to write correct XHTML.
True. And instead of developing such tooling, we decided to boil the ocean to get to the point where tooling today doesn't make it any easier to lint / verify / validate your HTML, mainly because writing such tooling against a non-strict target like HTML is a million times harder than against a target with strict syntax.
A nice ideal would've been IDEs & CI with strict XHTML parsers & clients with fallbacks (e.g. what Chromium eventually implemented).
> I recall being met with a big fat "XML parse error" page on occasion. If XHTML really took off (as in a significant majority of web pages were XHTML), those XML parse error pages would become way more common
Except JSX is being used now all over the place, and JSX is basically the return of XHTML! JSX is an XML-like syntax with inline JavaScript.
The difference nowadays is all in the tooling. It is either precompiled (so the devs see the error) or generated on the backend by a proper library, and not someone YOLOing PHP to superglue strings together, as per how dynamic pages were generated in the glory days of XHTML.
We basically came full circle back to XHTML, but with a lot more complications and a worse user experience!
Not directly as strings, of course, but a for loop that outputs a bunch of JSX components based on the array returned from a DB fetch is dynamically generated JSX.
No, it's not. The JSX, as in the text in the source file, is static. You can't accidentally forget to escape a string from the database and therefore end up with invalid JSX syntax, like you can when dynamically generating HTML. You're dynamically generating virtual DOM nodes, but the JSX is static.
You are saying the exact same thing I am, just with different words.
JSX makes it impossible to crap out invalid HTML because it is a library + toolchain (+ entire ecosystem) that keeps it from happening. JSX is always checked for validity before it gets close to the user, so the most irritating failure case of XHTML just doesn't happen.
XHTML never had that benefit. My point is that if libraries like React or Vue had been popular at the time of XHTML, then XHTML's strictness wouldn't have been an issue, because JSX always generates valid output (well, ignoring compiler bugs, of which I hit far too damn many early in React's life).
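For what it's worth, here's the difference in miniature (a sketch; the variable and component names are made up). With string gluing, one missed escape turns user content into markup; with JSX, the interpolated value is a text node and can't break the document structure:

  const userComment = 'I <3 "quotes" & <script>alert(1)</script>';

  // Old-school templating: one forgotten escape and the output is invalid/unsafe.
  const glued = "<p>" + userComment + "</p>";

  // JSX: the braces insert data, not syntax; React escapes it when rendering.
  const Comment = () => <p>{userComment}</p>;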
> If XHTML really took off (as in a significant majority of web pages were XHTML), those XML parse error pages would become way more common
This is not true because you are imagining a world with strict parsing but where people are still acting as though they have lax parsing. In reality, strict parsing changes the incentives and thus people’s behaviour.
This is really easy to demonstrate: we already have a world with strict parsing for everything else. If you make a syntax error in JSON, it stops dead. How often is it that you run into a website that fails to load because there is a syntax error in JSON? It’s super rare, right? Why is that? It’s because syntax errors are fatal errors. This means that when developing the site, if the developer makes a syntax error in JSON, they are confronted with it immediately. It won’t even load in their development environment. They can’t run the code and the new change can’t be worked on until the syntax error is resolved, so they fix it.
In your hypothetical world, they are making that syntax error… and just deploying it anyway. This makes no sense. You changed the initial condition, but you failed to account for everything that changes downstream of that. If syntax errors are fatal errors, you would expect to see far, far fewer syntax errors because it would be way more difficult for a bug like that to be put into production.
We have strict syntax almost everywhere. How often do you see a Python syntax error in the backend code? How often do you run across an SVG that fails to load because of a syntax error? HTML is the odd one out here, and it’s very clear that Postel was wrong.
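As a tiny illustration of the JSON point: the syntax error can't slip through unnoticed, because the very first parse refuses to continue.

  JSON.parse('{"name": "Ada"}');  // fine
  JSON.parse('{"name": "Ada",}'); // throws SyntaxError -- trailing comma, nothing to deploy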
> This is not true because you are imagining a world with strict parsing but where people are still acting as though they have lax parsing. In reality, strict parsing changes the incentives and thus people’s behaviour.
Dude I lived in that world. A fair number of developers explicitly opted into strict parsing rules by choosing to serve XHTML. And yet, those developers who opted into strict parsing messed up their XML generation frequently enough that I, as an end user, was presented with that "XML Parse Error" page on occasion. I don't understand why you'd think all developers would stop messing up if only strict parsing was foisted upon everyone rather than only those who explicitly opt in.
> In your hypothetical world, they are making that syntax error… and just deploying it anyway.
No, they're not. In my (non-hypothetical, actually experienced in real life) world of somewhat wide-spread XHTML, I'm assuming that developers would make sites which appeared to work with their test content, but would produce invalid XML in certain situations with some combination of dynamic content or other conditions. Forgetting to escape user content is the obvious case, but there are many ways to screw up HTML/XHTML generation in ways which appear to work during testing.
> We have strict syntax almost everywhere. How often do you see a Python syntax error in the backend code?
Never, but people don't dynamically generate their Python back-end code based on user content.
> How often do you run across an SVG that fails to load because of a syntax error?
Never, but people don't typically dynamically generate their SVGs based on user content. Almost all SVGs out there are served as static assets.
> Dude I lived in that world. A fair amount of developers explicitly opted into strict parsing rules by choosing to serve XHTML.
No they didn’t, unless you and I have wildly different definitions of “a fair amount”. The developers who did that were an extreme minority because Internet Explorer, which had >90% market share, didn’t support application/xhtml+xml. It was a curiosity, not something people actually did in non-negligible numbers.
And you’re repeating the mistake I explicitly called out. Opting into XHTML parsing does not transport you to a world in which the rest of the world is acting as if you are in a strict parsing world. If you are writing, say, PHP, then that language was still designed for a world with lax HTML parsing no matter how you serve your XHTML. There is far more to the world than just your code and the browser. A world designed for lax parsing is going to be very different to a world designed for strict parsing up and down the stack, not just your code and the browser.
> I'm assuming that developers would make sites which appeared to work with their test content, but would produce invalid XML in certain situations with some combination of dynamic content or other conditions. Forgetting to escape user content is the obvious case, but there are many ways to screw up HTML/XHTML generation in ways which appear to work during testing.
Again, you are still making the same mistake of forgetting to consider the second-order effects.
In a world where parsing is strict, a toolchain that produces malformed syntax has a show-stopping bug and would not be considered reliable enough to use. The only reason those kinds of bugs are tolerated is because parsing is lax. Where is all the JSON-generating code that fails to escape values properly? It is super rare because those kinds of problems aren’t tolerated because JSON has strict parsing.
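Which is also why hand-gluing JSON is rare: everyone reaches for a serializer, and the serializer does the escaping for every value, every time. A quick sketch:

  const value = { note: 'She said "hi"\nand left' };
  console.log(JSON.stringify(value));
  // {"note":"She said \"hi\"\nand left"} -- quotes and newline escaped for us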
> No they didn’t, unless you and I have wildly different definitions of “a fair amount”. The developers who did that were an extreme minority because Internet Explorer, which had >90% market share, didn’t support application/xhtml+xml. It was a curiosity, not something people actually did in non-negligible numbers.
Despite being an extreme minority of strict parsing enthusiasts who decided to explicitly opt into strict parsing, they still messed up enough for me to occasionally have encountered "XML Parse Error" pages. You'd think that if anyone managed to correctly generate strict XHTML, it'd be those people.
> You'd think that if anyone managed to correctly generate strict XHTML, it'd be those people.
Once more, they were operating in a world designed for lax parsing. Even if their direct choices were for strict parsing, everything surrounding them was lax.
Somebody making the choice to enable strict parsing in a world designed for lax parsing is a fundamentally different scenario than “If XHTML really took off (as in a significant majority of web pages were XHTML)”, where the entire technology stack from top to bottom would be built assuming strict parsing.
> Never, but people don't dynamically generate their Python back-end code based on user content.
Perhaps not much in the past - but I suspect a lot more in the future with agentic systems - are you suggesting relaxing Python syntax to make it easier for auto-generated code to 'run'?
There's a difference between static chatbot-generated code that you commit to your VCS, and dynamic code generated based on user content. I'm not talking about the former case.
> I'm not talking about chatbot generated code that's committed to your VCS.
I'm talking about code that's dynamically generated in response to user input in order to perform the task the user specified. This is happening today in agentic systems.
Should we relax Python syntax checking to work around the poorly generated code?
Oh yes, so true, but I would generalize it to "too flexible":
- content type sniffing spawned a whole class of attacks, and should have been unnecessary
- a ton of historic security issues were related to HTML parsing being too flexible, or some JS parts being too flexible (e.g. Array prototype override)
- or login flows being too flexible, creating an easy-to-overlook way to bypass (part of) login checks
- or look at the mess OAuth2/OIDC was for years because they insisted on over-engineering it, and how it being liberal about quite a few parts led to more than one or two big security incidents
- (more than strictly needed) cipher flexibility is by now widely accepted to have been an anti-pattern
- or how so much theoretically okay but "old" security tech is such a pain to use because it was made to be super tolerant of everything: every use case imaginable, every combination of parameters, every kind of partially uninterpretable part (I'm looking at you, ASN.1, X.509 certs and a lot of old CA software; theoretically not badly designed, practically such a pain).
And sure, you can also be too strict. The lesson that high cipher flexibility is an anti-pattern was incorporated into TLS 1.3, but TLS still needs some cipher flexibility, so they found a compromise of (oversimplified) you can choose one of five cipher suites but can't change any parameter of that suite.
Just today I read an article (at work, I don't have the link at hand) about some somewhat hypothetical but practically probably doable (with a bunch more work) scenarios for tricking very flexible multi-step agents into leaking your secrets. The core approach was that they found a relatively small snippet of text which, if it ends up in the context, has a high chance of basically overriding the whole context with just the attacker's instructions (quite a bit oversimplified). In turn, if you can sneak it into someone's queries (e.g. their GPT model is allowed to read their mail and it's in a mail sent to them), you can trick the multi-step agent into grabbing a secret from their computer (because agents often run with user permissions) and sending it to you (e.g. by instrumenting the agent to fetch a URL which happens to now contain the secret).
It's a bit hypothetical, it's hard to pull off, but it's very much in the realm of possibility, due to how content and instructions are, on a very fundamental level, not cleanly separated (I mean, AI vendors do try, but so far that has never worked reliably; in the end it's all the same input).
HTML being lenient is what made progressive enhancement possible -- right down to the original <img> tag. The web would not have existed at all if HTML had been strict right from the start.
No, not at all: extensible isn't the same as lenient.
having a Content-Type header where you can put in new media types (e.g. for images) once browsers support them is extensibility
sniffing the media type instead of strictly relying on the Content-Type header is leniency, and has been the source of a lot of security vulnerabilities over the years (see the sketch at the end of this comment)
or having new top-level JS objects exposing new APIs is extensibility, but allowing overriding of the prototypes of fundamental JS objects (e.g. Array.prototype) turned out to be a terrible idea associated with multiple security issues (like, idk, ~10 years ago, hence why it is now read-only)
same for SAML: its use of XML made it extensible, but the way it leniently encoded XML for signing happened to be a security nightmare
or OAuth2, which is very extensible, but it being too lenient about what you can combine and how was the source of many early security incidents, and is still a source of incompatibilities today (but OAuth2 is a mess anyway)
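To make the sniffing point concrete, here's a small Node.js sketch of the strict alternative: declare the media type explicitly and tell browsers not to sniff, so the header is authoritative.

  import { createServer } from "node:http";

  createServer((req, res) => {
    res.setHeader("Content-Type", "text/plain; charset=utf-8");
    res.setHeader("X-Content-Type-Options", "nosniff"); // opt out of MIME sniffing
    res.end("served as exactly what the header says");
  }).listen(8080);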
> No, not at all: extensible isn't the same as lenient.
I never said it was. But lenient provides for extensibility that isn't planned for. The entire evolution of the web is based on that. Standards that were too strict or too inflexible have been long forgotten by history.
That's not to say it isn't a source of security vulnerabilities and bugs, but that doesn't negate the point.
The web is mostly based on standards which have always been super lenient and messy and had massive gaps of unclarity, mainly because it was more "let's somehow standardize what browsers already do" than anything else.
Though I guess you could say that the degree to which you can polyfill JS is more lenient than many think is good, and that did help with extensibility.
You say that's not true but then you don't contradict what I'm saying. How were browsers able to do anything, while maintaining backwards compatibility, that needed standardizing later?
That's poor reasoning. The web now counts as strict but still extensible: you just have to clearly define how to handle unknown input. The web treats all unknowns as a div.
For a simple model, let's say you hire programmers for three reasons:
1. Because you have (X) work to get done to run your business. Once that work is done, there is no more work to do.
2. Because they get work done that makes more money than you pay them, but with diminishing marginal returns. So the first programmer is worth 20x their salary, the 20th is worth 1.01x their salary.
3. Because you have some new idea to build, and have enough capital to gamble on it. If it succeeds before you run out of money, you'll revert to (1) or (2).
Let's assume AI comes along that means a programmer can do 4x the work. If most programmers are in the first bucket, then you only need 1/4 as many programmers and most will be fired.
If most programmers are in the second bucket, then suddenly there's _much_ more stuff that can be built (and money made) per-programmer. So businesses will be incentivized to hire many more programmers.
For programmers in the third bucket, our AI makes the idea more likely to get built in time, thus upping the odds of success a little.
How you think the market is structured decides how you think AI will affect job creation and destruction.
> So the first programmer is worth 20x their salary, the 20th is worth 1.01x their salary.
> If most programmers are in the second bucket, then suddenly there's _much_ more stuff that can be built (and money made) per-programmer. So businesses will be incentivized to hire many more programmers.
How do these two reconcile? If hiring more programmers is diminishing returns, why would a business hire more?
Because you've moved the frontier of who's making you money out. Now the 20th programmer is still worth 5x their salary, and you hire up to the 30th programmer to hit 1.01.
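A toy version of that bucket-2 model (the formula and numbers are made up, just to show the shape of the argument): with diminishing returns, a productivity multiplier pushes the break-even headcount out rather than shrinking it.

  // The n-th programmer is worth value(n) salaries; hire while the next hire pays for themselves.
  const value = (n: number, multiplier = 1) => (multiplier * 20) / n;

  const breakEven = (multiplier: number) => {
    let n = 1;
    while (value(n + 1, multiplier) > 1.01) n++;
    return n;
  };

  console.log(breakEven(1)); // ~19 programmers without the multiplier
  console.log(breakEven(4)); // ~79 with a 4x multiplier -- more hiring, not less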
Depends where you are. It is known in the US, and it's popular with people from the rest of the UK.
It does not seem to be much known in Asia, apart from as the source of whisky.
I do not know about the rest of Europe, but my feeling is that it is not well known.
I have been quite surprised how many people (from Asia and Europe) can visit, or even live in, the UK and not go out of London.
While Scotland is not unknown, there are certainly a lot of people who might visit who have a low awareness of what is there, and articles like this show some very attractive aspects of Scotland.
Keep it that way. The last thing you want is the locust plague turning your beautiful countryside into a theme park where the locals can no longer live.
If not a theme park or safari, then at least a movie:
> A petition to bring back the experience has been launched, while Scottish actress Karen Gillan has said she wants to star in a film adaptation of the event.
This is already something of a problem in places due to AirBnB, especially on the islands.
Mind you, I live in Edinburgh and the Festival has arrived, so I have an extra 100k people to walk past or through every time I want to get anywhere this month.
Weirdly Scotland is more popular with European tourists than you might think. Until recently I lived in a relatively tourist heavy part of the country, near Loch Lomond, and every summer we'd get a lot of cars from European countries on the roads.
I suspect that outside of maybe one main destination city (Edinburgh--maybe two with Glasgow), Scotland probably feels somewhat hard to get around for someone from a very different culture, and they may not be wrong.
We send our daughter to a daycare that has a number of families so wealthy that one or both parents wouldn’t have to work. They still do daycare because many people want careers, and/or because they think the socialization and environment diversity is good for their kids.
The fundamental question is "will the LLM get better before your vibecoded codebase becomes unmaintainable, or before you need a feature that is beyond the LLM's ceiling". It's an interesting race.
I like to work to music, but find anything with words distracting. So I listen to a lot of random youtube playlists of lo-fi/chillout stuff, which has just been swamped with AI lately.
I don't think this will be a big deal in a few years once it's as good as human musicians, but right now it's super annoying because the music is always subpar and it's hard to separate out the genai on youtube.
It's not about the quality, it's about the lack of real human intent underlying it and the fact that AI inevitably regresses to the mean. AI will never (or at least not anytime soon) replicate that; it'll always be fast junk food for our brains.