Yes. Like no matter what someone thinks about the NHS, it's always affordable, and it's entirely inclusive. And if you want private healthcare, you can absolutely get it. I've had private health insurance at every post-university job I've had, it's a standard offering in tech.
I wish I knew how to (ask an AI to) characterize this style. Is it always three examples? Nice formatting and bullet points? Evenly issued, lengthy and didactic explanations?
When I write, I now go out of my way not to use lists or em dashes. I learned before LLMs that using lists in writing was lazy. Even when I do use an LLM for writing, I tell it to explain everything in paragraph form. I start off with my own outline and then “discuss” what I want to emphasize.
The second thing about LLM-generated text, especially in technical writing, is that anytime it explains something, even with a lot of context about what I’m working on, it always adds lines about “benefits”.
In my use case, something like “this provides a secure, adaptable environment…”. I had to remove wording like that from my resume even before LLMs, and I definitely don’t put it in my own writing now.
Ironically enough, out of the four examples ChatGPT generated about “what makes a good leader” in the link above, the “AI generated” example is the one I would lean toward in my own writing.
Also, putting AI-assisted writing back through a new session with ChatGPT, asking it multiple times whether it sounds AI generated, and taking its suggestions will make it sound less AI generated.
When I do write something “thought leader”-ish, which unfortunately is part of my job now as a staff architect, I give a lot of real-world examples that couldn’t easily be fabricated by AI and lean on those examples as I make my points.
The instinct is still there. Of course it is. We can override it in a way other creatures probably can't, but expecting people to not be influenced by instinct and such is a non-starter.
The baseline for human behavior is set by expectations.
Some tens of thousands of years ago we also had the instinct to savagely kill whoever lived in the nearby village.
My point was about the mutual influence between instincts and expected behaviours. But you seem to be swapping instincts for conscious interest. That would be another discussion, but I don't believe homo economicus ever walked on this planet.
Pretty great for a chat app used by a few billion people; a few billion dollars is enough to keep things running for many decades. Banks do exactly this, with much more critical and complicated infra.
Revolutions need targets. What's the target for how socialization has gone the past few decades? What concrete, unpopular entity could be abolished to fix things?