What would be the difference in prompts/info for Claude vs ChatGPT? Is this just based on anecdotal stuff, or is there actually something I can refer to when writing prompts? I mostly use Claude, but don't really pay much attention to the exact wording of my prompts.
> It's not a coincidence that I train a model on healthcare regulations and it answers a question about healthcare regulations
If you train a model on only healthcare regulations, it won't answer questions about healthcare regulations; it will produce text that looks like healthcare regulations.
I don't think you're the right person to make any claim of "complete misunderstanding" when you claim that training an LLM on regulations would produce a system capable of answering questions about those regulations.
LLMs are trained on text, only some of which includes facts. It's a coincidence when the output includes new facts not explicitly present in the training data.
> It's a coincidence when the output includes facts,
That's not what a coincidence is.
A coincidence is: "a remarkable concurrence of events or circumstances without apparent causal connection."
Are you saying that training it on a subset of specific data and it responding with that data "does not have a causal connection"? Do you know how statistical pattern matching works?
It's not coincidence that the answer contains the facts you want. That is a direct consequence of the question you asked and the training corpus.
But the answer containing facts/truth is incidental from the LLM's point of view, in that the machine really does not care, nor even have any concept of, whether it gave you the facts you asked for or just nice-sounding gibberish. The machine only wants to generate tokens; everything else is incidental. (To the core mechanism, that is. OpenAI and co. obviously care a lot about the quality and content of the output.)
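To make "only wants to generate tokens" concrete, here's a minimal sketch of the core decoding loop (hypothetical LanguageModel interface and token ids, not any real library's API). Note that nothing in it mentions truth or facts; factual answers fall out of the training corpus, not the objective:

    // Hypothetical interface: maps a token sequence to a probability
    // distribution over the next token. Not a real API, just a sketch.
    interface LanguageModel {
      nextTokenProbs(context: number[]): number[];
    }

    // Greedy decoding: repeatedly append the single most likely next token.
    // "Truth" appears nowhere; the loop only maximizes token likelihood.
    function generate(model: LanguageModel, prompt: number[], maxNew: number, eos: number): number[] {
      const tokens = [...prompt];
      for (let i = 0; i < maxNew; i++) {
        const probs = model.nextTokenProbs(tokens);
        let best = 0;
        for (let t = 1; t < probs.length; t++) {
          if (probs[t] > probs[best]) best = t;
        }
        tokens.push(best);
        if (best === eos) break; // stop at the end-of-sequence token
      }
      return tokens;
    }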
Totally agree with that. But the problem is the phrase "coincidence" makes it into something it absolutely isn't. And it's used to try and detract from what these tools can actually do.
They are useful. It's not a coin flip as to whether Bolt will produce a new design of a medical intake form for me if I ask it to. It does. It doesn't randomly give me a design for a social media app, for instance.
> "Something that describes how an AI is convincing if you don't understand its reasoning, and close to useless if you understand its limitations."
This made me laugh. Because it's the exact opposite sentiment of the anti-LLM crowd. So which is it? Is it only useful if you know what you're doing, or less useful if you know what you're doing?
> "I can't wait until I can jack into the Metaverse and buy an NFT with cryptocurrency just by using an LLM! Perhaps I can view it on my 3D TV by streaming it over WIMAX? I'd better stock up on quantum computers to make sure it all works."
In the author's attempt to be a smartass, they revealed themselves. It makes them sound childish. Instead of just admitting they were wrong, they make some flippant remark about cryptocurrency and NFTs, despite those having vastly different purposes, goals, and track records. Just take the L.
to add: "I shouldn't have to know anything about LLMs to use them correctly" is one heck of a take, but ok.
> "I don't. I hate the way this is being sold as a universal and magical tool. The reality doesn't live up to the hype."
And I hate the way in which people will do the opposite: claim it has no use cases. It's literally the same sentiment, but in reverse. It's just as myopic and naive. But for whatever reason, we can look at a CEO hawking it and think "they're just trying to make more money," but can't see the flipside of devs not wanting to lose their livelihoods to something. We have just as much to lose as they have to gain, but want to pretend like we're objective.
“We inherited a financial crisis unlike any that we’ve seen in our time,”
That's a fact of reality. Trying to compare what Trump is doing to simply stating that we were in a financial crisis when Obama took over is why every single argument with someone from the right should be had knowing they don't stand on principles and will literally redefine words and reality to try and make a point.
isLoading should be a boolean; as a primitive it maintains referential equality across renders, so an effect that depends on it won't re-run unless the value actually flips from false to true or vice versa.
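A minimal sketch of what I mean (hypothetical Spinner component, purely for illustration): React compares dependency-array entries with Object.is, so a primitive boolean only re-triggers the effect on an actual edge:

    import { useEffect, useState } from "react";

    // Hypothetical component: isLoading is a primitive boolean, so the
    // Object.is comparison of [isLoading] across renders only re-runs the
    // effect when the value flips, not on every parent render.
    function Spinner({ isLoading }: { isLoading: boolean }) {
      const [dots, setDots] = useState("");

      useEffect(() => {
        if (!isLoading) return;
        // Starts on the false -> true edge...
        const id = setInterval(() => setDots((d) => d + "."), 500);
        // ...and cleans up on the true -> false edge (or unmount).
        return () => clearInterval(id);
      }, [isLoading]);

      return isLoading ? <p>Loading{dots}</p> : null;
    }

If isLoading were instead a fresh object like { loading: true } built each render, the new reference would re-run the effect on every render, which is exactly the spurious re-run the boolean avoids.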
What? What does what you said have to do with what I said? How does "data flowing down" relate to the useEffect dependency cycle? Are you new to programming?
When I first saw JSX, I immediately thought I'd hate it. Then I jumped ship to React after years with AngularJS/Angular 2+, once hooks and functional components came in, and to this day I still enjoy writing React. And I love JSX.
> Is that just an early-career thing, or is there some way I can get better at giving up the idea of perfection?
No, you just have to really, really get it into your head that perfectionism most often results in missed deadlines and never truly finishing anything.
One of my hot takes is that anybody that considers themselves a perfectionist but can also list off things they've completed doesn't actually understand what true perfectionism means. They're just bragging about their attention to detail.
I agree, but not because impure engineering is bad (like OP is suggesting). It's because tech debt is often a result of changing requirements. It's part of the nature of impure software development. Thinking tech debt is a reflection of poor engineering or bad practices is just plain wrong, imo. It's part of the job.
There is this fantasy (primarily pushed by the pure devs) that impure/enterprise dev is like following a recipe: the stakeholder gives you the requirements (ingredients, how to cook) and your job is to execute that. And anyone who has spent a non-trivial amount of time in the enterprise world knows this is just not how it works. Unlike a video game, where there will eventually be an end date, impure devs often have to build solutions that don't have an expiration date. And that's not easy development, trust me.