Hacker News | past | comments | ask | show | jobs | submit | mexicocitinluez's comments

What would be the difference in prompts/info for Claude vs ChatGpt? Is this just based on anecdotal stuff or is there actually something I can refer to when writing prompts? I mostly use Claude, but don't really pay much attention to the exact wording of the prompts

> When LLMs say something true, it’s a coincidence of the training data that the statement of fact is also a likely sequence of words;

Do you know what a "coincidence" actually is? The definition you're using is wrong.

It's not a coincidence that I train a model on healthcare regulations and it answers a question about healthcare regulations correctly.

None of that is coincidental.

If I trained it on healthcare regulations and asked it about recipes, it wouldn't get anything right. How is that coincidental?


> It's not a coincidence that I train a model on healthcare regulations and it answers a question about healthcare regulations

If you train a model on only healthcare regulations it won't answer questions about healthcare regulations, it will produce text that looks like healthcare regulations.


And that's not a coincidence. That's not what the word "coincidence" means. It's a complete misunderstanding of how these tools work.

I don't think you're the right person to make any claim of "complete misunderstanding" when you claim that training an LLM on regulations would produce a system capable of answering questions about that regulation.

> you claim that training an LLM on regulations would produce a system capable of answering questions about that regulation.

Huh? But it does do that? What do you think training an LLM entails?

Are you of the belief that an LLM trained on non-medical data would have the same statistical chance of answering a medical question correctly?

We're at the "redefining what words mean in order to not have to admit I was wrong" stage of this argument.


LLMs are trained on text, only some of which includes facts. It's a coincidence when the output includes new facts not explicitly present in the training data.

> It's a coincidence when the output includes facts,

That's not what a coincidence is.

A coincidence is: "a remarkable concurrence of events or circumstances without apparent causal connection."

Are you saying that training it on a subset of specific data and it responding with that data "does not have a causal connection"? Do you know how statistical pattern matching works?


Can I offer a different phrasing?

It's not coincidence that the answer contains the facts you want. That is a direct consequence of the question you asked and the training corpus.

But the answer containing facts/truth is incidental from the LLM's point of view, in that the machine really does not care about, nor even have any concept of, whether it gave you the facts you asked for or just nice-sounding gibberish. The machine only wants to generate tokens; everything else is incidental. (To the core mechanism, that is. OpenAI and co. obviously care a lot about the quality and content of the output.)
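To make the "only wants to generate tokens" point concrete, here's a toy next-token sampler. The bigram table and tokens are invented for illustration and bear no resemblance to a real LLM; the point is that the loop only ever asks "which token is likely next?", never "is this true?":

```typescript
// Toy bigram "model": P(next token | previous token). Invented numbers.
const bigram: Record<string, Record<string, number>> = {
  "<s>": { the: 0.6, a: 0.4 },
  the: { patient: 0.5, regulation: 0.5 },
  patient: { "</s>": 1.0 },
  regulation: { "</s>": 1.0 },
};

// Sample the next token from the conditional distribution.
function sampleNext(prev: string, rand: () => number): string {
  const dist = bigram[prev];
  let r = rand();
  for (const [tok, p] of Object.entries(dist)) {
    if ((r -= p) <= 0) return tok;
  }
  const toks = Object.keys(dist);
  return toks[toks.length - 1];
}

// Generation loop with a fixed "random" value for reproducibility.
// Nothing here ever consults a fact; it only follows the distribution.
let tok = "<s>";
const out: string[] = [];
while (tok !== "</s>" && out.length < 10) {
  tok = sampleNext(tok, () => 0.3);
  if (tok !== "</s>") out.push(tok);
}
console.log(out.join(" ")); // prints "the patient"
```

Whether "the patient" happens to be a true statement about anything is invisible to this loop, which is the sense in which truth is incidental to the mechanism.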


Totally agree with that. But the problem is the phrase "coincidence" makes it into something it absolutely isn't. And it's used to try and detract from what these tools can actually do.

They are useful. It's not a coin flip as to whether Bolt will produce a new design of a medical intake form for me if I ask it to. It does. It doesn't randomly give me a design for a social media app, for instance.


> "Something that describes how an AI is convincing if you don't understand its reasoning, and close to useless if you understand its limitations."

This made me laugh, because it's the exact opposite sentiment of the anti-LLM crowd. So which is it? Is it only useful if you know what you're doing, or less useful if you know what you're doing?

> "I can't wait until I can jack into the Metaverse and buy an NFT with cryptocurrency just by using an LLM! Perhaps I can view it on my 3D TV by streaming it over WIMAX? I'd better stock up on quantum computers to make sure it all works."

In the author's attempt to be a smartass, they showed themselves. It makes them sound childish. Instead of just admitting they were wrong, they make some flippant remark about cryptocurrency and NFTs, despite those having vastly different purposes, goals, and track records. Just take the L.

to add: "I shouldn't have to know anything about LLMs to use them correctly" is one heck of a take, but ok.

> "I don't. I hate the way this is being sold as a universal and magical tool. The reality doesn't live up to the hype."

And I hate the way in which people will do the opposite: claim it has no use cases. It's literally the same sentiment, but in reverse. It's just as myopic and naive. But for whatever reason, we can look at a CEO hawking it and think "they're just trying to make more money" but can't see the flip side of devs not wanting to lose their livelihoods to something. We have just as much to lose as they have to gain, but want to pretend like we're objective.


3D TVs and metaverses and WiMAX and all that are prior examples of massively overhyped technological failures.

(They missed the Segway.)


You think that Github Copilot, for instance, is a technological failure?

What about Bolt? The tool that I use to create designs for me. That's a failure, too?


> Generally, yes. As part of an executive order, no.

Huh? Can you offer a single example of this pre-Trump?


Some examples of Obama criticizing the Bush administration here: https://www.gbtribune.com/opinion/columnists/five-years-late...

lol, gtfo

“We inherited a financial crisis unlike any that we’ve seen in our time,”

That's a fact of reality. Trying to compare what Trump is doing to simply stating that we were in a financial crisis when Obama took over is why every single argument with someone from the right should be had knowing they don't stand on principles and will literally redefine words and reality to try and make a point.


> In React, you know for sure that if your reference changes, the component reading that reference as a prop will re-render.

Amen. Data flows down. That, to me, is one of React's biggest strengths.


Amen? Because of this we have useEffect and re-rendering of the whole component tree on an isLoading change.

isLoading should be a boolean; it will maintain referential equality across renders and not cause a re-render unless it is an actual edge from false to true or vice versa.
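A minimal sketch of the point above, using `Object.is` (the same comparison React's shallow prop checks rely on). The prop objects here are invented for illustration:

```typescript
// Two consecutive "renders" producing props for a child component.
const propsRender1 = { isLoading: false, style: { color: "red" } };
const propsRender2 = { isLoading: false, style: { color: "red" } };

// A boolean is a primitive: the same value compares equal across
// renders, so a memoized child sees "nothing changed".
console.log(Object.is(propsRender1.isLoading, propsRender2.isLoading)); // true

// An object literal is a fresh reference every render, so a memoized
// child receiving `style` as a prop would re-render every time.
console.log(Object.is(propsRender1.style, propsRender2.style)); // false
```

This is why passing stable primitives (or memoized objects) as props matters: the re-render is triggered by reference change, not by whether the contents "look" the same.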

What? What does what you said have to do with what I said? How is "data flowing down" related to the useEffect dependency cycle? Are you new to programming?

jQuery didn't provide anything that you couldn't already do with native browser APIs; it was a wrapper around those APIs.


When I first saw JSX, I immediately thought I'd hate it. Then I jumped ship to React after years with AngularJS/Angular 2+, once hooks and functional components came in, and to this day I still enjoy writing React. And I love JSX.


> Is that just an early-career thing, or is there some way I can get better at giving up the idea of perfection?

No, you just have to really, really get it into your head that perfection most often results in missed deadlines and never truly finishing anything.

One of my hot takes is that anybody who considers themselves a perfectionist but can also list off things they've completed doesn't actually understand what true perfectionism means. They're just bragging about their attention to detail.


I agree, but not because impure engineering is bad (like OP is suggesting). It's because tech debt is often a result of changing requirements. It's part of the nature of impure software development. Thinking tech debt is a reflection of poor engineering or bad practices is just plain wrong, imo. It's part of the job.

There is this fantasy (primarily pushed by the pure devs) that impure/enterprise dev is like following a recipe: the stakeholder gives you the requirements (ingredients, how to cook) and your job is to execute them. Anyone who has spent a non-trivial amount of time in the enterprise world knows this is just not how it works. Unlike a video game, where there will eventually be an end date, impure devs often have to build solutions that don't have an expiration date. And that's not easy development, trust me.


lol.

I mean, of course it does? Requirements change, priorities change, rules/laws change, which all contributes to a constantly moving codebase.

When you're building a video game, do you have to worry about Congress changing healthcare laws? C'mon.

