> And safely here means non-disruptive to established businesses

Why would OpenAI care about that?

No, it means safety, as in not giving out dangerous answers that get people killed.




Because they belong to Microsoft. Because they are funded by rich people who own established businesses. Because they are the privileged .01% who are invested in the status quo. Because their whole networks consist of people with entrenched business interests. Pick one or more. Why would you think rich American engineers are in general any more worried about the overall safety of the world than their own self-interest? That's what they show, day in and day out, with their decisions.


> Because they belong to Microsoft

They don't; MS holds 49%, and OpenAI gets to buy itself out of even that. And if they do that by eating MS's lunch? Then it would suck to be an MS shareholder.

And they've been cautious since before the investment: witness the guarded announcement of GPT-2, a month before the change in corporate structure to allow investors.

But even if OpenAI were owned completely by MS, why should MS care? Disrupting all employment sounds like the raison d'être for most if not all of Silicon Valley.

At the very least, imagine all the employees MS could let go if GPT-4 were even 80% as good as the "buy my guide to using GPT to make a fortune with best-selling books/award-winning advertisements/fantastic business proposals" advertisements keep claiming.

> Why would you think rich American engineers are in general any more worried about the overall safety of the world than their own self-interest?

One is a subset of the other.

People regularly disagree about how risky things are, but they don't generally pick "destroy all things" over "destroy that which is close to me" when they actually believe that's the real choice and not, say, a panicked Chicken Little (which is how I think LeCun sees Yudkowsky).


MS not owning OpenAI outright is a technicality. "It would suck to be an MS shareholder" due to things done to subsidiaries of MS would constitute dereliction of duty by MS officers. MS is not Silicon Valley: they're based in Seattle and predate 95% of SV. Besides, this is just one of my points. Even if everything you claimed were true, OpenAI still is, as I pointed out, part of the American corporate establishment, and despite their PR posturing, they won't provide us with technology to disrupt said establishment.

They are clearly angling to maximize the profitability of that establishment.


> OpenAI still is, as I pointed out, part of the American corporate establishment, and despite their PR posturing, they won't provide us with technology to disrupt said establishment

They're not acting like they're part of the American corporate establishment, and I don't understand why you think they are.

Rather than quote my other comment comparing against 3M, I'll link to it: https://news.ycombinator.com/item?id=36677182


Pointing to a company that was fined for dishonesty as an argument for taking another company's PR at face value makes no sense.


Eh? People as a rule care about others, and this tendency actually INCREASES as they get wealthier and safer, since they are less worried about themselves.

Your average OpenAI machine learning expert cares plenty about not killing people, just like you do.


> as in not giving out dangerous answers that get people killed

Literally taken, that is quite close to impossible.

There was news a few days ago of somebody who committed suicide after an interaction with a bot about nuclear risks or something similar.

To avoid that, the bot would have to be a high-ranking professional psychologist with an explicit purpose not to trigger destructive reactions.

And that would defeat the nature of a "consultant", which operates "under best effort to speak the truth": a direction incompatible with "something reassuring".


> To avoid that, the bot would have to be a high-ranking professional psychologist with an explicit purpose not to trigger destructive reactions.

Sounds fairly doable, TBH, given what else they're doing reasonably well at.

> And that would defeat the nature of a "consultant", which operates "under best effort to speak the truth": a direction incompatible with "something reassuring"

I don't think they're as incompatible as you appear to.


With reference to the case I mentioned (which seems like a decent border case for pointing out difficulties), what you want from a consultant is something that tells you "the likelihood of the event is bounded by this and that", not something that goes "Oh, everything will be all right, dear". To drop the figurative language: truth (say, facts, or the results of computation) and comfort may not always be compatible.

> reasonably well

But I wrote «high-ranking professional». «Reasonably well» does not cover the cases relevant to full safety.

And, by the way, such an attitude will backfire heavily on false positives.

Anyway: the case presented is there to point to a few difficulties involved in the idea of «not giving out dangerous answers».


"I can't conceive of such failure mode thus we're safe"


Sorry, I do not understand what you mean...


I think they read the first line of your reply and took it as you arguing that it's nearly impossible for an LLM to give output that would get someone killed.


I was agreeing with them. Why assume wilful ignorance? Have we become the new Reddit? People just yelling in disagreement? Maybe it's time to log out and delete this password...


No, Namaria, we were just trying to understand what you meant, which to us came across as cryptic.


Apologies, just found your post confusing and was trying to make sense.


If that is the bar content must pass for public consumption, we're dangerously close to book burning, again.


Because there's more profit in a corporation paying you billions than in thousands of people paying you $10/mo each.



