
Without opening the link, the problem with every piece of data I’ve seen from Tesla is they’re comparing apples to oranges. FSD won’t activate in adverse driving conditions, aka when accidents are much more likely to occur. And/or drivers are choosing not to use it in those conditions.


Yeah, it's a dupe of the one that gets manually kicked off the front page after less than a day.

Shameful, really. The importance of public critique & awareness of OpenAI's leadership should go beyond the interests of YC.


There seems to be some mysterious method by which negative OpenAI comments and stories get downranked. The story ChrisArchitect links is missing at https://news.ycombinator.com/news?p=2 -- it appears to be outranked by other stories which are both older and have fewer upvotes.

dang has confirmed this happened in the past, but hasn't provided details. See this discussion I had with him 6 months ago: https://news.ycombinator.com/item?id=38342850

I'm currently composing an email to dang requesting some transparency.


Update: dang replied to my email. It seems HN actually does a fair amount of manual tweaking of story ranks. dang sent me these links:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...

dang says: "The only thing we care about is not having tedious internet indignation dominating HN threads—and boy has there been a lot of that on OpenAI lately."

Also: "there are a few people who aren't mods in the sense of working for YC but whom you could call mods in the sense that they have some limited moderation abilities, one of which is to mark subthreads as offtopic or generic"


It's 'off the front page' because it's old news. 700+ people upvoted on it and commented on it. Lots of eyeballs. Lots of discussion. Over there. Just try and keep the discussion together.


>The importance of public critique & awareness of OpenAI's leadership should go beyond the interests of YC.

"Should" is doing a lot of heavy lifting; YC is ultimately about getting rich, and unfortunately there are a lot of pro-Altman types 'round these parts who share that "wealth at all costs" mentality.

I'll take my downvotes.


I missed this yesterday.

So basically Altman is a dishonest pos. Will fit right in with much of Silicon Valley then.

…power and greed seem to go to all of their heads. Next thing you know, they're part of an NSA PRISM program so they can feel like spies too.


You gotta hand it to him, most big tech bigwigs do a good job pretending to be soft-spoken thought leaders. It takes a special kind of narcissist to insist on destroying their own image early.


Or colliding head-on for the first time with a huge bunch of non-yes-people…

Which doesn't bring out the nicer side of narcissists.


So what happened to Daniel Kokotajlo, the ex-OAI employee who made a comment saying that his equity was clawed back? Was it a miscommunication and he was referring to unvested equity, or is Sama just lying?

In the original context, it sounded very much like he was referring to clawed-back equity. I’m trying to find the link.



> Half of our DNA is just old viruses which every cell in our body reproduces for no reason.

Seems like every time people think big things in the body are there for no reason, new developments later show them to be wrong.

I don’t care to say more than the above because I will already get downvoted for even this much.


I think these are called endogenous retroviruses [1]. Not an expert, but if I remember correctly, that DNA can sometimes mutate and turn into something useful; this has happened before, so it's not totally useless. Most of the time, though, that DNA is inactivated and non-coding, so it doesn't make any proteins.

[1] https://en.wikipedia.org/wiki/Retrovirus


The fun part is when you realize that the DNA in the nucleus of the cell is only the majority of our genetic information: there is also DNA in the mitochondria, and there are many side channels that carry genetic information. We have a long journey ahead to even understand what we can already map. Not to disparage the efforts so far, but there is much more to this.


What are the 'many side channels'?


You only get knee and back problems when you're old, which is because the body just stops repairing itself as effectively as it does when you're young.

It's going to be far easier to re-enable the ability to self-repair our bodies as we do in youth than to completely change the function of two of our limbs.


Completely false. Lots of people in their 20s have knee and back problems. Torn ACLs are common among athletes and recreational sports players alike. I had a coworker a few years ago who had his ACL replaced; he was in his mid-to-late 20s.


Please re-read and reconsider the context of my post. This reddit-tier "must seek to contradict and attack others" behavior doesn't do anyone any good.


Ok, fine. I re-read it, and it says: "You only get knee and back problems when you're old, which is because..."

Your statement is blatantly false. It says, literally, you only get knee and back problems when you're old. This is wrong, and stupid.

Why are you arguing this?


Check out Rumble. I go there by default nowadays because most of the content creators I follow have been banned from YouTube and moved there as a replacement.


Yeah, but then you're surrounded by a bunch of neo-Nazis. It's like Voat.


Has it improved? In the past, whenever I've seen someone share a Rumble link, it was always the cringiest right-wing stuff, basically Parler for video.


You’re asking someone who is seeking out that sort of content, I don’t think you’re going to get the answer you’re looking for.


Before long OpenAI will be able to implement FSD by trivially handing the video feed and API calls to GPTX and asking it to drive.

I think the Tesla AI team is just unserious about the problem, probably largely because of Musk being a poor leader for the project.


The technology isn't close to doing anything like this yet.


I've been involved with LLMs for the past two years, and what I can say is that we have no idea what the technology can do, and no way of monitoring when it gains new abilities. We're racing toward autonomous everything, and we're too slow and blind to even detect hidden exponential developments.

Here's a good overview to get up to speed: https://youtu.be/xoVJKj8lcNQ


That is a great overview. It covers everything very succinctly.


Question: what do we do if/when it gets to that point?

Tech keeps advancing, and most people just seem to say, "it's not there yet," but the entire point of the tech industry is now to get us "there" with absolutely no idea what the consequences are. I don't find this intelligent at all, ironically.

I like the idea of progress, but it's starting to feel like enough is enough without at least some clear idea of where we want it to end. I really, really don't want to see terminators in my lifetime, any more than I want to see human cloning, which is banned.

This IMO is the point where tech starts to go from cool and helpful to potential sci-fi disaster.


How would it get to that point? There's no connection between having an internet connection and ending the human race.

These are all sci-fi stories that come with unexamined assumptions that something "smart" and "optimal" is going to be invented that's so good at its job that you can ask it to do X and it's going to do completely impossible thing Y without running out of AWS credits first.

(I personally think that humans are not "optimal" and that an AGI will also not be "optimal" or else it wouldn't be "general". More importantly, I don't think AGIs are going to be great at their jobs just because they have computer brains, and this is clearly an old SF movie trope.)


I don't find slippery slope fallacies convincing either.


This is such an amazingly shortsighted, naive, and flawed way of thinking that I'm having a really hard time not sniping.

A large number of people are very concerned about this, and rightfully so, because so many people don't get the risks (including you, it seems).

These people largely aren't fearmongers, either; they are experts in the field, serious engineers.

"A computer can't do this," you say... and that's how it has always been, right up until it can, and then whole ecosystems shift seemingly overnight. This will be no different.

Let me ask you: where's the risk management? If you've dealt with anything dangerous, you've dealt with risk management. Where is it for this? Can we even evaluate a problem like this? Our main form of interaction is code; we work in seconds while it ticks in nanoseconds. By the time it received input from us, it could have predicted and nullified our attempts to do anything, if it were sentient.

Right now it's very simple: there is almost no risk management. You and the smartest people in the world trying to tackle this problem are clawing blind in the dark. You don't know it, but they do, and the ones with true intelligence are scared shitless, which is why so many people are going on record (a thing that would normally be a career killer), trying to keep you from driving everyone over that proverbial cliff. Only it's more like a dam.

For you and most other people who don't work with this stuff, it's an out-of-context problem that will never happen, and that's fine for small things that don't cascade.

People are traditionally very bad at recognizing cascading failures before they actually happen. This is like a dam with a crack running through it that almost no one has noticed, and your home is right underneath it; in this case, everyone's home is underneath it.

What could possibly go wrong with giving someone, really anyone, who doesn't recognize the risks the ability to potentially end everything if the digital dice line up just right?

Literally everything is networked. Globally.

It doesn't even need to be a Battlestar Galactica-type apocalypse, though that pilot is a fairly realistic picture of how it might go down if it became sentient. It could also happen without sentience, via the slow Ayn Rand/John Galt route, where societal mechanics do the majority of the work: disrupt the economic cycle between factor and non-factor markets to a sufficient degree, and people will do the rest. There are plenty of examples in the historical record where we were able to restart, but what about those dark areas for which we have no history? Without modern technology, we can't grow enough food to feed half the world's current population.

When the stakes are this high and the risk management is so nonexistent, everyone, including policymakers, should be scared shitless and doing something about it. Look at how the Manhattan Project was handled: it was done with more risk management and care for its destructive potential than either bio or cyber today.

Our modern society is almost fully dependent on technology for survival. What happens when that turns against you, or just ceases to function?


"These people largely aren't fearmongers either, they are experts in that field, serious engineers."

How many "experts in their field" think GPT 4 can end the human race?


Currently, 50% of AI researchers think there's a 10% chance or higher that human civilization will be wiped out in the near future as a result of our inability to control AI.

Another commenter posted a great YouTube overview that sums up the broad points in a one-hour presentation.

I suggest you watch it to catch up. You'll notice one particular thing is absent: AGI-sentience doomsday isn't discussed. Though it's a valid risk case too, it's not what most experts are concerned with. What does concern the experts is the lack of risk management and the exponential-on-exponential growth.

With that kind of growth, it's not enough to keep pace; you have to accurately predict where it will be and somehow exceed it, two almost impossible problems.

I highly suggest you review the video, and take the time needed to process what the experts are saying before discounting and minimizing something so impactful.


I'd be interested in what survey you are referring to. What came up in a search is this:

https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

And 50% of A.I. researchers certainly didn't take that survey.

I also see no reference to the "near future" in the question.

The question would include an A.I. destroying the human race 7000 years from now.

But I was mainly responding to your comment that "one might even go so far as to argue a literal public attempt at ending the human race."

Unless GPT-4 specifically is believed to be a threat to the human race your comment was hyperbole.

I'll take a look at that video.

Edited to add: the video quotes the survey you were clearly referring to, and it says nothing about time scale. There's no claim that the "near" future is involved.

Also, the question is vague and certainly isn't asking whether a chatbot will destroy the human race.


> I'd be interested in what survey you are referring to.

It's the same one mentioned in the video.

> But I was mainly responding to your comment ...

> Unless GPT-4 specifically is believed to be a threat to the human race ...

That's flawed logic: a false dichotomy, and it also raises the question of who decides it is a threat.

As for whether it's dangerous: the model they discussed in that video shipped and was deployed publicly before anyone knew it had embedded knowledge of research-grade chemistry capable of some horrific things, all without the knowledge of the people who designed it. That was only discovered after the fact, and it is pretty disturbing.

With dangerous and existential threats, the rule isn't "safe until proven unsafe"; it's "unsafe by default until proven safe." That's how you limit tragedies.

We can disagree, but if we do I sincerely hope you do not touch this stuff.


It's banally true that intentionally putting an A.I. in charge of our nuclear arsenal might be dangerous.

My point is someone can answer a survey stating A.I. could destroy our species without believing GPT-4 is existentially dangerous.


You've changed your argument, which makes me skeptical of your credibility.

Not everyone is equally educated; the two are not mutually exclusive.

People can say either. Educated, rational, and reasonable people would say yes to both if they do the risk management analysis and understand the factors driving how it will be used.


Except for, you know, the entire field of functional medicine, which has done wonders for the lives of millions who have been failed by the mainstream medical establishment.


As the other commenter pointed out, functional medicine has no backing in evidence. As an anecdote, a functional "doctor" recently told my mother-in-law to have all of her fillings removed to treat a thyroid problem, which is obviously nonsense.


Functional medicine has not been proven to work. There's no actual evidence it's effective.


What's your standard of evidence? The functional medicine folks I follow, and whose services I use, generally track the latest research closely. In the content they produce and in the books they write, they refer to and cite hundreds of studies.


They do love to cite studies, but there aren't any studies that back up their claims, and no RCTs of their methods at all. Here's a good overview: https://sciencebasedmedicine.org/aafp-functional-medicine-la....


I was hoping for critiques of the specific functional medicine treatments that get administered, but didn't find many in the article. Their link about heavy metal poisoning being "rejected by medical science" is broken.

Besides that, the article is mostly just attempted character assassination of various people, and it goes after some of the more fringe elements and treatments in the field.

A huge part of functional medicine is just helping the patient address lifestyle factors (diet, exercise, sleep, stress) and remove negative environmental influences, whether that's a person, a dietary issue (particular foods can do a lot of damage), a physical environmental issue such as mold, etc.

The first step that every (decent) functional medicine doctor takes with a new patient is just looking for nutritional deficiencies and other lifestyle factors, and doing tests and asking questions to check for those things. Completely uncontroversial and backed up by science that everyone on earth agrees with. But regular doctors don't even take that basic step for their patients because it takes too long to actually learn about a person, their background, their personality, etc. and actually help them create lifestyle change.

Beyond that, I wish the critics would actually address the more common treatments in functional medicine instead of the fringe stuff. Things like alternative treatments for parasites (blastocystis, various worms, dientamoeba, etc.), candida overgrowth, SIBO, heavy metal poisoning, various autoimmune issues, nutritional deficiencies, hormonal issues, etc.

The modern healthcare system is broken in a hundred ways, but instead of having any curiosity about alternative approaches, folks like the author of this article just rabidly attack the field and the people in it.


To add to what others have said, we know from psychology that the human brain works by building a (causal) model of the world and then propagating an error signal wherever that model goes awry and differs from what we actually experience.

GPT will start becoming really powerful when you give it arms and legs and let it build its own training dataset the way regular animals do (over the course of evolution and throughout their lives).


We can imagine a child born paralyzed. Couldn't they grow up to become just as smart as us?

