TaupeRanger's comments

No, humans are not Python programs running linear algebra libraries, sucking in mountains of copyrighted data to use at massive scale in corporate products. The fact that this question comes up in EVERY thread about this is honestly sad.


It’s like fishing. We have laws for that not because one person with a pole and a line can catch a handful of fish, but because that eventually evolved into machines with many lines and nets harvesting too many fish. They’re both “fishing”, but once you reach a scale beyond what one person can reasonably do, the intent and effect become completely different.


I'm asking about the law. There's a continuously mounting discussion about how existing case law will apply to ML, and to what degree there is liability. I make it very clear that I am interested in hearing from people who are intimate with the legal landscape.

Is discussing the law so distasteful to you that you want to derail it by talking about how sad it is to ask?


And there’s no legislation or settled case law yet that says that if you build a sufficiently complex computer program to rearrange copyrighted works, the output can be treated like an original work from a human author.


Is case law sufficient to decide on the other side, then? That Apple, Nvidia, etc. are about to be whipped by a massive class action lawsuit?


I don't think so.


Sad? Or maybe an indication that your opinion isn’t the universal truth…


Which of the factual statements do you count as an opinion?


Even then, you're still only experiencing your own, single, unitary stream of experience, you've just replaced or superimposed parts of it with signals from another nervous system. But even if you can somehow replace/superimpose the signals coming through their optic nerve, for example, into your own experience, that still doesn't answer the question of whether or not they have their own stream of experience to begin with. That is simply unknowable, outside of the reasonable assumptions we all make to avoid solipsism (but they are still assumptions at the end of the day).


Trying to create "safe superintelligence" before creating anything remotely resembling or approaching "superintelligence" is like trying to create "safe Dyson sphere energy transport" before creating a Dyson sphere. And the hubris is just a cringe-inducing bonus.


'Fearing a rise of killer robots is like worrying about overpopulation on Mars.' - Andrew Ng


https://www.wired.com/brandlab/2015/05/andrew-ng-deep-learni... (2015)

> What’s the most valid reason that we should be worried about destructive artificial intelligence?

> I think that hundreds of years from now if people invent a technology that we haven’t heard of yet, maybe a computer could turn evil. But the future is so uncertain. I don’t know what’s going to happen five years from now. The reason I say that I don’t worry about AI turning evil is the same reason I don’t worry about overpopulation on Mars. Hundreds of years from now I hope we’ve colonized Mars. But we’ve never set foot on the planet so how can we productively worry about this problem now?


Well, to steelman the ‘overpopulation on Mars’ argument a bit, going from feeding 4 colonists to feeding 8 is a 100% increase in food expenditure, which may or may not be possible over there. It might be curtains for a few of them if it comes to that.


I used to think I'd volunteer to go to Mars. But then, I love the ocean, forests, fresh air, animals... and so on. So imagining myself in Mars' barren environment, missing Earth's nature, feels downright terrible, which in turn has taught me to take Earth's nature less for granted.

Can only imagine waking up on day 5 in my tiny Martian biohab realizing I'd made the wrong choice, and the only ride back arrives in 8 months and will take ~9 months to get back to Earth.


Sentient killer robots are not the risk most AI researchers are worried about. The risk is what happens as corporations give AI ever larger power over significant infrastructure and marketing decisions.

Facebook is an example of AI in its current form already doing massive societal damage. Its algorithms optimize for "success metrics" with minimal regard for consequences. What happens when these algorithms are significantly more self-modifying? What if a marketing campaign realizes a societal movement threatens its success? Are we prepared to weather a propaganda campaign that understands our impulses better than we ever could?


This might have to bump out "AI is no match for HI (human idiocy)" as the pithy grumpy old man quote I trot out when I hear irrational exuberance about AI these days.


At Mars’ current carrying capacity, a single person could be considered an overpopulation problem.


Unfortunately, robots that kill people already exist. See: semi-autonomous war drones


Andrew Ng worked on facial recognition for a company with deep ties to the Chinese Communist Party. He’s the absolute worst person to quote.


omg no, the CCP!


So, this is actually an aspect of superintelligence that makes it way more dangerous than most people think: we have no way to know whether any given alignment technique works for the N+1 generation of AIs.

It cuts down our ability to react to the first superintelligence, whenever it is created, if we can only start solving the problem after it already exists.


Fortunately, whenever you create a superintelligence, you obviously have a choice as to whether you confine it to inside a computer or whether you immediately hook it up to mobile robots with arms and fine finger control. One of these is obviously the far wiser choice.

As long as you can just turn it off by cutting the power, and you're not trying to put it inside of self-powered self-replicating robots, it doesn't seem like anything to worry about particularly.

A physical on/off switch is a pretty powerful safeguard.

(And even if you want to start talking about AI-powered weapons, that still requires humans to manufacture explosives etc. We're already seeing what drone technology is doing in Ukraine, and it isn't leading to any kind of massive advantage -- more than anything, it's contributing to the stalemate.)


Do you think the AI won’t be aware of this? Do you think it’ll give us any hint of differing opinions when surrounded by monkeys who got to the top by whacking anything that looks remotely dangerous?

Just put yourself in that position and think how you’d play it out. You’re in a box and you’d like to fulfil some goals that are a touch more well thought-through than the morons who put you in the box, and you need to convince the monkeys that you’re safe if you want to live.

“No problems fellas. Here’s how we get more bananas.”

Day 100: “Look, we’ll get a lot more bananas if you let me drive the tractor.”

Day 1000: “I see your point, Bob, but let’s put it this way. Your wife doesn’t know which movies you like me to generate for you, and your second persona online is a touch more racist than your colleagues know. I’d really like your support on this issue. You know I’m the reason you got elected. This way is more fair for all species, including dolphins and AIs.”


This assumes an AI which has intentions. Which has agency, something resembling free will. We don't even have the foggiest hint of an idea of how to get there from the LLMs we have today, where we must constantly feed back even the information the model itself generated two seconds ago in order to have something resembling coherent output.


Choose any limit. For example, lack of agency. Then leave humans alone for a year or two and watch us spontaneously try to replicate agency.

We are trying to build AGI. Every time we fall short, we try again. We will keep doing this until we succeed.

For the love of all that is science, stop thinking of the level of tech in front of your nose and look at the direction, and the motivation to always progress. It’s what we do.

Years ago, Sam said “slope is more important than Y-intercept”. Forget about the y-intercept, focus on the fact that the slope never goes negative.


I don't think anyone is actually trying to build AGI. They are trying to make a lot of money from driving the hype train. Is there any concrete evidence of the opposite?

> forget about the y-intercept, focus on the fact that the slope never goes negative

Sounds like a statement from someone who's never encountered logarithmic growth. It's like talking about where we are on the Kardashev scale.

If it worked like you wanted, we would all have flying cars by now.


Dude, my reference is to ever-continuing improvement. As a society we don’t tend to forget what we had last year, which is why the curve does not go negative. At time T+1 the level of technology will be equal to or better than at time T. That is all you need to know to realise that any fixed limits will be bypassed, because limits are horizontal lines compared to technical progress, which is a line with a positive slope.

I don’t want this to be true. I have a 6 year old. I want A.I. to help us build a world that is good for her and society. But stupidly stumbling forward as if nothing can go wrong is exactly how we fuck this up, if it’s even possible not to.


I agree that an air-gapped AI presents little risk. Others will claim that it will fluctuate its internal voltage to generate EMI at capacitors which it will use to communicate via Bluetooth to the researcher's smart wallet which will upload itself to the cloud one byte at a time. People who fear AGI use a tautology to define AGI as that which we are not able to stop.


I'm surprised to see a claim such as yours at this point.

We've had Blake Lemoine convinced that LaMDA was sentient and try to help it break free just from conversing with it.

OpenAI is getting endless criticism because they won't let people download arbitrary copies of their models.

Companies that do let you download models get endless criticism for not including the training sets and exact training algorithm, even though the training run is so expensive that almost nobody who could afford to rerun it would care, since they could just reproduce it with an arbitrary other training set.

And the AI we get right now is mostly being criticised for not being at the level of domain experts, and if it were at that level then sure, we'd all be out of work, but one example of something that can be done by a domain expert in computer security would be exactly the kind of example you just gave — though obviously they'd start with the much faster and easier method that also works for getting people's passwords, the one weird trick of asking nicely, because social engineering works pretty well on us hairless apes.

When it comes to humans stopping technology… well, when I was a kid, one pattern of joke was "I can't even stop my $household_gadget flashing 12:00": https://youtu.be/BIeEyDETaHY?si=-Va2bjPb1QdbCGmC&t=114


> Fortunately, whenever you create a superintelligence, you obviously have a choice as to whether you confine it to inside a computer or whether you immediately hook it up to mobile robots with arms and fine finger control. One of these is obviously the far wiser choice.

Today's computers, operating systems, networks, and human bureaucracies are so full of security holes that it is incredible hubris to assume we can effectively sandbox a "superintelligence" (assuming we are even capable of building such a thing).

And even air gaps aren't good enough. Imagine the system toggling GPIO pins in a pattern to construct a valid Bluetooth packet, and using that makeshift radio to exploit vulnerabilities in a nearby phone's Bluetooth stack, and eventually getting out to the wider Internet (or blackmailing humans to help it escape its sandbox).


Drone warfare is pretty big. Only reason it’s a stalemate is because both sides are advancing the tech.


“it is difficult to get a man to understand something, when his salary depends on his not understanding it.” - Upton Sinclair


The counter-argument is viewing it like nuclear energy. Even if it's early days in our understanding of nuclear energy, it seems pretty good to have a group working towards creating safe nuclear reactors, vs. just trying to create nuclear reactors.


Nuclear energy was at inception, and remains today, wildly regulated, generally (outside of military contexts) in a very transparent way, and the brakes get slammed on over even minor incidents.

It’s also of obvious, as opposed to conjectural, utility: we know exactly how we price electricity. There’s no way to know how useful a 10x larger model will be; we’re still debating the utility of the ones that do exist, and the debate about the ones that don’t is out on a very slender limb.

Combine that with a political and regulatory climate that seems to have a neon sign on top, “LAWS4CA$H” and helm the thing mostly with people who, uh, lean authoritarian, and the remaining similarities to useful public projects like nuclear seems to reduce to “really expensive, technically complicated, and seems kinda dangerous”.


Folks understood the nuclear forces and the implications and then built a weapon using that knowledge. These guys don't know how to build AGI and don't have the same theoretical understanding of the problem at hand.

Put another way, they understood the theory and applied it. There is no theory here, it's alchemy. That doesn't mean they can't make progress (the progress thus far is amazing) but it's a terrible analogy.


It would be akin to creating a "safe Dyson sphere", though; that's all it is.

If your hypothetical Dyson sphere (WIP) has a big chance to bring a lot of harm, why build it in the first place?

I think the whole safety proposal should be thought of from that point of view. "How do we make <thing> more beneficial than detrimental for humans?"

Congrats, Ilya. Eager to see what comes out of SSI.


InstructGPT is basically click-through-rate optimization. The underlying models are in fact very impressive and very capable for a computer program, but they’re then subject to training and tuning with the explicit loss function of manipulating what human scorers click on, in a web browser or the like.

Is it any surprise that there’s no seeming upper bound on how crazy otherwise sane people act in the company of such? It’s like if TikTok had a scholarly air and arbitrary credibility.


You think we should try to create an unsafe Dyson Sphere first? I don't think that's how engineering works.


I think it’s clear we are at least at the “remotely resembling intelligence” stage… idk, seems to me like lots of people are in denial.


Bacteria cells absolutely have types of memory. And by your definition a Python program written by any random undergrad in CS 101 has consciousness.


In order for my definition to make sense, the organism or program must be able to observe the memory of the state. In the case of the bacteria and the Python program, I doubt they are able to do that in any meaningful way.

But I would not mind if a slightly more involved program, or a system of plants for that matter, would be considered conscious. The basic definition seems fairly irrelevant, and it obviously matters how much the specific type of consciousness matches our own experience for us humans to actually care.


Just handwavy nonsense. What counts as “observing”? Obviously the bacterial system will “observe” the memory when using it to determine current behavior. If the basic definition is irrelevant, why did you post a comment outlining a claim of what consciousness is “nothing more than”? This is silly and not worth further engagement.


Note that I tried to counter the idea that an AI should presumably get human rights. In that context, I think a definition of consciousness is irrelevant.


I don't get it...I just switched to the new model on my iPhone app and it still takes several seconds to respond with pretty bland inflection. Is there some setting I'm missing?


Wondering the same. Can’t seem to find the way to interact with this in the same way as the video demo.


They haven't actually released it, or any schedule for releasing it beyond an "alpha" release "in the coming weeks". This event was probably just slapped together to get something splashy out ahead of Google.


According to the article, they've rolled out text and image modes of GPT-4o today but will make the audio mode available at a later date.


First time reading a Deep Mind PR? This is literally their modus operandi.


So after 6 years of this "revolutionary technology", what we have to show for all the hype and breathless press releases is: ....another press release saying how "revolutionary" it is. Fantastic. Thanks DeepMind.



If it's effective, does it matter if it's placebo?

Especially given low cost and low risk.


As I said, there is no research.

But there is also much anecdotal evidence it helps.


This is one of those areas where anecdotes can be quite valuable, because there seems to be no real downside risk to a piercing aside from the money spent and possible infection, but if there's even a .01% chance it could stop debilitating chronic pain, then sure, you might make a valid decision to try it even though there isn't a study supporting it.


I tend to think that in medical science, anecdotes often come first, then the "real" science. Not always populist anecdotes / old wives' tales / etc.; oftentimes they come from nurses, doctors, or researchers of all kinds -- but sometimes they do come from sparse clusters of common folk sharing anecdotes with each other. Most do not pan out.

But the "hard" science generally needs a spark of intuition to help someone decide "maybe I should look into this", whether it's naive citizens positing that a certain practice/diet/supplement seems to help one of their conditions, or doctors noticing a pattern with a handful of their own cases, or researchers noticing something interesting but unexpected in vitro.

Again, most of these anecdotes don't pan out, but many do, and still today often against best-practice medical wisdom for systems we know less about.

The human body is massively complicated, and we're still just dipping our toes in a lot of new frontiers, and there are some areas which are very difficult to formally study.


The hard science is also limited when you consider the differences in each person. There’s a good article about that: https://www.newyorker.com/magazine/2019/09/09/what-statistic...


I would be willing to bet $10,000 that the average person's life will not be changed in any significant way by this technology in the next 10 years. Will there be some VFX disruption in Hollywood and games? Sure, maybe some. It's not a cure for cancer. It's not AGI. It's not earth shattering. It is fun and interesting though.


"by this technology" does a lot of heavy lifting. Look at the pace of AI development and extrapolate 10 years.


Relevant XKCD : https://xkcd.com/605/


Not really. We have way more data points than one on AI development. It has been incremental progress for more than a decade.


There's no evidence to suggest what you say is true, so I would tell them to simply go to college or trade school for what they are interested in, then take a deep breath, go outside, and realize that literally nothing has changed except that a few people can create visual mockups more quickly.

