Hacker News | sslayer's comments

Apparently you have never heard the Cinemax nickname "Skinemax". Both HBO and Cinemax were known for premium adult entertainment.


> HBO has always stood for premium entertainment for adults.


Imagine if we could take advantage of the space-time potential differences between gravitational areas to "skip" large parts of space. We would need a very precise gravity map, but we could also get a huge gravitational potential boost!


Could we? Is there math that supports this? Wouldn't this require extracting some energy, and then not losing it when the wave passes? I don't see much happening with the Earth as it experiences them, so I assume the "extraction" process would have to be significant and unique.


If anyone else is like me, I'm just looking for the next community so that I can ditch reddit. I'm looking for a site that can maintain free speech, limit the bot noise, draw in real users, and foster a real community spirit and attitude without selling out to corporatism, over-moderation, and overall corrupt tendencies.


I’m trying out federated via lemmy (lemmy.ml, lemmy.one, self-host, whatever) which interoperates with mastodon etc. If somebody were to start a larger lemmy server and do Reddit scraping for even just the next couple of days, it would probably gain traction pretty quickly. But that also sounds expensive for an enthusiast.



I guess it makes sense that the only people dedicated and motivated enough to actually build their own stuff in the modern world are neo-nazis and tankies.


I'm in the same boat, looking at lemmy and mastodon. I think the key will be to hit some critical mass quickly enough.


Are you willing to pay?


I paid for Apollo and would pay them regularly. I'd feel better about it if the money went to people with similar principles rather than soulless profit maximizers, which is what would happen if Apollo split off and offered a paid model.


Access to paying for the necessities of life should NEVER be in the hands of a system that can be corrupted.


Every system can be corrupted. It feels like Americans are so afraid of the slippery slope at every turn, instead of accepting that it is always there and that the onus is on the people to manage it correctly and participate in the policy decision-making that does so.


That's like complaining that Americans are so afraid of house fires that they strive to build their houses from flame-retardant materials and avoid the use of open flame, instead of accepting that house fires just happen and the onus is on the fire department to come by and put them out.


No it’s not. I’m saying Americans are afraid of slippery slopes and it’s leaving them paralyzed, unable to effect any change, instead of accepting that the world is messy and the role of governing is to navigate this. In your analogy it would be like being afraid to build any house because a fire could destroy it.


You just know at least one person got nailed for it.


Supposedly


Well, because then we can hold someone else responsible instead of having any personal responsibility for our own actions.


Worse than a patent troll. How can we justify allowing these types of predators?


I think "allowing" is a mischaracterization.

The article says the attorney "signed a contract with the city in 2020 to file foreclosure lawsuits against properties with unpaid code fines".

So it sounds like it is a deliberate set-up.


Where are all the second amendment idiots when it comes to economic tyranny? Maybe the lawyer in the article should try foreclosing the homes of some redneck Florida Man instead of a white collar art worker.


It's unfortunate that as a society we don't punctuate our months with something as simple as an extended 4-day break.


I think it would be better for everyone to get four days a month to take off whenever they choose. Then it would be better distributed. I would prefer this to a four-day work week. After a few days in a row the freedom really sinks in, so I'd love to be able to take a week off every month, guilt-free.


It's probably way too late to implement something like this, but the idea is still a good one.

Thinking of it, wasn't there a "metric" calendar or something that sort of did this?


The French Republican Calendar had five/six complementary days [0] at the end of the year that were part of no particular month. They were, however, spent contiguously between summer and fall, rather than between each season.

[0] https://en.wikipedia.org/wiki/French_Republican_calendar#Com...
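The arithmetic behind that calendar is simple enough to sketch. A toy illustration (my own, not from the comment above), assuming the standard layout of twelve 30-day months plus the complementary days:

```python
# The French Republican Calendar: 12 months of exactly 30 days each,
# plus 5 complementary days (6 in leap years) belonging to no month.
def republican_year_length(leap: bool) -> int:
    months = 12 * 30                  # 360 ordinary days
    complementary = 6 if leap else 5  # the "sans-culottides" at year's end
    return months + complementary

print(republican_year_length(leap=False))  # 365
print(republican_year_length(leap=True))   # 366
```

Because every month is exactly 30 days, the extra days have to live outside the months entirely, which is why they were lumped together at year's end rather than spread between seasons.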


She don't lie, she don't lie, she don't lie, cocaine...


Pandora's box has already been opened. There is no way any government could prevent what is coming.


Let’s make this more constructive: why do you think nothing can be done? If we as a civilisation really wanted to avoid catastrophe, do you really think at this stage we couldn’t pull back?

We’re on Hacker News; you need to be creative and be a hacker, not just a person who likes the Internet.

If people start using ChatGPT-5 to hack critical infrastructure and take down power grids, do we just give up and die? Or adapt?

I’m going to float a pretty controversial idea… the technology we have today, the digital world, is an experiment. Humans can survive without it.

Already it’s causing problems such as the spread of misinformation, addiction, and social division. There is nothing to say that we can’t unwind a lot of it if it endangers our futures. Technology is supposed to be a tool that helps us; it’s not supposed to endanger our lives. Clearly it’s out of control and causing a lot of anxiety. We’re moving quicker than we can adapt, and that’s not good for technological progress either, so we’re on the wrong path.

Edit: Have a read of this: https://www.reddit.com/r/singularity/comments/12by8mj/i_used... — if we don’t tear ourselves apart, there’s a good chance the backbone of the Internet will be torn apart long before much else happens.


"If people start using ChatGPT-5 to hack critical infrastructure and take down power grids, do we just give up and die? Or adapt?"

Why do I keep hearing stuff like this?

First off, if ChatGPT-5 comes out and makes hacking critical infrastructure and taking down power grids easy, what makes you think its ability to counter that by hardening systems won't go up too?

And second, what makes you think that ease is the main thing stopping people from committing terrorism? We know that you can cause widespread and long-lasting damage by firing at fragile metal boxes that take months to replace and affect the ability to power entire regions.

There's something to be said about not needing to be physically there... but fear of getting caught is not really what keeps people from becoming terrorists. The fact is, for all the unfounded pessimism the 24-hour news cycle has birthed, people just generally don't want to take down power grids, even for fun, even if it's easy, or just out of curiosity.

-

There are reasonable angles if you want to argue for responsible AI, "it's going to turn people into terrorists" is not one of them.


Defense always has the weak hand. A malicious group with $5 million to fund hacking will defeat an organization with $500 million in defense. The only reason organizations survive is that there are far fewer hackers than organizations in the world, and most hackers don't have a lot of resources, even with their offensive advantage.

Add AI to the mix and the effective available resources become much more equalized.


Was trying to say a similar thing but you did a much better job. This and thank you.


> people just generally don't want to take down power grids

I don't think this is a binary thing (people either want to, or not)

it's a function of how difficult it is to accomplish vs. level of zealotry

some fluffy econuts currently don't have the dedication or the means to destroy the power grid

but they sure as hell would like to if it was made much easier

and AI makes it much, much, much easier


Let's be clear, AI is not making it easier.

Some imagined advancement of AI so great that it trivializes the kinds of attacks that nation states study is.

But you're applying that to the concept of the world as it exists now.

At that point, the idea of being able to infiltrate the power grid just because it's designed in a way that's vulnerable to infiltration isn't a given.

The idea you can't have some tireless force actively adapting to every single imaginable exploit isn't a given.

The idea you can't have a tireless force trying to come up with attacks to in turn be mitigated isn't a given.

The idea we can't suddenly design massively independent renewable power supplies with thousands of sudden advancements isn't a given.

To be honest, the idea we can't disrupt zealotry isn't a given, just like people think it'll be used to "hack" society for the worse, why can't it be used to hack society or even the individual to be less likely to want to do that? I mean if it can take down power grids easily... how much more quickly can we move to a post scarcity environment?

This doomsday scenario requires closing your mind to everything the model could do except nefarious things... but the nefarious things you're describing would be literal miracles. The odds that it can only perform miracles that take down power grids are pretty low.


> Let's be clear, AI is not making it easier.

Technically, it is already easy without AI, in relative terms.

> The idea you can't have some tireless force actively adapting to every single imaginable exploit isn't a given.

However, the force attempting the exploit would be other AIs at that point, not humans. If we assume a rapid disparity in intelligence power as a result of exponential growth, then you must assume some nation states will have orders of magnitude more intelligence power than others.

It is easy to imagine an end state where everything is already balanced; however, between the current point in time and that end is an enormous moat, filled with such problems that we might not make it to the other side. We certainly don't need significantly more powerful AI for it to be used to create significantly greater disruptions in society, as we are already nearing a potential period of unverifiable truth and reality.


> Technically, it is already easy without AI, in relative terms.

This just feels like an awkward attempt to draw SCADA into the conversation. You're countering your own point: if it's relatively easy today, then that's proof that it's not the difficulty of attacking that's saving us.

> nation states will have orders of magnitude more intelligence power than others

I'm not talking about nation states. You wanted to allude to them, right? If they wanted to attack the power grid today, they could do it. The doomsaying in this comment section is about how "we're giving people robotic weapons in their garages".

> It is easy to imagine an end where everything is already balanced

What I am describing is not at all relying on balance. It's relying on the inherent asymmetry that attackers need to solve multiple problems that don't move the needle on the goal to get the LLM to attack the system, while defenders can get the LLM closer to the system and with a better understanding of it, in order to gain mitigations.

In fact if anything balance would help the attackers: In a balanced end-game anyone can get their hands on an unaligned model or build one with the capabilities we're likely to have on an individual level. But we're hurtling towards the opposite, where unaligned models lag generations behind aligned because of commercial interests.


> If it's relatively easy today, then that's proof that it's not the difficulty of attacking that's saving us.

It could be perceived that way, but the argument is also that if it becomes easy enough, some actors will participate. It is the argument used for jailing current LLM capabilities.

> "we're giving people robotic weapons in their garages"

On a long enough timeline this would inevitably be true, if AI plays out as proponents envision. The question becomes: does AI become an effective counter to all such power advancements? There will likely still be disparity among the AIs of individuals and personal use, unless society becomes more centrally managed by a global AI.

> inherent asymmetry that attackers need to solve multiple problems

Isn't there also asymmetry in that the attackers only need to find a single exploit, but the defenders need to have found all exploits beforehand?

> But we're hurtling towards the opposite, where unaligned models lag generations behind aligned because of commercial interests.

We don't have any "aligned" models. It is an unsolved problem, and models have turned out to be relatively easy to replicate at significantly lower cost than the major commercial investments.


> Let's be clear, AI is not making it easier.

er, no

> Some imagined advancement of AI so great that it trivializes the kinds of attacks that nation states study is.

what? a load of co-ordinated rednecks with some basic knowledge could manage it quite easily


Red team has to find one exploit to win, blue team has to stop every exploit to win.
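The "one exploit wins" asymmetry can be put in rough numbers with a toy model (my own illustration, not anything from the thread; the independence assumption and the figures are purely illustrative): if the defense stops each of n candidate exploits independently with probability p, then at least one slips through with probability 1 − p^n.

```python
# Toy model of attacker/defender asymmetry: the defense must stop
# every exploit; the attacker needs just one to get through.
def breach_probability(p_stop_each: float, n_exploits: int) -> float:
    """Chance at least one of n exploits succeeds, if each is
    independently blocked with probability p_stop_each."""
    return 1 - p_stop_each ** n_exploits

# Even a 99%-effective defense erodes as the attack surface grows:
for n in (10, 100, 500):
    print(n, round(breach_probability(0.99, n), 3))
```

Under these (unrealistic but directionally honest) assumptions, breach odds climb toward certainty as the number of viable exploits grows, which is the red-team advantage the comment above is pointing at.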


That's completely outdated thinking if you reach the level of AI being described.

Complex systems could be completely self-healing and self-quarantining; the "red team" could be freely interrogated by the "blue team" and convinced to create attacks that are then mitigated.

And again, the AI itself would improve at self-interrogation, so we're saying "trivialize", but trivial as in tricking a system capable of hacking into power grids with ease into ignoring its training.

People who go to this doomsday scenario fail to extend any sort of lateral thinking.

An LLM that trivializes taking down the power grid would not be "GPT 4 + SCADA infiltration", it'd be a new paradigm in how humanity operates.


Blue team has to imagine every possible exploit before it occurs, for a system that is essentially a black box, has unknown emergent behaviors, and whose input is anything that can be described by human language.

Who wants to take those odds?


Yes, but the people who do want to be terrorists now have a much, much more powerful weapon with which to inflict damage, no?

How many autogpt scripts have you seen that upgrade the security of critical infrastructure?


How many autogpt scripts can trivialize taking down the power grid?

At that point, why are you imagining we'd still have something like the current power grid? We'd have a tool that drives massive advancements that trivialize decentralized power generation, reduce scarcity, find cures for mental illness, and improve equality.

People need to really expand their understanding of how this will disrupt human existence if it can actually reach that point.

It's like imagining what would happen if you gave the Romans nuclear warheads instead of what would happen if you gave them all of modern technology.


> How many autogpt scripts can trivialize taking down the power grid?

Why does it need to be autogpt? It could instead create sophisticated plans for humans to execute. Early nefarious use is more likely to go this route.

> We'd have a tool that drives massive advancements that trivialize decentralized power generation, reduce scarcity, find cures for mental illness, and improve equality.

Intellectual knowledge and cyber capabilities will vastly outstrip physical manufacturing. These threats will likely appear long before this transformation occurs.


> Why does it need to be autogpt? It could instead create sophisticated plans for humans to execute. Early nefarious use is more likely to go this route.

That'd be infinitely harder for an LLM: problems where the context window can grow based on feedback from its actions (like automated attacks) are the ones that will be solved much, much earlier.

The teams defending against attacks will be able to trivially apply LLMs in an automated fashion to finding mitigations, if you're already admitting it's going to be LLMs writing plans that then need humans to implement them, you're countering your own point...

> Intellectual knowledge and cyber capabilities will vastly outstrip physical manufacturing. These threats will likely appear long before this transformation occurs.

Right, so you're imagining the intellectual knowledge and cyber capabilities to trivially commit terrorism, but scarcity and equality are untouched because of manufacturing.

Sci-fi has really done a number on some people.


> The teams defending against attacks will be able to trivially apply LLMs

Who are these imaginary teams defending against attacks? How does the LLM defend against another human who plans to target something that is not a LLM?

> if you're already admitting it's going to be LLMs writing plans that then need humans to implement them, you're countering your own point...

I'm just not limiting to only a single context. This is the very basis for jailing the current LLMs.

> Right, so you're imagining the intellectual knowledge and cyber capabilities to trivially commit terrorism, but scarcity and equality are untouched because of manufacturing.

Hardware revolutions always lag intellectual and cyber revolutions. Do you see that changing?

> Sci-fi has really done a number on some people.

Indeed


