Hacker News | Engineering-MD's comments

I like the threat of factory destruction. It adds to the flavour, and makes designing the factory not just about pure optimisation.


Agreed. I started playing Satisfactory a few years ago, before I ever picked up Factorio. And I played A LOT of Satisfactory. Into the thousands of hours.

I'm currently at a couple hundred hours into Factorio and I can safely say I like it better. It is way more polished and way more in-depth. It has definitely had much more time to mature and respond to user feedback. Also the threat level keeps it more interesting. But I do find Satisfactory much more beautiful, and I think the idea of having belts go directly into/out of buildings, etc, without the need for inserters, a much better style than inserters.


Factorio has "loaders", but they're not enabled in the default game. They're easy enough to add with a mod (some include them as part of a larger mod, some enable just the loaders, some even try to make them balanced).

https://mods.factorio.com/mod/loaders-modernized or https://mods.factorio.com/mod/vanilla-loaders-hd or this one made by a Wube developer: https://mods.factorio.com/mod/aai-loaders


Depends how much you want to avoid caged hens


Food culture has (in general) moved on in the UK from that period!


Better thought of as an immune modulator than suppressant. Ultimately it’s a blunt tool when we lack more specific agents. Yes, it will have risks including infection, but the benefits may (or may not) outweigh the risks.


In your opinion which services are most worthwhile hosting on a rpi? I’m interested in it, but life always gets in the way.


Hi. My Yamaha receiver no longer has net radio access, so an rPi handles my internet radio preset physical button favorites. Same idea in the car: an rPi plays internet radio when you hit one of the physical FM presets.


Can I just say what a dick move it was to do this as a 12 days of Christmas. I mean to be honest I agree with the arguments this isn’t as impressive as my initial impression, but they clearly intended it to be shocking/a show of possible AGI, which is rightly scary.

It feels so insensitive to do that right before a major holiday, when the likely outcome is a lot of people feeling less secure in their career/job/life.

Thanks again openAI for showing us you don’t give a shit about actual people.


Or maybe the target audience that watches 12 launch videos in the morning is genuinely excited about the new model. They intended it to be a preview of something to look forward to.

What a weird way to react to this.


It sounds like you aren't thinking about this that deeply then. Or at least not understanding that many smart (and financially disinterested) people who are, are coming to concerning conclusions.

https://www.transformernews.ai/p/richard-ngo-openai-resign-s...

>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.

Almost every single one of the people OpenAI had hired to work on AI safety have left the firm with similar messages. Perhaps you should at least consider the thinking of experts?


There is no AGI, it's just marketing. This stuff is over-hyped. Enjoy your holidays, you won't lose your job ;)


I agree, it’s just more about the intent than anything else, like boasting about your amazing new job when someone has recently been made redundant, just before Christmas.


The vast majority of people who will lose jobs to AI aren’t following AGI benchmarks, or even know what AGI is short for.


That is true and a reasonable point. But looking at this thread you can see there has been this reaction from quite a few.


I don't know, maybe it's a bit off topic, but at least in the cases I'm imagining, I would always hire a human rather than fully rely on AI. Let the human consult with AI if needed, but still finalize the decision or result. The human will be thinking about the problem for months or years; even if passively during a vacation, an idea will occasionally pop up. AI will think about its task for seconds. In case it missed some information or whatever, it will never wake up in the middle of the night thinking "s**, I forgot about X".


I feel you. It's tough trying to think about what we can do to avert this; even to the extent that individuals are often powerless, in this regard it feels worse than almost anything that's come before.


Some of us actual people are actually enthusiastic about AGI. Although I'm a bit weird in being into the sci-fi upload / ending death stuff.


Out of interest, what do you think would happen to your sense of subjective experience on sci-fi upload? And secondly, have you watched black mirror? In that show they depict many great ways where the end of death is just the beginning of eternal techno suffering.


I'm not quite sure - we need to work on the details. I've not watched very much black mirror.


I would think it would not lead to a transfer of consciousness and instead just make a copy of you. I recommend black mirror, it deals with one technological change (usually) and shows how it can be dystopian (usually, there are occasional happy endings). Each episode is standalone.


I'm hoping we get to the stage fairly soon that we can make AI with something like human consciousness and be able to study and understand it better. That stuff will probably start as a very crude model and get closer as both AI and brain science advance. I figure a way to avoid dystopian problems is to experiment and play around with it so you figure how it works. Most dystopian examples I've come across in real life have been very much driven by human behaviour rather than tech.

Yeah maybe black mirror but I'm not sure it's really my thing.


Blaming OpenAI for progress is like blaming a calendar for Christmas—it’s not the timing, it’s your unwillingness to adapt


Unwillingness to adapt to the destruction of the middle class and knowledge work is pretty reasonable tbh.


Historically when tech has taken over jobs people have done ok, they've just done something else, usually something more pleasant.


Wow, you just solved the ethics of technology in a one liner. Impressive.


This is a you problem. Yes, there will be pain in the short term, but it will be worth it in the long term.

Many of us look forward to what a future with AGI can do to help humanity and hopefully change society for the better, mainly to achieve a post scarcity economy.


Surely the elites that control this fancy new technology will share the benefits with all of us _this_ time!


No it'll be like when tech took over 97% of agricultural work with 97% of us starving while all the money went to the farm elites.


How did that go for the farm workers?


I guess they did other stuff instead.


https://www.transformernews.ai/p/richard-ngo-openai-resign-s...

>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.

Almost every single one of the people OpenAI had hired to work on AI safety have left the firm with similar messages. Perhaps you should at least consider the thinking of experts? There is a real chance that this ends with significant good. There is also a real chance that this ends with the death of every single human being. That's never been a choice we've had to make before, and it seems like we as a species are unprepared to approach it.


Post scarcity seems very unlikely. Humans might be worthless, but there will still be a finite number of AIs, compute, space, resources.


How are you going to make housing, healthcare, etc. not scarce, and pay for them?


Robots supply that, controlled by democratic government.


Robots supply the land and physical labor that underlie the price of housing? Are you thinking of space colonies or something?

You need to make these expensive things nearly free if you're going to speak of post scarcity.


Robots supply the physical labour. The land shortages are largely regulatory - there's a lot of land out there or you could build higher.


I hate the deliberate fear-mongering that these companies pedal on the population to get higher valuations


Wtf is wrong with you dude? It's just another tech, some jobs will get worse some jobs will get better. Happens every couple of decades. Stop freaking out.


This is not a very kind or humble comment. There are real experts talking about how this time is different -- as an analogy, think about how horses, for thousands of years, always had new things to do -- until one day they didn't. It's hubris to think that we're somehow so different from them.

Notably, the last key AI safety researcher just left OpenAI: https://www.transformernews.ai/p/richard-ngo-openai-resign-s...

>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.

Are you that upset that this guy chose to trust the people that OpenAI hired to talk about AI safety, on the topic of AI safety?


Perhaps we will return to meeting in real life. Physical presence probably has a few decades left before robots are indistinguishable from humans


In which case you may like how it's done in the UK. It's technically debt but in essence works as a graduate tax. The government pays for your education with a loan. You then only pay back 9% of your income over a certain income threshold. You do this until you pay back the loan or 30-40 years have passed. So in practice this is a graduate tax.


For most taxes you expect higher earners to pay more, but this is not the case with student loans: high earners pay off their loans quickly, whereas lower earners end up paying far more in interest.

An actual graduate tax would be far less regressive than the current system.


Could also have a minimum duration (for example 3 years) where you pay even if you go over the original loan amount.

That would mean people who get great-paying jobs right out of college would pay back more than they borrowed, but that would be justified: a high salary so soon after finishing likely reflects the degree's impact.


Australia does something similar (it's called HECS if you want to search for details).


It could be argued that a similar level of caution should be applied to artificial neural networks. I always find the discrepancy in application of ethics between biology and technology to be an interestingly wide gulf.


On searching for the mentioned model “bad one” it sounds like it’s provided by a British company “Brit alliance”, although these may be modified robots from another company.

