Agreed. I started playing Satisfactory a few years ago, before I ever picked up Factorio. And I played A LOT of Satisfactory. Into the thousands of hours.
I'm currently a couple hundred hours into Factorio and I can safely say I like it better. It is way more polished and way more in-depth. It has definitely had much more time to mature and respond to user feedback. Also the threat level keeps it more interesting. But I do find Satisfactory much more beautiful, and I think having belts go directly into/out of buildings, without the need for inserters, is a much better design.
Factorio has "loaders", but they're not enabled in the default game. They're easy enough to add with a mod (some include them as part of a larger overhaul, some just enable them, and some even try to make them balanced).
Better thought of as an immune modulator than a suppressant. Ultimately it's a blunt tool for when we lack more specific agents. Yes, it will have risks, including infection, but the benefits may (or may not) outweigh the risks.
Hi. My Yamaha receiver no longer has net radio access, so I use an rPi with physical buttons as presets for my favorite internet radio stations. Same theory in the car: an rPi plays internet radio when you hit one of the physical FM presets.
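In case anyone wants to try the same thing, here's a minimal sketch of the button-preset idea, assuming a Pi with gpiozero and the mpv player installed. The GPIO pin numbers and stream URLs are placeholders, not the poster's actual setup:

```python
# Sketch: physical preset buttons on a Raspberry Pi that start an
# internet radio stream via mpv. Pins and URLs below are hypothetical.
import subprocess
from functools import partial
from signal import pause

from gpiozero import Button

# Hypothetical preset mapping: GPIO pin -> favourite stream URL.
PRESETS = {
    17: "http://example.com/stream-one",
    27: "http://example.com/stream-two",
    22: "http://example.com/stream-three",
}

player = None  # handle to the currently running mpv process, if any


def play(url):
    """Stop whatever is playing and start mpv on the chosen stream."""
    global player
    if player is not None:
        player.terminate()
    player = subprocess.Popen(["mpv", "--no-video", url])


# Wire each physical button to its preset stream.
buttons = []
for pin, url in PRESETS.items():
    button = Button(pin)
    button.when_pressed = partial(play, url)
    buttons.append(button)  # keep references so callbacks stay active

pause()  # block forever, waiting for button presses
```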
Can I just say what a dick move it was to do this as a "12 days of Christmas" event. I mean, to be honest, I agree with the arguments that this isn't as impressive as my initial impression suggested, but they clearly intended it to be shocking/a show of possible AGI, which is rightly scary.
It feels so insensitive to do that right before a major holiday, when the likely outcome is a lot of people feeling less secure in their career/job/life.
Thanks again, OpenAI, for showing us you don't give a shit about actual people.
Or maybe the target audience that watches 12 launch videos in the morning is genuinely excited about the new model. They intended it to be a preview of something to look forward to.
It sounds like you aren't thinking about this that deeply, then. Or at least not understanding that many smart (and financially disinterested) people who are thinking about it deeply are coming to concerning conclusions.
>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.
Almost every single one of the people OpenAI hired to work on AI safety has left the firm with similar messages. Perhaps you should at least consider the thinking of experts?
I agree, it’s just more about the intent than anything else, like boasting about your amazing new job when someone has recently been made redundant, just before Christmas.
I don't know, maybe it's a bit off topic, but at least in the cases I'm imagining, I would always hire a human rather than fully rely on AI. Let the human consult with AI if needed, but still finalize the decision or result. The human will be thinking about the problem for months or years; even passively, during a vacation, an idea will occasionally pop up. AI will think about its task for seconds; if it missed some information or whatever, it will never wake up in the middle of the night thinking "s**, I forgot about X".
I feel you. It's tough trying to think about what we can do to avert this; individuals are often powerless anyway, but in this regard it feels worse than almost anything that's come before.
Out of interest, what do you think would happen to your sense of subjective experience in a sci-fi upload? And secondly, have you watched Black Mirror? That show depicts many great ways where the end of death is just the beginning of eternal techno-suffering.
I would think it would not lead to a transfer of consciousness and instead just make a copy of you. I recommend Black Mirror; it deals with one technological change (usually) and shows how it can be dystopian (usually; there are occasional happy endings). Each episode is standalone.
I'm hoping we get to the stage fairly soon where we can make AI with something like human consciousness and be able to study and understand it better. That stuff will probably start as a very crude model and get closer as both AI and brain science advance. I figure a way to avoid dystopian problems is to experiment and play around with it so you figure out how it works. Most dystopian examples I've come across in real life have been very much driven by human behaviour rather than tech.
Yeah, maybe Black Mirror, but I'm not sure it's really my thing.
This is a you problem. Yes, there will be pain in the short term, but it will be worth it in the long term.
Many of us look forward to what a future with AGI can do to help humanity and hopefully change society for the better, mainly by achieving a post-scarcity economy.
>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.
Almost every single one of the people OpenAI hired to work on AI safety has left the firm with similar messages. Perhaps you should at least consider the thinking of experts? There is a real chance that this ends with significant good. There is also a real chance that this ends with the death of every single human being. That's never been a choice we've had to make before, and it seems like we as a species are unprepared to approach it.
Wtf is wrong with you dude? It's just another tech: some jobs will get worse, some jobs will get better. Happens every couple of decades. Stop freaking out.
This is not a very kind or humble comment. There are real experts talking about how this time is different -- as an analogy, think about how horses, for thousands of years, always had new things to do -- until one day they didn't. It's hubris to think that we're somehow so different from them.
>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.
Are you that upset that this guy chose to trust the people that OpenAI hired to talk about AI safety, on the topic of AI safety?
In which case you may like how it's done in the UK. It's technically debt but in essence works as a graduate tax. The government pays for your education with a loan. You then only pay back 9% of your income above a certain threshold. You do this until you've paid back the loan or 30-40 years have passed. So in practice this is a graduate tax.
For most taxes you expect higher earners to pay more, but this is not the case with student loans, because high earners pay off their loans quickly, whereas lower earners end up paying far more in interest.
An actual graduate tax would be far less regressive than the current system.
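For concreteness, the repayment rule described above is just a marginal levy on income over a threshold. Here's a minimal sketch; the threshold figure is illustrative, since the real one varies by loan plan and tax year:

```python
# Sketch of the UK-style repayment rule: repay 9% of income above a
# threshold. The threshold below is a placeholder, not an exact figure.
THRESHOLD = 27_295  # GBP per year, illustrative only
RATE = 0.09

def annual_repayment(income: float) -> float:
    """9% of everything earned above the threshold, never negative."""
    return max(0.0, (income - THRESHOLD) * RATE)

for income in (20_000, 30_000, 50_000, 100_000):
    print(f"£{income:,} -> £{annual_repayment(income):,.2f} per year")
# £20,000 -> £0.00, £30,000 -> £243.45,
# £50,000 -> £2,043.45, £100,000 -> £6,543.45
```

Note how someone under the threshold pays nothing while interest keeps accruing, which is exactly why low earners can end up paying more in total than high earners who clear the loan quickly.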
Could also have a minimum duration (for example 3 years) where you pay even if you go over the original loan amount.
That would mean people who get great-paying jobs right out of college would pay back more than they borrowed, but that would be justified: if the high salary comes so soon after finishing the degree, the degree likely had a big impact.
It could be argued that a similar level of caution should be applied to artificial neural networks.
I always find the gap between how we apply ethics to biology and how we apply them to technology to be interestingly wide.
Searching for the mentioned "bad one" model, it sounds like it's provided by a British company, "Brit alliance", although these may be modified robots from another company.