In addition to Unhook, I also use SponsorBlock [1] and Return YouTube Dislike [2].
While SponsorBlock is something that some people might not want/need, I find Return YouTube Dislike to be particularly useful because the like/dislike ratio is a valuable data point for a video I'm about to watch even if it's just an approximate/predicted value.
Sure, but the problem is S and X sound very similar when spoken, causing more confusion. Try clarifying which one you are talking about in a loud room at a conference.
This is true and we do not talk about it enough. Moreover, Capitalism is itself an unaligned AI, and understanding it through that lens clarifies a great deal.
People experience existential terror from AI because it feels like massive, pervasive, implacable forces that we can't understand or control, with the potential to do great harm to our personal lives and to larger social and political systems, where we have zero power to stop it or avoid it or redirect it. Forces that benefit a few at the expense of the many.
What many of us are actually experiencing is existential terror about capitalism itself, but we don't have the conceptual framework or vocabulary to describe it that way.
It's a cognitive shortcut to look for a definable villain to blame for our fear, and historically that's taken the form of antisemitism, anti-migrant, anti-homeless, even ironically anti-communist, and we see similar corrupted forms of blame in antivax and anti-globalist conspiracy thinking, from both the left and the right.
While there are genuine x-risk hazards from AI, it seems like a lot of the current fear is really a corrupted and misplaced fear of having zero control over the foreboding and implacable forces of capitalism itself.
Probably not even that specific, more like an underlying fear that 8 billion people interacting in a complex system will forever be beyond the human capacity to grasp.
So, this has happened multiple times. Its best-case example is eugenics, where "intellectuals" believe they can determine what the best traits are in a complex system and prune society to achieve some perfect outcome.
The problem, of course, is that the system is complex and filled with hidden variables, and humans will tend to focus entirely on phenotypes, which are the easiest to observe.
These models will do the same human-biased selection and gravitate to a substantially vapid mean.
Well, we do have a conceptual framework and vocabulary for massive, pervasive and implacable forces beyond our understanding - it's the framework and vocabulary of religion and the occult. It has actually been used to describe capitalism essentially since capitalism itself, and it's been used explicitly as a framework to analyze it at least since Deleuze. Arguably, since Marx: as far as I'm aware, he was the first to personify capital as an actor in and of itself.
tl;dr: Fear of the unknown. The problem is that more and more people don't know anything about anything, and so are prone to rejecting and retaliating against what they don't understand, without making any effort to understand before forming an emotionally-based opinion.
The same way it already works in any country where workers can't afford to buy the products today, so I imagine it would function like those countries that most resemble the stereotypical African developing country.
So I imagine that the result would be that industry devolved into the manufacturing of luxury products, in the style of the top-class products of ancient Rome.
The machines can buy the products. We already have HFT, which obviously has little to do with actual products people are buying or selling. Just number go up/down.
The Anthropic folk were concerned enough that they left, and are indeed continuing to work on it [AI safety].
Now, we have the co-leads of the super-alignment/safety team leaving too.
Certainly not a good look for OpenAI.
There really doesn't seem to be much of a mission left at OpenAI - they have a CEO who gives off used-car-salesman vibes, who recently mentioned considering allowing their AI to generate porn, and who is now releasing a flirty AI girlfriend as his gift to humanity.
On the other hand, the Anthropic founders' reasons for leaving also gave them an angle to start a successful new company, now worth 9+ figures. Given that, I'm not sure I'll take their concerns about the state of OpenAI at face value.
I've watched all the Dario/Daniela interviews I can find, and I guess that's a fair way of putting it. It seems they genuinely felt (& were) constrained at OpenAI in being able to follow a safety-first agenda, and have articulated it as being "the SF way" to start a new company when you have a new idea (so maybe more cultural than looking for an angle), as well as being able to follow a dream of working together.
From what we've seen of OpenAI, it does seem that any dissenting opinions will be bulldozed by Altman's product and money-making focus. The cult-like "me too" tweets that huge swathes of the staff periodically send seem highly indicative of the culture.
Anthropic does seem genuine about their safety-first agenda, and strategy to lead by example, and have been successful in getting others to follow their safe scaling principles, AI constitution (cf OpenAI's new "Model Spec") and open research into safety/interpretability. This emphasis on safe and steerable models seems to have given them an advantage in corporate adoption.
If your ostensible purpose is being sidelined by decision makers, trying to fight back is often a good option, but sometimes you fail. Admitting failure and focusing on other approaches is the right choice at that point.
One could argue that at this point OpenAI is being Embraced and Extended by Microsoft, and is unlikely to have much autonomy or groundbreaking impact one way or another.
> This kinda confirms, unfortunately and sadly, that ChatGPT answers are probably just as good as human answers.
These people are paid to follow scripts and strict protocols. At best, this may suggest that ChatGPT answers are as good as a call center representative's answers.
I have over ten years of experience with Unreal, so the cost of switching engines would be very high; any alternative would have to offer something irresistible in return.
Regarding your second question: Unreal effectively has no licensing fee for most developers. The engine is free, you get access to the full source code, and you pay a 5% royalty only after you have made $1 million gross revenue.
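To make that royalty structure concrete, here's a minimal sketch of the math, assuming a flat 5% on gross revenue above the $1M threshold (this simplifies things: it ignores per-product accounting, custom license deals, and any store-specific waivers):

```python
def unreal_royalty(gross_revenue: float) -> float:
    """Rough Unreal royalty estimate: 5% of gross revenue above $1M.

    Simplified assumption: a single flat threshold and rate,
    ignoring custom licenses and per-product details.
    """
    THRESHOLD = 1_000_000
    RATE = 0.05
    return RATE * max(0.0, gross_revenue - THRESHOLD)

# A $3M title would owe roughly 5% of the $2M above the threshold.
print(unreal_royalty(3_000_000))  # 100000.0
print(unreal_royalty(800_000))    # 0.0 - under the threshold, no royalty
```

So a title earning under $1M pays nothing at all, which is why the engine is effectively free for most developers.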
You are right, AI is nothing but a tool akin to a pen or a brush.
If you draw Mickey Mouse with a pencil and you publish (and sell) the drawing who is getting the blame? Is the pencil infringing the copyright? No, it's you.
Same with AI. There is nothing wrong with using copyrighted works to train an algorithm, but if you generate an image and it contains copyrighted materials, you can get sued.
Publicly available doesn't mean you have a license to do whatever you like with the image. If I download an image and re-upload it to my own art station or sell prints of it, that is something I can physically do because the image is public, but I'm absolutely violating copyright.
That's not an unauthorized copy, it's unauthorized distribution. By the same metric, me seeing the image and copying it by hand is also an unauthorized copy (or reproduction, if you will).
Then I don't really understand your original reply. Simply copying a publicly available image doesn't infringe anything (unless it was supposed to be private/secret). Doing things with that image in private still doesn't constitute infringement. Distribution does, but that wasn't the topic at hand.
The most basic right protected by copyright is the right to make copies.
Merely making a copy can definitely be infringement. "Copies" made in the computing context even simply between disk and RAM have been held to be infringement in some cases.
Fair use is the big question mark here, as it acts to allow various kinds of harmless/acceptable/desirable copying. For AI, it's particularly relevant that there's a factor of the "transformative" nature of a use that weighs in favor of fair use.
The answer is "it depends". Distribution is not a hard requirement for copyright violation. It can significantly impact monetary judgements.
That said, there is also an inherent right to copy material that is published online in certain circumstances. Indeed, the physical act of displaying an image in a browser involves making copies of that image in cache, memory, etc.
I like the UI design of Threads and I’m trying to enjoy the app, but it keeps flooding my feed with desperate, attention-seeking women in suggestive clothing and positions. I never have this problem on X.
I don't use Instagram often, but I recently met a girl IRL who wanted to connect. I opened the search in the app and it instantly recommended exclusively girls showing off their bodies, even though I otherwise only follow friends. It was quite embarrassing.
It's showing me a lot of luxury handbag influencers that obviously target a very different demographic. I'm absolutely sure I never looked for anything like that (I follow mostly science/scientist accounts and comics, and I'm male). I also keep getting ads for commercial pilot insurance, and I'm definitely not in that demographic either. Instagram's recommendation model seems to work well if you fit a common categorization (women interested in luxury handbags, perhaps), and to break down utterly when you don't. I'm a software engineer; I don't need specialty carts to move Boeing aircraft engines.
No, it just shows what your "demographic" mostly wants. And unfortunately for me as a gay man, the average straight man thirsts after scantily clad chicks on insta etc, even if they have to demean and debase themselves for a crumb of attention.
Google's getting wiser tho, I went from getting insta thirst ads to getting anime weaboo girl ads, to getting half naked dragon girl ads, until finally I recently got advertised some gay furry stuff. Nice.
Ahaha, well I'm pretty sure it was for some heavily monetised crapware with art that you can pretty much find on e6 anyways.
That said, chatgpt is pretty decent at role playing, if you know how to trick it into it (still actually not that hard). It's an interesting topic though; some people are resistant to playing that sort of "game" with it, even though those people are fine with in-game romances such as those in Mass Effect and Baldur's Gate.
Thirstbait is the default for these companies. I don't access YouTube with any Google cookies, so their profiling is suppressed. They periodically load up their front page with T&A content despite my having no history of clicking on such content.
Confirmed. I tried tiktok but was flooded with 3/4 naked teenage girls before I could do anything on the platform. It's like they ignored my onboarding preferences completely and just said here...she so thirsty.
This was happening to me too (Instagram but same thing); it’s pretty easy to fix.
Ultimately it is because I wasn’t using Instagram actively enough so the algorithm defaulted to Late 20s Male profiling.
Spend 2-5 minutes selecting the posts you don’t want to see and training the algorithm.
Instagram: Press and hold on items on the search tab and select “not interested”. To give it your actual interests, search for a few things like Golf, Cars, etc.
Threads: Select the three dots on posts and choose “Hide”. Same as Instagram, search for a few generic topics you prefer.
Massively better results as soon as the next day.
I want Threads to work so badly compared to Twitter that I've happily just written a guide to train Meta's algorithm. That's how badly Twitter has dropped the ball.
Same here. The algorithm seems to be really bad at recommending relevant content on the 'for you' style timeline, and barely seems to take my likes and follows into account at all...
I didn’t notice any difference until the last month or so, and now it’s not ordinary users posting thirst traps, but obvious bots “looking for love” liking and retweeting my posts seconds after I post them. It’s a different problem than Instagram, but it’s still pretty bizarre considering Musk’s complaints that pre-acquisition Twitter had too many bots.
This all sounds true, but on Twitter I never had the issue before, so it seems like more than a coincidence.
In fact, I used to be very surprised to read that Twitter had been previously widely used for porn adverts. I was kind of deluded into not even thinking about it. But scratch the surface and it was there. But it never appeared in my daily use.
Prior to the takeover I had been oblivious to it. After the takeover, the surface was scratched and what had previously been "underground" really made itself known in my twitter feed.
That isn't what made me leave the platform, but just my observation at the time.
This is great! Some of the listed Oregon airfields aren't too far from me, so I might have to pay them a visit.
I'm an avid researcher of abandoned locations and their histories. There is nothing quite like immersing yourself in the past, especially if you can stand in the location for yourself. Imagine all the paths that intersected there; the lives that were changed. Even the smallest trace you leave behind may be rediscovered by someone in the future, and they may wonder what your life was like. They may even be connected to you in some way. Perhaps this trace is what leads them to discover that connection.
I understand and even applaud their decision to grow carefully and deliberately. At the same time, I have been on the waitlist for a year or so and I am eager to try it out.
Of all the Twitter competitors, I find Bluesky the most intriguing. I love the idea of managing my own feeds and connecting my own domain. There is a sense of control and privacy that I find appealing. It remains to be seen if Bluesky can deliver on its vision, but I look forward to experiencing it for myself.