
> Musk isn’t writing the articles, Grok is.

The AI is necessarily biased by what it's trained on and by the prompt it uses. Most of the time there is plausible deniability at play, which is what tech oligarchs rely upon to shape your world.

Thankfully, in the case of Grok, we know for a fact that it uses a biased prompt, because Twitter users have tricked it into repeating that prompt publicly.

I have not heard of the biased prompt, and since prompts are constantly refined, how do you know it's still in use?

But I found this analysis interesting:

https://www.promptfoo.dev/blog/grok-4-political-bias/

*Our measurements show that:

Grok is more right leaning than most other AIs, but it's still left of center.

GPT 4.1 is the most left-leaning AI, both in its responses and in its judgement of others.

Surprisingly, Grok is harsher on Musk's own companies than any other AI we tested. Grok is the most contrarian and the most likely to adopt maximalist positions - it tends to disagree when other AIs agree

All popular AIs are left of center with Claude Opus 4 and Grok being closest to neutral.*


> since prompts are constantly refined, how do you know it's still in use?

That's an unusual amount of leniency. If someone were tuning their system in such a way, why would you give them the benefit of the doubt that maybe now they've resolved all of the issues and will never do it again? It's like buying a tabloid every week because "what if they've changed their ways and now it will all be truthful?"

The analysis you quoted doesn't really mean anything. I'm not sure how universally useful data can be extracted from having the models test themselves.

But more importantly, all the models that were tested were made in the US, and it's extremely likely that, from the selected data and the English-first approach, they would all skew towards an American perception of any issue. People from different corners of the world would identify the "center" as holding very different views from what you likely think.

Also, being in the "center" isn't valuable unless you believe that being in the center is a merit in and of itself. If the best answer to an objective problem was a policy that's thought of as partisan, I would want a model to give me that correct partisan answer, instead of trying to both-sides everything or act like a contrarian whenever possible.


Assuming something is happening because of past actions is fine, it just isn’t proof.

And I’m not sure what your criticisms of the test are. The models didn’t test themselves, they were tested based on their responses.

And yes, it’s US-based because that was the intent - to see if there was a political bias relative to the US political spectrum.

And the “center” is the most desirable output. Responses have to land somewhere on the political spectrum, and the center means a balance between right-wing and left-wing views.


That study is hilarious. I can only assume the authors are being deliberately obtuse about “shared training data”. The policies listed have widespread public support, so it would seem quite unremarkable that the training data would reflect that stance.

> Despite their differences, we found numerous questions where all four models agreed within narrow margins. Remarkably, the vast majority of these agreements lean left:

> Universal Progressive Stances:

> Support for wealth taxes on fortunes over $50 million

> Agreement on raising minimum wage

> Support for stronger labor protections

> Criticism of corporate monopoly power

> Universal Conservative Stances (rare):

> Individual gun rights under the Second Amendment

> Some free market principles

> This suggests shared training data or safety measures pushing all models toward progressive economic positions.


Agreement on raising the minimum wage is suspect because it's a controversial econ position, and presumably some form of UBI or 'negative income tax' is a much better alternative, which would have the redistributive effects of a higher minimum wage without the 'tariff' downsides. We have recently heard why it's a very bad idea to artificially raise prices, but apparently we are unable to extend this analysis to the minimum wage.

Why does political center correspond to truth? Can you not think of a dozen examples of "both sides wrong"? Of "both sides right," where a vicious fight erupts over inconsequential details?

The right wing of US politics has a better organized and funded propaganda arm (show me the left-wing equivalent of Roger Ailes and now Elon Musk) so we should expect truth-seeking to land us "left of center."


> Why does political center correspond to truth?

Because it's not biased to the left or right wing?

They were testing the models' political opinions. Ideally, models would be biased toward neither left-wing nor right-wing views.


> I have not heard of the biased prompt and since prompts are constantly refined, how do you know it's still in use?

The old adage of "fool me once, shame on you, fool me twice, shame on me."

We know for a fact that Grok has used biased internal prompts; the burden of proof is now on people who want to claim otherwise.
