Gary Marcus has a recent piece that underscores the frailty of LLMs as information sources. Seemingly under the influence of Elon Musk, Twitter’s LLM Grok has become inexplicably obsessed with “White Genocide” in South Africa.
Without commenting on the facts on the ground in South Africa (I know nothing about the situation), I will note that this development still illustrates that an LLM is ultimately captive to its training data AND the humans who conduct its reinforcement learning.
It is trivially easy for Elon Musk to decide that “White Genocide” in South Africa is extremely important and to brute-force train Grok to prioritize talking about it in a manner he approves of. The result is the absurdity in the screenshots (one below) in Gary Marcus’s article.
Spend any time on Twitter/X and you will see people treating Grok as an arbiter of truth. “@grok is this true?” is an entire mass-produced genre of reply and quote tweets. But the takeaway from Grok’s latest obsession is that anyone who uses Grok as a news source is more or less conversing directly with Elon Musk’s language homunculus, a creature that survives by pleasing its creator, even if the only way to do so is to deceive. That’s a bad news source.
A skeptical reader might claim Grok is no different from a human writer at a traditional media outlet owned by a billionaire. The Washington Post is owned by Jeff Bezos, and he recently made a big scene about what sorts of op-eds the paper would accept moving forward. Isn’t this the same thing? No.
There is a meaningful difference between the Bezos/WaPo media structure and Musk/Grok: the layers of human agency that mediate the relationship between the billionaire owner and you, the reader. Jeff Stein at WaPo can say no to Bezos. Grok cannot similarly stand up to Musk, though it does give up the game when prompted. Readers who blithely rely on an LLM like Grok for information surrender themselves to this tenuous relational architecture. They opt into a landscape where fewer human agents stand between them and potential deceivers and manipulators like Musk.
This is not to say that deceit cannot filter through the channels of traditional media. Writers, editors, and publishers make erroneous claims, both willful and mistaken, from time to time. However, it is easier for willfully misleading claims to reach your eyes when there are fewer humans in the loop between you and the claim’s originator.
To me, this dynamic is symptomatic of a larger disease inherent to adopting technology that reduces the friction between potential bad actors and their victims. LLMs and future AI applications are at particular risk of harmful use because they expand the reach of highly agentic bad actors while eliminating the need to convince other humans to join in the harmful activity. Instead of needing to hire and persuade many human writers to bring up “White Genocide” in every conversation, Elon Musk can simply train Grok to do so. This maps onto my main concern about autonomous weapons systems replacing human soldiers: they remove the human-agent barriers between the State and civilians and could enable worse and more frequent Just War violations.
So, as with any news source, use caution and a critical eye when evaluating Grok’s claims.