Misinformation and a Grok-Induced Bawaal
My Research on Misinformation Spotlighted by McGill Delve
McGill Delve — the thought leadership platform of McGill University's Desautels Faculty of Management — featured my research this week. The timing couldn't be better: Meta has just launched its Community Notes pilot in the U.S., where users themselves are now tasked with fact-checking. The article's author, Eric, did an excellent job translating my ideas into words. I'm sharing a couple of paragraphs from the piece below.
Until recently, platforms like X, Facebook, and YouTube took a hands-on approach to moderating content. But a shift is in the air: some companies have begun outsourcing content moderation to the users themselves.
But does this kind of approach work?
“It sounds like a good idea,” said Sameer Borwankar, assistant professor of Information Systems at McGill University. “But there are many open questions.”
While my research focuses on platform users as moderators, Indian Twitter users are relying on Grok!
Grok-Induced Bawaal in India
Grok AI is creating bawaal on Indian social media. Bawaal, a Hindi word, refers to a commotion, uproar, or heated controversy. With over 800 million active internet users in India, the buzz began when Twitter (now X) users started asking Grok AI about politicians, their statements, and unfulfilled promises. The AI's sharp and often cheeky responses have been particularly applauded by critics of the ruling party. In a media landscape where trust in traditional news outlets and fact-checkers is low, many users have started treating Grok's replies as a form of alternative fact-checking. While the Indian government hasn't officially responded to the AI's answers yet, it is reportedly in touch with X regarding Grok's use of foul language.
Other news:
Under the new administration, the National Institute of Standards and Technology (NIST) has directed scientists collaborating with the U.S. Artificial Intelligence Safety Institute (AISI) to eliminate terms like "AI safety," "responsible AI," and "AI fairness" from their objectives. The new focus is instead on reducing "ideological bias" and enhancing American economic competitiveness.