Do you count on AI tools to fact-check information? Do you assume your conversations with bots like ChatGPT are private? If you answered yes to either, it’s time to stay alert. The world of AI is evolving fast, and keeping up is essential. Subscribing to this newsletter is a great place to start.
Recently, I saw a discussion in a large WhatsApp group led by a popular podcast influencer. He claimed that LLMs no longer hallucinate and can be trusted because they now cite news sources. But just last week, Grok, Elon Musk's AI assistant, got a fact-check wrong. That's a red flag, especially given that over 6.7 million users reportedly rely on Grok to verify social media content.
I’ve previously recommended tools like Perplexity for quick fact-checking. And to be fair, AI still beats your WhatsApp uncle in most cases. But it’s not foolproof. These tools are fast, not flawless.
Even more concerning is the news that ChatGPT conversations were briefly appearing in Google search results. OpenAI says the issue has been fixed, but for those who value privacy, it's yet another reminder to tread carefully. Read the news here.
In line with this theme, I recently attended a thought-provoking talk by Prof. Natalia Levina from NYU, where she discussed both the promise and the deeper risks of generative AI adoption in organizational settings. Her point? While everyone's excited about the speed and scale of AI, we often ignore the risks quietly building in the background, like over-reliance on tools that still make factual errors, or assuming that our interactions with AI are private when they may not be. Some of the risks she highlighted for organizations are as follows:
Human cognitive and knowledge-generation abilities may deteriorate as we rely more on AI to think for us
The accuracy and consistency of GenAI outputs cannot be guaranteed, even if the responses look confident and cite sources
Cultural norms—especially Western values—get amplified at scale, shaping global discourse in subtle ways
Power is increasingly concentrated in a few firms, raising concerns about control and accountability
Organizations quickly lose track of how individuals use these tools, leading to compliance and ethical blind spots