Hallucinations and Bias in AI: The Hidden Risks Behind the Magic
I use AI every day. It helps me summarize emails, organize my calendar, analyze M&A targets, and spot trends across our billion-dollar energy business. Sometimes, it even reviews my work—offering angles I hadn’t considered. At our company, we’ve also started using AI for predictive maintenance and in HR, helping us anticipate equipment failures and streamline talent processes. It’s like having a hyper-intelligent assistant who never sleeps and always has an opinion.
In short: I’m a big fan of AI.
But here’s the catch: sometimes that assistant makes things up. And sometimes, it carries invisible bias.
That’s the paradox of artificial intelligence. It’s brilliant, fast, and transformative. But it also hallucinates. And it reflects bias, because the data it is trained on is often biased. These aren’t just technical quirks; they’re leadership challenges. If you’re using AI to guide decisions, shape strategy, or streamline operations, you need to understand what’s under the hood.
Let’s unpack the magic—and the risks.
The Rise of AI in Executive Workflows
A few years ago, AI was a buzzword. Today, it’s an integral part of my daily routine. I use it to:
Summarize long email threads into actionable insights
Organize my calendar with context-aware prioritization
Analyze M&A targets across markets and sectors
Surface trends in energy pricing, regulation, and innovation, as well as in tax law
Review strategic documents to challenge my assumptions
Support predictive maintenance across our infrastructure
Help other departments develop their own use cases, such as HR screening, onboarding, and internal mobility
It’s like having a team of analysts, schedulers, and strategists on call 24/7.
But unlike humans, AI doesn’t know when it’s wrong. It doesn’t pause to say, “Wait, that doesn’t sound right.” It just keeps going—with confidence. That’s where hallucinations and bias come in.
Hallucinations: When AI Makes Stuff Up
Imagine asking your AI assistant to summarize a quarterly report. It returns a polished paragraph with numbers, trends, and a quote from the CEO. You check the report—and realize the quote doesn’t exist. The AI invented it.
This is a hallucination: the model generates plausible but false information. It isn’t lying; it’s predicting what should come next based on patterns in its training data. In high-stakes environments such as energy, finance, or M&A, that’s a problem. Is this only a theoretical risk? Unfortunately not.
Real-World Hallucination: The $440,000 Deloitte Report
In July 2025, the Australian government received a report it had commissioned from Deloitte to assess its welfare compliance framework. The price tag? AUD 440,000. The twist? The report included AI-generated content riddled with serious errors.
Among the issues:
Fabricated academic citations
False references
A quote wrongly attributed to a Federal Court judgment
The AI model used, Azure OpenAI GPT-4o, had filled documentation gaps with plausible but incorrect details. University of Sydney academic Dr. Christopher Rudge flagged the errors, calling them classic AI “hallucinations.” Deloitte later acknowledged its use of generative AI and agreed to partially refund the government.
The corrected version removed the fictitious references and updated the methodology disclosure. But the damage was done. The episode sparked calls for stricter AI-usage clauses in future consultancy contracts.
This wasn’t a minor slip—it was a high-profile reminder that even top-tier firms can fall prey to AI’s confident inaccuracies. And when the stakes involve public policy, welfare systems, or legal frameworks, hallucinations aren’t just embarrassing—they’re dangerous.
Bias: The Invisible Hand in AI Outputs
Bias in AI isn’t always obvious. It’s not just about race, gender, or politics. It’s about patterns—what data the model was trained on, how it interprets your prompt, and what it assumes you want.
Subtle Bias: The Leadership Style Trap
I once asked an AI tool to describe “effective leadership in M&A.” It returned a list of traits: decisiveness, assertiveness, and control. All valid. But it left out empathy, listening, and cultural sensitivity, the traits I’ve seen drive success in South America, parts of Asia, and Europe.
The bias wasn’t malicious. It reflected dominant narratives in Western business literature. But it shaped the output—and could’ve shaped strategy.
Bias in Strategic Analysis
When I use AI to analyze M&A targets, I often ask it to compare cultural fit, innovation potential, and leadership style. But I’ve learned to watch for bias. Some models overemphasize financial metrics, underplay human dynamics, or assume Western norms. That’s not just incomplete—it’s misleading.
The Leadership Challenge: Trust, Verify, and Guide
So what do we do? Stop using AI? Not a chance. The solution isn’t avoidance—it’s awareness. Here’s how I approach AI as a strategic leader:
1. Sanity Checks: Trust but Verify
AI is fast, creative, and helpful. But I never take its output at face value. I cross-check facts, validate sources, and ask follow-up questions, especially in M&A, where a single incorrect assumption can result in millions of dollars in losses.
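Part of that verification can even be mechanical. Here is a minimal sketch in Python (the function and file names are illustrative, not from any specific tool we use) of a sanity check that flags quotes or figures in an AI-generated summary that don’t actually appear in the source document:

```python
import re

def extract_quotes_and_figures(summary: str) -> list[str]:
    """Pull quoted passages and numeric figures out of an AI-generated summary."""
    quotes = re.findall(r'"([^"]{10,})"', summary)  # quoted passages of 10+ characters
    figures = re.findall(r'[$€£]?\d[\d,.]*\s?(?:%|million|billion|AUD|USD)?', summary)
    return quotes + [f.strip() for f in figures if f.strip()]

def verify_against_source(summary: str, source: str) -> list[str]:
    """Return every extracted claim that cannot be found verbatim in the source."""
    source_norm = " ".join(source.split()).lower()
    unverified = []
    for claim in extract_quotes_and_figures(summary):
        if " ".join(claim.split()).lower() not in source_norm:
            unverified.append(claim)
    return unverified

if __name__ == "__main__":
    # Hypothetical file names: the source report and the AI's summary of it.
    source = open("quarterly_report.txt", encoding="utf-8").read()
    summary = open("ai_summary.txt", encoding="utf-8").read()
    for claim in verify_against_source(summary, source):
        print(f"UNVERIFIED: {claim}")
```

It won’t catch every hallucination, but it catches the cheapest kind: quotes and numbers that simply aren’t in the document you handed the model.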
2. Be Specific in Your Communication: Frame Your Prompts Wisely
Bias often starts with the prompt. If you ask, “What’s the best way to lead a merger?” you’ll get a generic answer. If you ask, “How do empathetic leaders navigate post-merger integration in Latin America?” you’ll get nuance.
Prompt engineering isn’t just technical—it’s strategic.
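Framing can also be made explicit and repeatable. The sketch below is a hypothetical example assuming the standard OpenAI Python SDK (an Azure OpenAI deployment would need its own client setup and model name); the point is the contrast between a generic prompt and one that pins down role, region, and the dimensions that actually matter:

```python
from openai import OpenAI  # assumes the standard OpenAI SDK; Azure uses AzureOpenAI

GENERIC_PROMPT = "What's the best way to lead a merger?"

def framed_prompt(region: str, dimensions: list[str]) -> str:
    """Build a prompt that states the leadership lens, region, and evaluation criteria."""
    return (
        "How do empathetic leaders navigate post-merger integration "
        f"in {region}? Address each of the following explicitly: "
        + ", ".join(dimensions)
        + ". Flag any point where evidence is thin rather than guessing."
    )

def ask(client: OpenAI, prompt: str) -> str:
    """Send one prompt to a chat model and return the text of its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model your deployment exposes
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    client = OpenAI()  # requires an API key in the environment
    specific = framed_prompt(
        region="Latin America",
        dimensions=["cultural fit", "retention of key people", "communication cadence"],
    )
    print(ask(client, GENERIC_PROMPT))
    print(ask(client, specific))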
3. AI Supports, You Decide: Use It as a Sparring Partner, Not a Soloist
AI excels at generating ideas, creating first drafts, and recognizing patterns. But final decisions, narratives, and strategies? That’s human territory. I treat AI like a brilliant junior partner—full of ideas, but needing guidance.
Building a Smarter AI Culture: From Curiosity to Competence
When we first introduced AI at our company, it was met with curiosity—and a bit of skepticism. Could it really help us predict equipment failures before they happen? Could it support HR without losing the human touch? Could it analyze M&A targets without missing the nuance?
The answer is yes—with caveats.
We didn’t just plug in a tool and hope for the best. We built a culture around it. That means training teams to spot hallucinations, encouraging cross-functional reviews of AI-generated insights, and creating protocols that treat AI outputs as drafts, not gospel.
One of the most significant shifts occurred when I began comparing outputs from different models. It wasn’t about finding the “best” AI—it was about surfacing blind spots. When one model emphasized financial metrics and another flagged cultural misalignment, we knew we were onto something. The real insight came from the tension between perspectives.
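Making that comparison routine doesn’t require much machinery. A minimal, provider-agnostic sketch (the model wrappers are hypothetical; plug in whichever SDKs you actually use) that asks two models the same question and surfaces what each one raised that the other didn’t:

```python
from typing import Callable

AskFn = Callable[[str], str]  # any function that takes a prompt and returns the model's answer

def compare_models(models: dict[str, AskFn], question: str) -> None:
    """Ask every model the same question and print the terms only one of them raised."""
    answers = {name: ask(question) for name, ask in models.items()}
    vocab = {name: set(answer.lower().split()) for name, answer in answers.items()}
    for name, words in vocab.items():
        others = set().union(*(v for n, v in vocab.items() if n != name))
        unique = sorted(w for w in words - others if len(w) > 6)  # crude proxy for distinct themes
        print(f"--- {name} ---")
        print(answers[name][:300])
        print(f"Only {name} mentioned: {', '.join(unique[:10]) or '(nothing distinctive)'}\n")

# Usage with hypothetical wrappers; the value is in the disagreement, not a ranking.
# compare_models(
#     {"model_a": ask_model_a, "model_b": ask_model_b},
#     "Assess the cultural fit and integration risk of acquiring target X.",
# )
```

The keyword comparison is deliberately crude; in practice the printed answers side by side are what start the discussion.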
AI isn’t just a tool. It can become a sparring partner. And like any good partner, it can challenge us to think deeper, act smarter, and lead better.
Final Chapter: The Magic Is Real—So Are the Risks
AI is here to stay, whether you love it or hate it. It’s transforming how we work, think, and lead. From predictive maintenance to M&A analysis, from calendar optimization to strategic review, it’s woven into the fabric of modern leadership.
But like any powerful tool, it demands responsibility.
Hallucinations and bias aren’t bugs—they’re features of prediction-based systems. They serve as reminders that intelligence without judgment is incomplete. That speed without scrutiny is dangerous. That magic, without mastery, can mislead.
The future of AI isn’t just smarter models—it’s smarter users. Leaders who understand the risks, teams who challenge assumptions, cultures that treat AI as a partner—not a prophet.
So let’s lead with clarity. Let’s use AI boldly, wisely, and humanely.
And let’s never forget: the magic is real. But so are the risks.