Generative AI is like being handed an industrial-grade power tool. It hums with possibility, gleams with promise and seems to whisper: build faster, build bigger, build smarter. But it also comes with sharp edges, unpredictable quirks and warning labels most people skip past.
So should I be worried? That’s the wrong question.
With generative AI, both extremes are dangerous. Too little worry and companies rush in blindly - approving pilots they don’t understand, exposing sensitive data or chasing buzzwords with no strategy. Too much worry and they freeze, leaving the tool in its box while competitors learn how to use it.
What’s needed isn’t panic or passivity, but constructive criticism built on a clear question: how does AI align with the business strategy, and how could it create value for the firm? Answering that should be the leader’s primary focus, and to make the right choices, leaders must ask:
What are we genuinely good at and where are our gaps?
Where can AI materially strengthen our growth engine by adding value to business models in ways that translate into sustainable returns for shareholders?
Should weaknesses be addressed through AI, workforce development or inorganic moves such as acquisitions or partnerships?
Which activities are mission-critical for our people to own and which can be safely automated without eroding trust or value?
Is our AI journey best driven organically (capability build, internal projects) or inorganically (M&A, acquihires, partnerships with AI-native firms)?
The Reckless Leader
Some leaders are dazzled. They have seen a demo, heard a consultant’s pitch or read that AI is “the next electricity.” So they nod along to initiatives without knowing what’s inside the box. A pilot is approved here, a vendor contract signed there. The tool is switched on but no one’s wearing safety goggles.
The consequences are real: staff pasting sensitive data into public models, customer-facing bots hallucinating offensive responses, executives announcing an “AI strategy” without being able to explain its ROI. It’s like authorising a new financial product without understanding how it will appear on the balance sheet. Recklessness doesn’t just waste money; it damages trust with regulators, investors, customers and employees alike.

The Frozen Leader
At the other extreme are the leaders who dismiss AI as hype. They delay the conversation because it feels technical or unfamiliar. The instinct is understandable - why risk brand damage over something untested?
But freezing carries its own risks. Competitors are already learning how to wield the tool, slowly and imperfectly. Meanwhile, AI-native startups are rewriting business models overnight, unencumbered by legacy systems or slow governance. They don’t just compete on efficiency - they are resetting customer expectations entirely. Talent gravitates to the firms that are experimenting, not the ones stuck in 2019. Customers notice when rivals offer faster, smarter, AI-assisted services while your processes look sluggish by comparison.
Caution that hardens into denial can leave you stranded.

The Balance Point: Constructive Critic
So where should leaders stand? In the middle, where criticism sharpens judgment.
That starts with remembering AI is a tool, not a replacement. Unlike a spreadsheet, it’s probabilistic: the same input may yield different outputs. It can draft fluent text one moment and invent facts the next. Used well, it can perform repetitive tasks - summarising reports, checking writing, drafting code - freeing people for higher-value work. Used blindly, it spreads errors, bias or overconfidence at scale.
Constructive criticism means leaning in - getting exposed to the technology through hands-on demos, not glossy vendor decks. Leaders don’t need to understand transformer architectures, but they should see how models hallucinate, how easily prompts can be manipulated, and how quickly costs escalate.
They also don’t need to buy into extremes. What matters is staying grounded in your own scenarios. Where could AI safely augment what your people already do? Where would it introduce unacceptable risks? This technology demands tailoring: it isn’t an off-the-shelf miracle; it must be adapted to your strategy, your data and your customers.

There’s another layer: the implications for work itself. Which tasks will be automated? Which skills will rise in value? How do we retrain and support people through the transition? In finance, AI may draft compliance reports but cannot sign them. In law, it can summarise contracts but not advise clients. The line between drafting and decision-making is critical, and the responsibility to anticipate these shifts - not just manage the risks - sits firmly at the top.
In the end, it’s about ensuring innovation happens safely, credibly and in a way that builds trust rather than undermining it. AI isn’t going away. The only real choice is whether leaders approach it with reckless enthusiasm, paralysing fear or constructive criticism. One wastes money, one wastes time. The third creates space for strategy, resilience and competitive edge.
Maybe the real worry isn’t the tool itself. It’s whether you have the courage to pick it up, learn it and decide how it should and shouldn’t be used.
