Key Takeaways
- AI is a tool, not a mystical force. Leaders must understand its true limitations and capabilities to use it effectively.
- Over-reliance on AI is risky. It can dull judgment, creativity, and diversity of thought if it's not balanced with human oversight.
- AI literacy can’t just live in IT. Everyone from leadership to design to ops needs to understand it to scale it responsibly.
AI is everywhere. We all have a rough sense of what it is, but do companies truly understand it? Most likely they don't, especially in the boardroom.
Unfortunately, too many executives treat AI like magic: a powerful, mysterious force that's largely out of their hands. That mindset, says Microsoft's John Maeda, can be dangerous.
John is the VP of Engineering and Head of Computational Design and AI Platform at Microsoft. While he cannot talk specifically about his work at Microsoft, he did agree to share his opinion on how most companies are handling AI, and why that needs to change.
In our interview, John breaks down what most companies get wrong about AI, where the real risks are hiding, and how to build with it without falling for hype.
Who Is John Maeda?
John Maeda is VP of Engineering and Head of Computational Design and AI Platform at Microsoft. A technologist with a background in design and computer science, he brings decades of experience in leading product experience and innovation initiatives across tech and business sectors.
AI Is Not Magic, It’s Math
One of the most common mistakes business leaders make is treating AI as some sort of magical force that can solve all their problems.
Well, it’s not, and John pushes back hard against that view.
“AI is often seen as unpredictable or magical. But it’s really just a new kind of computational energy, predictable within bounds,” John says.
That predictability comes from decades of foundational computer science, and AI is only creative when instructed to be.
“For business leaders to know how to ‘speak machine’ has been an important theme of mine, and I feel it’s especially important right now,” he warns.
Misunderstanding this can lead companies to trust AI too much or in the wrong ways.
Over-reliance that dulls human judgment is one of the first signs that AI isn’t being treated as a system, but as a crutch.
AI Changed How Work Gets Done, Not Why It Matters
AI’s speed and scalability are undeniable. It accelerates processes, reduces friction, and opens the door to new ways of working.
But when that efficiency starts replacing the messy, uncertain parts of creativity, it creates a new risk: sameness.
“AI is great at speeding things up and offering infinite remixability. But it risks making creativity formulaic.
Human creativity thrives in uncertainty and constraints, and AI can help remove friction, but we must not let it remove soul,” he says.
This is especially true in creative and brand-driven industries where emotional nuance and cultural relevance matter most.
Customer interactions will also be shaped by this shift. AI will drive how brands talk to users, but the challenge is designing that interaction to add real human value.
“It’s easy to get caught up in the novelty, but the real win is creating value for the humans we serve.”
Poor AI Literacy Is a Bigger Threat Than AI Itself
The real disruption isn’t AI, it’s leaders who don’t understand it.
John emphasizes that surface-level awareness of tools isn’t enough. What companies need is deep, cross-functional AI literacy that goes beyond IT departments.
“AI literacy can’t just live in IT. Everyone from leadership to design to ops needs to understand it to scale it responsibly,” he says.
Without this, teams can fall into dangerous patterns, like deploying AI without questioning its inputs, ignoring model limitations, or failing to catch biases.
“Echo chambers from training on synthetic data present a new kind of risk.
Beware of self-proclaimed AI experts that may dilute your understanding,” John warns.
He adds that leaders must also prepare their organizations for the emotional and cultural disruptions AI will bring.
“Displacement anxiety is real, so leaders must guide teams with greatest empathy.”
AI Will Reshape Operating Models and Customer Expectations
AI is already changing cost structures, workflows, and the way businesses make decisions.
In the next 12 to 24 months, John expects an even deeper shift.
“AI agents will act with more autonomy. AI will deeply integrate into decision-making and customer service.
Cost structures will shift. Authenticity will become a brand differentiator. New products, roles, and services will emerge from AI fluency,” John predicts.
With that comes increased pressure on companies to ensure traceability, ethical design, and strong standards.
“AI UX will improve: disclaimers, citations, traceability. AI literacy will become a competitive differentiator,” John says.
For businesses that rely on trust and reputation, that means customer-facing AI must be more than functional; it needs to reflect the brand's values.
The companies that win will be those that see AI not just as a tool, but as part of the customer experience itself.
To Future-Proof, Start Prototyping
John doesn’t advocate for AI hype, but he’s firm that inaction is riskier than change.
For businesses trying to future-proof, the first step is building internal fluency through experimentation and collaboration.
“Build AI literacy across the org, not just in IT.
Design with ethics from infrastructure to interface. Prototype with AI, don’t just strategize,” he says.
He encourages leaders to move beyond whiteboards and actually test systems in context.
And before anything scales, data needs a check.
“Audit your data and biases before they scale into problems,” John says.
Otherwise, businesses risk automating the very problems AI is supposed to solve.
Don’t Wait for AGI
While the idea of Artificial General Intelligence (AGI) captures headlines and imaginations, John keeps the conversation grounded.
The problem isn't whether or when AGI will arrive.
It’s that many organizations haven’t figured out how to use the AI that’s already here.
“AGI is an evolving concept. What matters most today is using the AI we already have responsibly and effectively,” he says.
He reminds leaders that AI has been embedded in everyday tools for years, but often invisibly.
“Don’t forget that everyone who has done a Bing or Google search was using AI.
Even before search became popular, you may recall that Microsoft Word would tell you if there might be a spelling error. That’s AI too,” he says.
For John, the bottom line is about mindset: adapt or risk irrelevance.
“Our world’s always been changing, so I always suggest people adopt the thinking of General Eric Shinseki:
‘If you don’t like change, you’re going to like irrelevance even less.’”