In 2026, AI Gets Real: Here's What That Actually Means for Managers
- AIMS

- Jan 30
I'm finally coming to terms with setting the work rhythm for 2026. Still recovering from the backlog of 2025, with the New Year holidays having flown by, I'm now asking myself a simple question:
What should be my writing theme for 2026?

Of course, there’s no shortage of AI noise. Every few weeks, something “breaks the internet”—Claude, Copilot, AI co-workers, agents doing everything. The hype cycle is relentless.
But what really caught my attention recently was a TechCrunch article, "In 2026, AI will move from hype to pragmatism." It was written largely from an enterprise lens, but one thought kept bothering me.
If enterprises are being told to move beyond hype, then what does that actually mean for managers?
The answer, surprisingly, felt very simple to me—because I’ve been training myself on it for a while now: We are using probabilistic tools with a deterministic mindset.
And that, in my view, is the root of many AI disappointments today.
Let me explain why. While working closely with managers in large enterprises, I’ve noticed a strange new problem emerging. After investing heavily in ChatGPT, GenAI tools, and internal AI platforms, teams are coming back saying:
“ChatGPT gave a wrong answer.”
“The agent isn’t performing.”
During one such conversation, I blurted out something almost instinctively:
"Only if your folks understand how AI works,
will they be able to work with AI"
The manager looked confused. Honestly, I was momentarily confused too. But the more I reflected on it, the more I realised—that sentence holds the truth of the problem.
Initially, I thought this was just my experience. Maybe it was the kind of teams I was working with.
But when I started digging a little deeper, I realised this isn’t anecdotal.
In a recent Gartner CIO survey, nearly 60% of enterprise leaders flagged reasoning errors and hallucinations as a top risk when scaling generative AI beyond pilots.
What teams call “ChatGPT gave a wrong answer” is what enterprises are formally calling hallucinations or reasoning errors. Different language. Same problem.
And when these errors start showing up in real work — decks, reports, recommendations — trust erodes quickly. Managers don’t have the luxury to experiment endlessly. So they fall back to what feels safe. Old processes. Old tools. Old ways of working.
My Theme for 2026
This was the moment it clicked, and something changed. If I look ahead to 2026, this is what I want to focus on: helping managers understand how AI works at a fundamental level, because beyond that, they are smart enough to figure out how to work with it.
The problem isn’t intelligence.
The problem is expectation.
Unless we understand how these systems actually work, we’ll keep expecting the wrong things from them. And that’s exactly where most of the frustration around AI is coming from today.
This article is the first in a short series on understanding how AI works. I’ll keep adding to it over time, building from fundamentals to application.
This first piece focuses on what I believe is the core issue behind most AI disappointments today.
Core Issue: Deterministic Thinking with a Probabilistic Tool
AI is probabilistic in nature, but we are using it with a deterministic mindset.
For decades, we’ve worked with deterministic systems: Excel, databases, business rules, search engines. Same input. Same output.
So when people start using ChatGPT, they subconsciously expect the same behaviour.
They use it like Google Search.
They may ask:
“Give me the best answer”
“Tell me the correct strategy”
“What is the right decision?”
And then they get confused when:
The answer feels generic
The answer changes slightly each time
The answer sounds confident but doesn’t quite fit
So a lot of people conclude:
“AI is unreliable.”
But the problem isn’t AI.
The problem is using a deterministic approach to solve problems with a probabilistic tool.
ChatGPT is not retrieving facts like Google.
It is generating likely responses based on patterns.
That distinction changes everything.
Where This Goes Wrong
A manager asks ChatGPT:
“What is the best go-to-market strategy for my product?”
They are expecting a single, correct answer.
But there is no “best” strategy without deep context. So the model fills in the gaps probabilistically. It gives something that sounds right. Used this way, AI disappoints.
Another example:
“Should we enter this market or not?”
That’s a decision-level question. ChatGPT can generate arguments, but it cannot own risk, internal politics, or consequences. Treating its output as a deterministic answer is where mistakes happen.
Or this one — which I hear constantly:
A manager asks ChatGPT on Monday:
"Summarize the key risks in expanding to the European market."
Gets a decent answer. Saves it.
Asks the exact same question on Wednesday. Gets a slightly different answer. Now they're confused. "Which one is correct? How can I trust this if it keeps changing?"
But here's the thing — both answers are correct.
Because the model isn't retrieving a fixed fact. It's generating a likely response based on patterns. The underlying probabilities shift slightly each time, especially if the prompt is open-ended.
This isn't a bug. It's how the system works.
But if you're expecting deterministic consistency — same input, same output — this feels broken. And that's exactly the friction point.
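To make that concrete, here is a minimal, purely conceptual sketch in Python. The words and numbers are invented, and real models are vastly more complex, but the core idea is the same: instead of looking up one stored answer, the system samples from a probability distribution over plausible continuations, so the same question can surface different, equally valid answers.

```python
import random

# Conceptual sketch only (not how any real model is built): a language model
# assigns probabilities to possible continuations and *samples* from them,
# rather than retrieving a single stored answer.
next_word_probs = {
    "regulatory": 0.35,
    "currency":   0.25,
    "logistics":  0.20,
    "cultural":   0.20,
}

def generate(probs):
    """Pick one continuation, weighted by its probability."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

# The "same question" asked on different days can surface different,
# equally reasonable top risks.
for day in ["Monday", "Wednesday", "Friday"]:
    print(day, "->", generate(next_word_probs), "risk")
```

Run it a few times and the answer shifts, even though nothing about the question changed. That, in miniature, is the Monday-versus-Wednesday experience.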
So what does the right approach actually look like?
Now flip the approach.
Instead of asking AI to decide, use it to expand judgment.
Ask:
“What are three possible outcomes if we enter this market?”
“What assumptions would make this strategy fail?”
“Give me a conservative, aggressive, and contrarian view of this problem”
Here, probability becomes an advantage. AI helps you explore the space of possibilities.
You stay responsible for judgment. And decisions still sit with you.
That’s the right collaboration model.
And this also explains something many people quietly wonder about.
Why does someone else use ChatGPT and get a much better answer than you?
It’s usually not because they know some magical prompts. And it’s not because ChatGPT suddenly became smarter for them. It’s because of how they are using it.
They’re not taking ChatGPT’s output as a decision. They’re using it to support their judgment.
They give more context. They clarify what they’re actually trying to solve. They iterate instead of accepting the first answer.
When you give ChatGPT better context and clearer prompts, you’re not “controlling” it — you’re helping it identify the right patterns to work from.
And because the system is probabilistic, better context shifts the probabilities in your favor.
Same tool. Different approach. Very different outcomes.
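A rough way to picture this, again with invented numbers rather than anything measured from a real system: a vague prompt leaves probability spread across generic and off-target responses, while a context-rich prompt concentrates it on the kind of answer you actually need.

```python
import random
from collections import Counter

# Conceptual sketch with made-up numbers: adding context doesn't "control"
# the model, it concentrates probability on the patterns you care about.
vague_prompt = {"generic advice": 0.50, "relevant insight": 0.20, "off-target": 0.30}
context_rich = {"generic advice": 0.10, "relevant insight": 0.80, "off-target": 0.10}

def sample_many(probs, n=1000):
    """Simulate asking the same kind of question many times."""
    outcomes, weights = zip(*probs.items())
    return Counter(random.choices(outcomes, weights=weights, k=n))

print("Vague prompt:       ", sample_many(vague_prompt))
print("Context-rich prompt:", sample_many(context_rich))
```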
So next time you feel frustrated with ChatGPT, pause for a moment and ask yourself:
Am I applying a deterministic approach, or a probabilistic one?
That question sits at the core of the AI mindset.
What's Coming Next
This shift — from deterministic thinking to probabilistic thinking — is just the starting point. Because once you understand how AI works, the next question becomes:
What should you actually use it for? Here's where things get interesting.
AI is getting exceptionally good at prediction. But prediction is not the same as judgment. And judgment is not the same as decision-making.
Most people confuse these three. But in the AI era, knowing the difference is what separates managers who thrive from those who feel stuck.
In the next piece, I'll break down why the art of decision-making is becoming the defining skill of the AI era — and what that means for how managers need to think about their work.
-----
This is part of my ongoing work helping managers build an AI mindset—practical ways to think, decide, and work in the age of AI. It's not about mastering tools, but understanding how to work intelligently with intelligence. Looking forward to your feedback.


