
Should Deloitte Be Blamed for the AI Debacle? Let’s Rethink.

  • Writer: AIMS
  • Oct 14
  • 4 min read

Updated: Oct 15

I was recently chatting with a group of friends about Deloitte refunding part of a $440,000 fee after submitting a report to the Australian government that contained fabricated quotes and made-up references.


One group said, “See? AI isn’t reliable.” The other said, “That’s not AI’s fault — Deloitte misused it.”

And honestly, both reactions had some merit. But somewhere deep down, after the conversation, I felt that neither really explained what happened.



I tried to put myself in their shoes. If I were on that project — as a consultant at Deloitte or anywhere — I can imagine how it could have been: tight deadlines, client pressure, and a push to “use AI” somewhere in the process to justify it to management. In all that noise, I probably would have thought, “Let’s use AI to save time.” But with little understanding of how AI actually works, the debacle was almost inevitable.


The real issue here is that people are turning to courses teaching “20 tools in 20 days” or following influencers selling “magic prompts.” And the reason so many flock to these courses is simple — there’s an underlying feeling that they need to catch up. That AI is moving so fast, they must learn faster. But in the process, they’re skipping the basics — the foundational understanding of the probabilistic nature of AI.


The Real Gap: Deterministic Thinking in a Probabilistic World


That’s why this Deloitte story matters. A lot of people are making the mistake of applying deterministic thinking in a probabilistic world.


For decades, we’ve worked with tools that gave us certainty:

You ran a query — you got the right answer or an error message.

You built a formula — it calculated correctly or it broke.

That’s deterministic thinking. And it served us well.


But AI isn’t deterministic. It’s probabilistic. AI doesn’t “know” things the way you and I do.


Which means:

  • It gives you the most likely answer, not the definitely correct one.

  • It fills gaps based on patterns, not facts.

  • It sounds confident even when it’s guessing.


And if you don’t understand that — if you keep expecting AI to work like Excel or a database — you’ll keep running into expensive mistakes.
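The difference between the two mindsets can be sketched in a few lines of Python. This is a toy illustration, not a real model: the dictionary stands in for a deterministic database lookup, and the weighted sample stands in for a language model choosing among plausible continuations. All the names and numbers here are made up for the example.

```python
import random

# Deterministic: a database-style lookup always returns the same answer
# for the same input, or fails loudly with an error.
CAPITALS = {"Australia": "Canberra"}

def lookup(country):
    return CAPITALS[country]  # same input, same output — or a KeyError

# Probabilistic (toy sketch): a model samples from a distribution over
# plausible answers, so the same question can yield different text,
# delivered with equal confidence each time.
PLAUSIBLE_ANSWERS = [("Canberra", 0.7), ("Sydney", 0.2), ("Melbourne", 0.1)]

def sample_answer():
    answers, weights = zip(*PLAUSIBLE_ANSWERS)
    return random.choices(answers, weights=weights, k=1)[0]

print(lookup("Australia"))                   # always "Canberra"
print([sample_answer() for _ in range(5)])   # may mix correct and wrong answers
```

The point of the sketch: the deterministic function is either right or broken, while the probabilistic one is usually right — and never tells you which runs were the wrong ones.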


Like Deloitte. Like the lawyers who submitted briefs citing fake case law. Like every manager who trusted AI output without understanding how it was generated.


What the Deloitte Team Could Have Done


If the team had understood how AI works — that it’s probabilistic, not deterministic — they would have known to:


  1. Verify citations before publishing. (AI can invent sources that look real.)

  2. Ask AI to explain its reasoning. (“Where did these citations come from?”)

  3. Build validation checkpoints. (Never trust the first output — especially on high-stakes work.)
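A validation checkpoint doesn’t have to be elaborate. Here is a minimal sketch of the third step, assuming you have some authoritative register to check against — a library catalogue, court records, a DOI resolver. `KNOWN_SOURCES`, `flag_unverified`, and the citation strings are all hypothetical names for the example; the one invented entry mimics the kind of plausible-looking source AI can fabricate.

```python
# Minimal sketch of a citation checkpoint for AI-drafted text.
# KNOWN_SOURCES stands in for whatever authoritative register
# you would actually check before publishing.
KNOWN_SOURCES = {
    "Smith (2021), Welfare Compliance Review",
    "Dept. of Social Services Annual Report 2022",
}

def flag_unverified(citations):
    """Return the citations a human must verify before publishing."""
    return [c for c in citations if c not in KNOWN_SOURCES]

draft_citations = [
    "Smith (2021), Welfare Compliance Review",
    "Jones (2019), Algorithmic Fairness in Practice",  # plausible, but invented
]
print(flag_unverified(draft_citations))
```

The design choice that matters: the checkpoint never approves anything on its own — it only narrows the list a human has to verify, which is exactly the role a reviewer should play over probabilistic output.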


Not because they needed to be AI experts — but because they needed to understand how the tool thinks.


And once you understand that — once you stop expecting certainty and start working with probability — AI stops being a risk and starts becoming a remarkable collaborator.


What You Can Learn From This


If you’re using AI in your work right now, try this:


1. Ask AI how confident it is: Before you trust an answer — especially on something important — ask: “How confident are you in this? What could be wrong?” If it admits uncertainty, dig deeper. That one question catches most problems early.


2. Look for the logic, not just the answer: Don’t just take the output — ask AI to show its work. If the reasoning makes sense, the answer is probably solid. If the logic feels shaky, or if there’s no clear reasoning, verify before using it.


3. Treat AI outputs as drafts, not finals: You wouldn’t let an intern send a client deliverable without reviewing it first. Same with AI. Use its output as a starting point — not the final word.


The Bigger Picture


AI is probabilistic — it doesn’t give fixed answers. To get better results, you need basic AI literacy. Not magic prompts. Not secret hacks. But logical thinking. The right context. Staying curious. Focusing on outcomes.


And that can’t be built in a day — or through one course. It requires a mindset shift.

Two years ago, AI felt like it belonged only to coders and computer science engineers. Now it’s entered every profession — and everyone’s scrambling to “catch up.”


But here’s the thing: even Sam Altman doesn’t know exactly where this is going, so you’re not that far behind. The mistake isn’t being confused. It’s looking for shortcuts.


What you need instead is to build your basics and develop your AI mindset. That’s the capability that will matter most.


The single most important skill of the future won’t be coding or prompting — it will be working intelligently with AI.


---------------


This is part of my ongoing work helping managers build an AI mindset and literacy — practical ways to think, decide, and work in the age of AI. It's not about mastering tools, but about understanding how to work intelligently with Artificial Intelligence.





