Here’s How They’re Lying To You About AI Agents

Anyone telling you that AI Agents will change the world soon is lying.

I’m finding it funny watching everything come full circle (again) and back to business processes. I can’t wait to see AI influencers start using old BPM and organizational improvement acronyms.

PPT — People, Process, and Technology
POLDAT — People, Organisation, Location, Data, Applications and Technology
DMAIC — Define, Measure, Analyse, Improve, Control

Process Mining, BPMS, Dynamic Case Management, TOGAF, ITIL, MDM…the list is endless when it comes to old-school digital transformation disciplines that you have to understand, and have come through the pain barrier with, before you get anywhere near chucking an AI into the mix on top of a messy architecture.

None of these are obsolete; they were simply forgotten because some shiny new trend came along and was sold as the panacea, and AI Agents are just another wave doing the same. I really would be very scared if your CIO or CTO started waving a Salesforce Agent RFP at you excitedly.

If 𝘺𝘰𝘶 don’t understand how your business operates at a process level, how can you expect an AI to? If you’re letting loose a bunch of AI agents to perform tasks autonomously, then where are all the touchpoints defined so it knows when to stop and where the limits of its autonomy end? Stuart Winter-Tear shared a paper on this earlier; it really is like inviting Chaos Theory into your company and expecting it to turn out positive.

Or do you wake up one morning and you’re locked out of the system because overnight the HR Agent deemed you unworthy to be CEO because Jimmy the Customer Care Bot can do a better job?

You’re all rushing headlong down a fast road in a kit car you built but forgot to install the brakes in.

Before you even begin to talk to any “expert” or vendor pushing an AI agent platform, benchmark your org against this first.

The APQC Process Classification Framework

At the very minimum, your business processes, business rules, and the data aligned to them should be defined to a standard that an AI system can understand and follow in any sense of autonomy: it needs to know where the boundaries and touchpoints are for handing off internally, how to interact with customers or clients, and how and when to end a process rather than carry on spawning ever more tasks and smaller agentic actions ad infinitum.
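To make that tangible, here is a minimal, hedged sketch in Python of what “defined to a reasonable standard” could look like. Every name in it is invented for illustration (this is not an APQC artefact or any vendor API); the point is simply that each step carries an explicit owner, an explicit touchpoint, and the process carries a hard ceiling on autonomous actions, so an agent has a defined boundary rather than an open-ended mandate.

```python
# Hypothetical sketch: explicit steps, handoff touchpoints, and a stop condition.
from dataclasses import dataclass, field
from enum import Enum


class Touchpoint(Enum):
    INTERNAL_HANDOFF = "internal_handoff"   # pass to another team, system, or agent
    CUSTOMER_CONTACT = "customer_contact"   # human-facing interaction
    PROCESS_END = "process_end"             # terminate; do not spawn more work


@dataclass
class ProcessStep:
    name: str
    owner: str                               # who is accountable for the step
    touchpoint: Touchpoint                   # where this step hands off or stops
    business_rules: list[str] = field(default_factory=list)


@dataclass
class ProcessDefinition:
    name: str
    steps: list[ProcessStep]
    max_agent_actions: int = 50              # hard ceiling on autonomous actions per case

    def boundary_for(self, step_name: str) -> Touchpoint:
        """Return the defined touchpoint for a step, so an agent knows when to
        hand off or stop rather than improvise."""
        for step in self.steps:
            if step.name == step_name:
                return step.touchpoint
        raise KeyError(f"Step '{step_name}' is not defined for {self.name}")


# Example: an entirely made-up order-to-cash fragment with explicit boundaries.
order_to_cash = ProcessDefinition(
    name="order_to_cash",
    steps=[
        ProcessStep("validate_order", "sales_ops", Touchpoint.INTERNAL_HANDOFF,
                    ["reject if credit check fails"]),
        ProcessStep("confirm_with_customer", "customer_care", Touchpoint.CUSTOMER_CONTACT),
        ProcessStep("close_case", "finance", Touchpoint.PROCESS_END),
    ],
)

print(order_to_cash.boundary_for("close_case"))  # Touchpoint.PROCESS_END
```

If you cannot fill in a structure like this for a process today, no agent platform is going to infer it for you.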

I’m not affiliated with APQC (I spoke at their events many moons ago), but they do have a load of industry frameworks and decades of work and knowledge in this area worth tapping into. And if Accenture can use their work and pass it off as their own, then you can save yourself a large consulting bill and just do the homework yourself.

If this page from one of the APQC Process Classification Frameworks looks scary, it should, because that’s potentially how many AI agents you might have running around handling 𝘰𝘯𝘦 process end to end.

There will never be just one all-singing and all-dancing AI that orchestrates and executes processes and tasks across the enterprise. Agents will be both horizontally and vertically aligned according to industry or functional needs.

And that also means multiple agentic platforms need to integrate in order to complete one task or process; there are already startups picking away at specific problem areas rather than looking holistically at the bigger organizational picture.

So, what you end up with are not sustainable cost savings and productivity improvements but a hack-job architecture of AI Agent platforms mixed with existing systems built upon another integration stack that ties them together (this, in itself, is another area for VCs to pump money into — AI orchestration platforms — once the promised land fails to deliver manna from heaven in the form of savings).
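As a rough illustration of that integration stack (all class names below are invented; no real vendor SDK looks like this), here is the kind of glue code you end up writing when one business process spans two agent platforms. The orchestration function in the middle is the new asset somebody now has to own, test, and maintain.

```python
# Hedged sketch: two pretend agent platforms and the bespoke layer that ties them together.
from typing import Protocol


class AgentPlatform(Protocol):
    """Stand-in interface; real vendor SDKs look nothing alike."""
    def run(self, task: str, payload: dict) -> dict: ...


class HrVendorAgents:
    """Pretend wrapper around vendor A's agent runtime (invented)."""
    def run(self, task: str, payload: dict) -> dict:
        return {**payload, "hr_status": f"done:{task}"}


class FinanceVendorAgents:
    """Pretend wrapper around vendor B's agent runtime (invented)."""
    def run(self, task: str, payload: dict) -> dict:
        return {**payload, "finance_status": f"done:{task}"}


def complete_onboarding(payload: dict) -> dict:
    """One business process, two platforms, and a bespoke orchestration layer
    (this function) that now needs owning, testing, and maintaining."""
    hr: AgentPlatform = HrVendorAgents()
    fin: AgentPlatform = FinanceVendorAgents()
    payload = hr.run("create_employee_record", payload)   # platform A does its part
    payload = fin.run("setup_payroll", payload)           # handoff to platform B
    return payload


print(complete_onboarding({"employee": "J. Smith"}))
```

Multiply that glue by every process in the classification framework and the “savings” start to look a lot like a new integration programme.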

So, again, anyone telling you that AI Agents are going to change the world soon is lying. They simply don’t understand the work involved in transforming even one process, let alone hundreds.

And then consider this scenario: years down the line your company is being lined up for a merger or acquisition, and your enterprise is rife with AI agents. But so is theirs. Who wins?

Having been subjected to mergers and divestitures at phenomenally stupid scales, I can tell you it’s not an easy or painless process. In fact, in many instances, the wrong decisions were made and the wrong business entity’s processes were chosen as the future state to move towards.

Vanilla migrations never run smoothly.

So what happens in the future when companies merge and they both have different agentic platforms running the show? Because like it or not, this is far more complex than choosing one set of applications over another, or one business operating model over another.

Now you have to think about what data both businesses have and how to train the chosen agent platform and LLMs (or whatever future incarnation you rely on); which processes, risks, and compliance models to choose as guardrails; and how to integrate those AI agents with the applications and systems left behind (you know, the ones the acquiring organization has no like-for-like replacement for and which are therefore critical for the business to function, even though you don’t want them…).

Having an AI platform doesn’t make things easier; it makes things far worse, and the likelihood is that the whole thing will take far longer to complete.

𝘛𝘩𝘦 𝘒𝘯𝘰𝘸𝘭𝘦𝘥𝘨𝘦-𝘊𝘳𝘦𝘢𝘵𝘪𝘯𝘨 𝘊𝘰𝘮𝘱𝘢𝘯𝘺, 𝘐𝘬𝘶𝘫𝘪𝘳𝘰 𝘕𝘰𝘯𝘢𝘬𝘢

Some final thoughts.

Apparently, ‘serendipity’ is one of the hardest English words to translate into other languages. Maybe that in itself is a happy accident, but what has this got to do with AI?

Well, in an organizational context, the happy accident has been harnessed by successful Japanese businesses through the ability to create knowledge not by processing information but rather by “tapping the tacit and often highly subjective insights, intuitions, and hunches of individual employees and making those insights available for testing and use by the company as a whole.” (𝘛𝘩𝘦 𝘒𝘯𝘰𝘸𝘭𝘦𝘥𝘨𝘦-𝘊𝘳𝘦𝘢𝘵𝘪𝘯𝘨 𝘊𝘰𝘮𝘱𝘢𝘯𝘺, 𝘐𝘬𝘶𝘫𝘪𝘳𝘰 𝘕𝘰𝘯𝘢𝘬𝘢)

Now, everyone and their avatar wants to convince you that you can replace people, along with their tacit knowledge and serendipity, with just a bunch of AI agents running around executing tasks between applications. As Nonaka discussed, such a system will just process information; it will not act on a hunch, it will not create subjective insight, it will not take a leap of faith, it does not have intuition, and it will certainly not discover by happy accident.

If anything, an AI agent will be like your worst and most pedantic Subject Matter Expert because over time you’ll find yourself uttering “…𝘣𝘶𝘵 𝘈𝘐 𝘩𝘢𝘴 𝘢𝘭𝘸𝘢𝘺𝘴 𝘥𝘰𝘯𝘦 𝘪𝘵 𝘵𝘩𝘪𝘴 𝘸𝘢𝘺” when facing change, and that’s when you realize that removing people and serendipity from your organization was no happy accident at all but by rigid design.

And who would ever question whether an algorithm would get things wrong or whether it’s not the right way to do business?!
