
Identifying High-Impact AI Agent Opportunities

Inteq Agentic AI Executive Briefing - Session Two

James Proctor


This is the second briefing in Inteq's agentic AI series. The focus here is identifying high-impact AI agent opportunities inside your business processes: agents that optimize how work gets done across the enterprise. Personal agents are a separate topic.

Three areas to cover. First, business processes in the age of agentic AI, which is a fundamental change in how we organize and optimize work. Second, four markers for identifying high-value business processes that are strong fits for agentic AI. Third, five anti-patterns to watch for as you evaluate opportunities.

What Is an AI Agent?

In a tight definition: an AI agent is a piece of software. Reasonably sophisticated software, but software. It perceives its decision-making and operational environment, given the right context. It reasons about what it observes. It can make decisions autonomously. And it either acts on those decisions to achieve goals, or it serves the decision up to a human-in-the-loop reviewer.

The defining characteristic is the degree of autonomy. The agent operates without step-by-step human instructions for every action it takes. That autonomy is what makes it different from the automation we have been deploying for the last twenty years.
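The perceive-reason-decide-act loop described above can be sketched in a few lines. This is an illustrative sketch only: the `Decision` class, `reason` function, and `autonomy_threshold` are hypothetical names, and the reasoning step is a stand-in for an LLM-backed component.

```python
# Hypothetical sketch of an agent's perceive-reason-decide-act loop.
# All names and thresholds are illustrative, not a real agent framework.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # what the agent proposes to do
    confidence: float  # agent's own confidence estimate, 0.0 to 1.0

def reason(case: dict) -> Decision:
    # Placeholder for the reasoning step: interpret the case in context
    # and propose an action with a confidence estimate.
    if case.get("amount", 0) > 10_000 or case.get("ambiguous"):
        return Decision("flag for review", confidence=0.55)
    return Decision("approve", confidence=0.92)

def run_agent(case: dict, autonomy_threshold: float = 0.8) -> str:
    decision = reason(case)  # perceive the case and reason about it
    if decision.confidence >= autonomy_threshold:
        return f"acted: {decision.action}"  # act autonomously
    # otherwise serve the decision up to a human-in-the-loop reviewer
    return f"queued for human review: {decision.action}"
```

The autonomy threshold is the dial: raise it and more decisions route to a human reviewer; lower it and the agent acts on more of them itself.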

A 20-Year View: From Enterprise Systems to Agentic AI

Quick context on where we have been and where we are. Twenty years ago, and still today, enterprise systems do their job, and they do it well. They are transactional. Enter the order, check credit, route to fulfillment, pack and ship, create the bill of lading, invoice the customer. Verb-noun work. That foundation continues going forward.

About ten years ago, robotic process automation came into view. Many of you are doing RPA today, and you are going to keep doing RPA. There is nothing wrong with that.

To understand where RPA fits, and where agentic AI fits, it helps to split the atom of work activities. Work in any business process splits into two main categories: mechanical, rules-based work, and knowledge-and-judgment-based work. Those are the two polar extremes, with all the gradients in between.

RPA addresses the mechanical, rules-based end. If you can script it, if it is repetitive rules-based work, banging out widgets, RPA is a fantastic technology. The question that often comes up: if we move to agentic AI, do we have to scrap our RPA investment? Absolutely not. RPA stands on its own as excellent technology for automating rules-based, mechanical, repetitive work.

Agentic AI is a third layer on top of that. We are moving from deterministic automation to decision guidance, giving the agent the opportunity to think about what needs to be done and to make decisions about it. Do not confuse the two. You do not want to use AI agents to replace RPA. They address different parts of the automation stack.

The Mindset Shift

This is a real mindset shift. For decades, we tried to eliminate judgment from business processes. Build it into the rules, encode it, automate it. Push the judgment out.

We are changing that. Where we have rules, automate them with RPA. But step back and ask a different question: where are we making decisions? Because we are moving from a task-flow workflow to a decision-based workflow. With agentic AI, we embrace, celebrate, and guide knowledge-and-judgment-based work rather than trying to engineer it away.

Four Markers for High-Impact AI Agent Opportunities

With that foundation in place, here are four markers for identifying business processes that are strong fits for AI agents. Each is covered in detail below, with an example.

Marker One: High Decision Density

A process with high decision density requires many decisions per transaction or case. These are not rules-based gates, where if-this-then-that branching gets you to the next step. They are decisions requiring knowledge, judgment, interpretation, contextual reasoning, and application of business policy, rather than simple data lookups or binary branching.

What it looks like: five to ten non-trivial decision points in a single transaction. Non-trivial meaning the decision requires knowledge, judgment, expertise, and context. Experienced staff, those with three or more years in the role, are significantly faster than newer hires. There are pages of business rules documentation, decision trees, and decision-aid materials built up over time to help the decision-makers do their work. When you see those signals, you are looking at a candidate for agentic AI.

Insurance claims adjudication is a classic example. Each claim requires multiple coverages to be understood and applied, eligibility requirements to be evaluated, liability and payout decisions to be made, incident details to be assessed, and various jurisdictional regulatory requirements to be honored. Adjudicating a claim requires a whole package of knowledge and a lot of decisions. Some of them are policy-driven, but understanding the policy itself takes judgment.

Marker Two: Knowledge-Intensive and Judgment-Based Work

There is some overlap with Marker One here. The defining characteristic of this marker: knowledge workers must synthesize information from multiple sources, pulling from databases, documents, and reference materials, and apply organizational and domain-specific knowledge to it. They have to interpret ambiguous inputs, because a policy document does not always read clearly or stay current. They reason, apply professional judgment, and produce an output that is a reasoned conclusion, not a mechanical transformation of data.

Look for: tasks where the output is a recommendation, an assessment, a classification, or a decision, not data entry, and not deterministic. Long training ramp-up times for new workers. Multiple systems, documents, and reference sources consulted in the course of the work.

The interesting one, and a powerful tell, is tacit or tribal knowledge. Much of the decision-making is based on knowledge that is not documented anywhere. The longer people do the work, the better they get at it, because the know-how is not in a policy manual or a database. It develops in a small group of ten, fifteen, twenty people doing the work. Policy documents tend to lag what is actually going on in the field, so the team builds up its own deep, tacit understanding of how decisions get made. When that is the situation, you have a strong agentic AI candidate.

Contract review and assessment is a great example. You are reviewing commercial agreements with standard terms, and everyone negotiates the standard terms. You are scanning for deviations, assessing risk, deciding what to accept, what to push back on, what to negotiate differently, and what additional modifications to recommend. The higher the contract value, the more clauses, the more conditions, and the deeper the knowledge and judgment required.

Marker Three: High Volume with Variability

This one sits in a particular sweet spot. The process handles a high volume of transactions, cases, or interactions, and there is enough variability that the work requires some level of human judgment. But not so much judgment that it requires years and years of specialized experience.

Why it is a sweet spot: with high volume, having a human in the loop for every transaction gets expensive. But you still need someone making decisions, because the work cannot be reduced to rules.

What to look for: high transactional volume with sufficient variance in handling time per transaction, perhaps a 3x spread between simple cases and complicated ones. A meaningful long tail, thirty to forty percent of cases, that requires custom or novel handling not in the documents and not covered by the rules. Organizations staffing for peak volume, leaving people underutilized off-peak. Chronic backlogs, or service-level agreements that force you to maintain enough human capacity to meet variable volumes.

HR case management is a great example. Inquiries come in continuously across benefits, payroll, leave, accommodations, and policy questions, each with its own level of complexity and sensitivity. Agents can handle a significant portion of that work. Not all of it. RPA handles the routine, rules-based slice. Agents handle the larger middle ground that requires knowledge, judgment, and expertise. And in HR specifically, you will still want a human in the loop, probably around twenty percent of cases, for the more sensitive issues, even with strong context provided to the agent.

Marker Four: Significant Process Latency

This is my personal favorite of the four. The signal is significant end-to-end cycle time. A process takes a week, two weeks, longer, but the work is not inherently slow. It sits in human queues waiting to be picked up.

When the work is finally picked up, the touch time is short. Five to ten minutes on this part of the case, ten or twenty minutes on the next, maybe thirty on a downstream review. But each of those touches sits in a queue for a day or two before someone gets to it. The thing moves through the process slowly because it is spending most of its life waiting, not being worked.

The metric is process efficiency, sometimes called the touch-time ratio: actual work time divided by total elapsed time. As a rule of thumb, less than twenty percent is the marker. In practice, I find it is often much worse. I have seen 100:1 ratios. A week of elapsed time with thirty or forty minutes of real work in it.
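The touch-time ratio is simple arithmetic, and it is worth running on a real case. The sketch below uses the figures from the example above: roughly forty minutes of actual work inside a calendar week of elapsed time.

```python
# Process efficiency (touch-time ratio): actual work time / total elapsed time.
def touch_time_ratio(touch_minutes: float, elapsed_minutes: float) -> float:
    return touch_minutes / elapsed_minutes

# The example from above: ~40 minutes of work in a week of elapsed time.
week = 7 * 24 * 60  # elapsed minutes in a calendar week
ratio = touch_time_ratio(40, week)
print(f"{ratio:.3%}")  # well under the 20% rule-of-thumb marker
```

Anything under the twenty percent rule of thumb flags the process; a figure like this one, a fraction of one percent, is the latency marker in its most extreme form.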

You will see this if you look for it. Five or more sequential reviewer handoffs is a tell, since every handoff introduces queue time. SLA-driven urgency combined with capacity constraints. High variability in work-in-progress flowing through the process.

Employee onboarding is a clean example. Many onboarding steps run sequentially. Background check first; if it clears, provisioning, then benefits enrollment, then various manager approvals along the way for any deviations. Onboarding can be lengthy. Not because of hands-on work, but because of waiting time across the queues.

Value vs. Readiness

Take these four markers back into your work environment. Walk your processes against the descriptions, and you will find tremendous opportunities for AI agents. Agents do not solve everything, but in this kind of process, they solve a lot.
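Walking your processes against the four markers can be as simple as a checklist score. The sketch below is a hypothetical screening heuristic, not a validated model: the marker names, the boolean inputs, and the two-marker threshold are all illustrative.

```python
# Hypothetical screening sketch: score a process against the four markers.
# Marker names and the flagging threshold are illustrative assumptions.
MARKERS = ["decision_density", "knowledge_intensity",
           "volume_variability", "process_latency"]

def screen(process: dict) -> tuple[int, bool]:
    # Count how many markers are present; flag as a candidate at two or more.
    score = sum(1 for marker in MARKERS if process.get(marker, False))
    return score, score >= 2

claims_adjudication = {"decision_density": True, "knowledge_intensity": True,
                       "volume_variability": True, "process_latency": False}
score, is_candidate = screen(claims_adjudication)
```

A score like this surfaces candidates; it does not rank them. Value sizing and, as the next section notes, readiness are separate analyses.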

One important caveat: high potential value does not equate to readiness. A process can be a strong agentic AI candidate on these markers and still not be ready for agentic AI deployment. Readiness is its own analysis, and it is the topic of the next briefing in this series.

Five Anti-Patterns to Watch

Once leadership gets excited about agentic AI, it is easy to start evaluating opportunities through a biased lens. Five anti-patterns come up consistently. Automation bias. Volume obsession. Technology push. The perfect-process fallacy. Scope-creep optimism.

Anti-Pattern One: Automation Bias

Automation bias shows up when you only identify rules-based, mechanical, repetitive tasks. That is RPA territory, not agent territory. If you are not looking past it, you are going to miss the knowledge-and-judgment-intensive work, which is where AI agents actually live.

Selecting processes for agentic AI where traditional RPA already works adds cost and only marginal value, if any. Agents cost more than RPA for two reasons. First, an agent is a sophisticated piece of software that has to be maintained. Second, agents typically interact with a large language model, usually a frontier model such as ChatGPT, Gemini, or Claude, or an internal small language model. For every decision the agent makes, it expends tokens, and tokens have a budget cost.

That said, token costs are coming down quickly, and the reasoning quality is getting genuinely good. The line between agents and RPA is blurring. There is a growing middle band of work that used to live in RPA and is now economically viable for agents. The step-change value, though, remains in selecting knowledge, judgment, and decision-based work. If you are only looking for rules-based stuff, you are missing the bigger picture.

Anti-Pattern Two: Volume Obsession

If a process has high volume, the instinct is to automate it. Fair enough. But the volume itself is not the deciding factor. If the high-volume work is mechanical and rules-based, automate it with RPA. If the high-volume work involves decision-making and judgment, automate it with agents. Use RPA where you use RPA. Use agents where you get value from agents.

You can have high volume with low cognitive complexity, and that is RPA territory. You could have fifty thousand transactions per week of simple rules-based work, and another process with only two thousand transactions per week that requires significant cognitive judgment and experience. The smaller-volume process will return far more value from agents, because it sits in high-paid knowledge-worker space.
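The arithmetic behind that comparison is worth making explicit. The figures below are illustrative assumptions only: the minutes saved per transaction and the loaded hourly rates are hypothetical, chosen to mirror the two processes described above.

```python
# Illustrative arithmetic only: volumes, minutes saved, and rates are
# hypothetical assumptions, not benchmarks.
def annual_value(tx_per_week: int, minutes_saved: float,
                 hourly_rate: float) -> float:
    # Weekly volume x 52 weeks x hours saved per transaction x loaded rate.
    return tx_per_week * 52 * (minutes_saved / 60) * hourly_rate

rules_based = annual_value(50_000, 0.5, 25)  # high-volume mechanical work
judgment    = annual_value(2_000, 20, 90)    # low-volume knowledge work
```

Under these assumptions, the two-thousand-transaction judgment process returns several times the value of the fifty-thousand-transaction mechanical one, which is the point: cognitive complexity, not raw volume, drives agent value.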

A related point: knowledge work is not going away. AI agents will handle a lot of it, no question. But humans stay in the loop. There is a concept called the Jevons paradox: as the cost of using a resource falls, total consumption of it rises. As we deploy more agents, total case volume goes up, which means there are more exceptions, more edge cases, more situations that need human attention. The narrative that knowledge workers are about to be eviscerated is, in my view, way overblown. That is a different discussion, but worth flagging.

Anti-Pattern Three: Technology Push

When all you have is a hammer, the world looks like a nail. Most major enterprise systems and SaaS platforms have built agentic AI into their products by now. There is nothing wrong with that. If you have a big SaaS enterprise platform, the agent capabilities are built in, and you should absolutely use them.

The catch: you do a lot of things outside those platforms, and the platforms can only reach so far across that boundary. Vendor agentic AI also creates lock-in, and you have to work within the vendor's templates. You cannot go outside them. Independent agent platforms can fill that gap, and so can in-house builds.

Somewhere in your roadmap, you will need to make a deliberate decision: vendor platforms, third-party platforms specialized in agentic AI, in-house builds, or some combination. Each has pros and cons. The anti-pattern is letting the platform you already own drive the evaluation, rather than the work to be done.

Anti-Pattern Four: The Perfect-Process Fallacy

This is the assumption that your current process is optimal, or close enough, to start layering agents on top. The result is automating a flawed process.

I have been doing this work for a long time, across traditional enterprise systems, RPA, and now agentic AI. Every process I encounter has significant process debt. You want to clean up that process debt, ideally in parallel with the agentic AI work, since the requirements work and the process cleanup can run together and meet in the middle.

Do not tell yourself the process is good enough. Be honest with yourself, because the move to agents is a fundamentally different kind of process design. We are transitioning from task-based workflows to decision-based workflows. If the process is not ready to accommodate that shift, you will not get the outcomes you are looking for.

Anti-Pattern Five: Scope-Creep Optimism

This one shows up as overly ambitious framing. "We are going to automate all of accounts payable. We are going to run it lights-out." You are not. At a minimum, you will have humans in the loop at different points. RPA will be doing real work in places where it is the right fit. And there will be edge cases that require human reasoning.

Some processes can get close to lights-out. I would put that at maybe ten percent of business processes. The other ninety percent will not.

Practically: map the process. Identify the parts that fit the agent patterns. Scope the work. Unscoped agent opportunities cannot be designed, built, deployed, or measured. Scoping happens at process level three or level four, down to the task and action level. That is the same depth you would go for an RPA implementation. Until you are at that level, you cannot define what needs to be done or assemble the specification package the build will require.

Closing Thoughts

Two things to take with you. First, the four markers, namely high decision density, knowledge-intensive and judgment-based work, high volume with variability, and significant process latency, are practical filters you can apply immediately to surface high-value AI agent opportunities in your own organization. Second, the five anti-patterns, namely automation bias, volume obsession, technology push, the perfect-process fallacy, and scope-creep optimism, are the traps that derail agentic AI initiatives even when the opportunity selection is right.

The next briefing in this series turns to readiness. A process can be a strong agentic AI candidate on the four markers and still not have sufficient readiness to move forward with development. We will cover what to look at to assess process and data readiness for agentic AI.

* * *

Related Posts:
The Agentic AI Ontology Question
Data, Meaning, Reasoning and Agentic AI
The PR/FAQ Is a Scoping Document - Not a Specification
Spec-Driven Development Starts with Model-Driven Analysis

Related Consulting Services:
Agentic AI Readiness & Strategy Analysis
AI Agent Opportunity & Portfolio Design
Business Process Mapping
Process Improvement & Reengineering

Related Training Courses:
Discovering Agentic AI Opportunities
Analyzing and Specifying AI Agent Business Requirements
