
Making Room for Agents in Expert Tools

One thing I keep noticing in expert tools is how quickly the conversation around AI collapses into interface patterns.

Add a side panel. Add a chat assistant. Add a prompt box.

But that is not really the design problem I see.

In expert products like security consoles, network operations platforms, and financial terminals, what users rely on is not just the visible interface. Over time they develop a feel for the product’s deeper logic: where signal tends to show up, where proof lives, how interpretation starts to form, and which paths hold up when the pressure is on.

The tool becomes more than a set of screens; it becomes a learned structure for how the work gets done.

That is what makes agent integration hard for these expert tools.

Adding an agent that is supposed to interpret, prepare, suggest, remember, and sometimes act is not like adding one more feature. It means introducing a new participant into a working environment that already has an established logic, built over years through trust, repetition, and expert fluency.

I have seen a version of this dynamic before. When I led design at Infor, working across large enterprise platforms being moved from on-prem products to a broader cloud suite, I dealt with deeply ingrained expert users all the time. Their expertise was often one of the best sources of insight, but it could also become a major bottleneck to change. I used to call it “the tyranny of the super-user.”

That tension matters again now with AI integrations. The people who know these systems best are often the first to push back on change, not because they are resistant by nature, but because they understand what even small disruptions can cost once a product has become part of how their work gets done.

In many products, the first answer to AI integration has been to slap a chat assistant onto the side of the interface. That’s probably safe, but it’s not enough for an expert system.

We need to start tackling the harder question of how an agent should actually participate in the work. What should it be allowed to see? When should it step forward? How should it connect its output to evidence? How does it hand work back? And how do you make its role clear enough that the product still feels dependable to the people who know it best?

I keep coming back to the same idea: expert tools always have a grammar. Not a visual style, but a deeper structure of orientation, evidence, action, and trust. If agents are going to enter these environments well, they have to map onto that grammar.


The grammar of expert tools

Every expert product has a logic that only really becomes visible once someone has spent enough time inside it. There are stable anchors that help people stay oriented, places where they know evidence lives, and places where interpretation starts to form.

Expert users know the shortest paths for moving between evidence and interpretation. They know which moments call for speed, which call for verification, and where judgment cannot be abstracted away.

That’s the grammar I’m talking about. It’s the structure that lets expertise take hold inside the product.

Once you look at expert tools that way, the design problem shifts from adding an assistant that answers questions to creating an interface for a working relationship, with context and user judgment at the center.

So I explored that through four small design experiments across security, network operations, policy administration, and financial position management.

These aren’t meant to be full-on product designs; they’re just vibe-designed sketches I’m trying out to test a specific question:

How can an agent participate in work with an expert without displacing the structure that already makes that expert's work possible?

Bonus points for not making it just a chat assistant.

1. Security investigation: the agent as case-building partner

Topic: Security investigation console
The learned structure: Analysts move between alerts, timelines, raw evidence, entity relationships, and case notes to build a defensible understanding of what happened.
Agent role: A case-building partner that helps frame the incident, surface plausible hypotheses, connect them to evidence, and suggest next pivots.
What changes: The analyst gets faster support in building and revising a working theory.
What does not: Raw evidence stays visible, uncertainty stays open, and the analyst still owns the case.
Why it matters: This is the clearest example of an agent participating in expert work without asking the user to reorganize their workflow around it.

The first concept sits inside a security investigation console, where the division between evidence and interpretation is central to the work. An analyst moves across alerts, timelines, entity relationships, raw logs, and case notes, slowly building a case from incomplete and often messy signals. The problem is rarely that the data is missing from the tool. The problem is stitching it into something coherent without losing track of why one clue matters more than another.

The agent enters as a case-building partner. It flags the alerts that appear most relevant, surfaces a few plausible hypotheses and links them back to specific evidence, highlights correlated clusters on the timeline, traces a likely attack path through the entity graph, and drafts a case narrative directly inside the analyst’s notes as the picture sharpens. When a flagged alert is expanded, the agent explains what it thinks is happening, how confident it is, and where to look next.
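
To make that concrete, here is a minimal sketch of the kind of shape I imagine the agent’s output taking, so that every claim stays tied to evidence the analyst can open. It is written in TypeScript only because types make the contract easy to read; the fields and names are my own assumptions, not anything pulled from a real console.

```typescript
// Hypothetical shape for an agent-proposed hypothesis inside the console.
// Every interpretive claim carries references back to raw evidence the
// analyst can open and inspect; nothing stands on its own.
interface EvidenceRef {
  kind: "alert" | "logLine" | "timelineEvent" | "entity";
  id: string;       // identifier in the analyst's existing views
  note?: string;    // why this item supports the hypothesis
}

interface Hypothesis {
  id: string;
  summary: string;                   // plain-language working theory
  confidence: "low" | "medium" | "high";
  evidence: EvidenceRef[];           // links into alerts, logs, timeline, graph
  openQuestions: string[];           // gaps the agent cannot close on its own
  suggestedPivots: string[];         // where the analyst might look next
  status: "proposed" | "accepted" | "rejected";  // ownership stays with the analyst
}
```

The status field is the important part: a hypothesis is never more than a proposal until the analyst accepts or rejects it.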

What matters is that it does not replace the investigation process. It does not hide the logs behind a clean summary or turn a working theory into a finished answer. The analyst still needs to inspect the underlying material, challenge the framing, reject the wrong hypothesis, approve or discard the drafted note, and keep the case open long enough for uncertainty to stay visible. The agent is participating in the work with the expert, not presenting a parallel version of it.

The case framing lives inside the alerts queue, the timeline, the entity graph, the log viewer, and the case notes the analyst already works in. There is no separate surface asking the analyst to step outside their investigation to consult the agent.

In this experiment, the agent helps frame the case, connect evidence, and suggest next pivots while the core investigation structure stays intact.

1. Expanded alert context. The flagged alert opens inline to show the incident narrative, ranked hypotheses, and suggested next pivots. The analyst gets a working theory without leaving the alerts queue.
2. Correlated cluster on timeline. The agent groups related events and highlights the cluster visually. This surfaces the burst pattern the analyst would otherwise need to spot manually across scattered timestamps.
3. Attack path overlay. A suggested traversal path drawn over the entity graph in the analyst's own evidence view. It connects the nodes the agent believes are part of the same chain, giving the analyst a starting thread to verify or reject.
4. Hypothesis tags on log lines. Small markers linking individual log entries back to the hypotheses they support. This keeps the connection between raw evidence and interpretation visible without hiding the logs themselves.
5. Unresolved questions. Open gaps the agent cannot fill are surfaced explicitly. This keeps uncertainty visible rather than burying it behind confident-sounding summaries.
6. Draft case note. The agent prepares a draft summary directly inside the case notes panel, where the analyst already writes. Approve, edit, or discard are all one action away.

2. Network operations: the agent as triage guide

Topic: Network operations console
The learned structure: Operators watch a health table, alert stream, topology view, and device detail panels to diagnose failures and determine blast radius under time pressure.
Agent role: A triage partner that groups correlated signals into incident clusters, labels affected devices, and surfaces what to check next.
What changes: The operator gets a starting interpretation of which alerts are the same problem and how far the damage extends.
What does not: Individual alerts stay visible, the operator still consoles into devices, verifies groupings, and determines root cause.
Why it matters: A single upstream failure can produce forty downstream alerts. The agent helps separate root from noise without collapsing the detail the operator needs to confirm it.

The second concept sits inside a network operations console, where the core challenge is making sense of cascading failures fast enough to act on them. An operator watches a health table, an alert stream, a topology view, and device detail panels, trying to figure out which of the dozens of signals firing at once are actually the same problem. The data is almost always there. The difficulty is that a single upstream failure can produce forty downstream alerts, and separating the root from the noise takes pattern recognition under time pressure.

In this one, the agent enters as a triage partner. It groups correlated signals into incident clusters, labels the devices that belong to each group, overlays those groupings on the topology, and flags what to check next. When the operator expands a flagged device, the agent explains why it thinks these signals connect, what the estimated blast radius looks like, and what it is not confident about yet.
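
As a rough illustration of the grouping step, here is one way correlated alerts could be folded into incident clusters. It is deliberately naive: the shapes and the idea of grouping by a known upstream device are assumptions of mine, and a real system would also weigh time proximity and topology distance.

```typescript
// Hypothetical alert and cluster shapes; field names are illustrative assumptions.
interface Alert {
  id: string;
  deviceId: string;
  upstreamDeviceId?: string;   // known topology parent, if any
}

interface IncidentCluster {
  rootCandidate: string;   // device the agent suspects is closest to the root cause
  alertIds: string[];      // every alert stays listed, never collapsed away
  blastRadius: string[];   // devices touched by this suspected incident
}

// Group alerts that trace back to the same upstream device.
// This is a starting interpretation for the operator to verify, not a conclusion.
function clusterAlerts(alerts: Alert[]): IncidentCluster[] {
  const byRoot = new Map<string, Alert[]>();
  for (const alert of alerts) {
    const root = alert.upstreamDeviceId ?? alert.deviceId;
    const group = byRoot.get(root) ?? [];
    group.push(alert);
    byRoot.set(root, group);
  }
  return [...byRoot.entries()].map(([root, group]) => ({
    rootCandidate: root,
    alertIds: group.map(a => a.id),
    blastRadius: [...new Set(group.map(a => a.deviceId))],
  }));
}
```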

What matters is that it does not take over the triage process. It does not auto-resolve incidents or hide the individual alerts behind a rolled-up summary. The operator still needs to console into the device, verify the grouping makes sense, decide whether two clusters are related or independent, and determine the actual root cause. The agent is offering a starting interpretation, not a conclusion.

The triage guidance lives inside the health table, the alert stream, and the device detail panels the operator already reads. There is no separate surface asking the operator to context-switch into an AI view.

Here the agent supports triage inside an operations console rather than replacing the operator's map of the system.

1. Expanded triage context. The flagged device opens inline to show why the agent grouped these signals, what the blast radius looks like, and what to check next. The operator gets an incident interpretation inside the health table, not beside it.
2. Correlation markers on alert stream. Each alert is tagged with its incident group. This lets the operator see at a glance which alerts are part of the same cascading failure and which are unrelated.
3. Cluster overlay on topology. The agent draws the incident boundary directly onto the topology view. The operator can see which part of the network is affected without switching to a separate map or summary.
4. Unresolved questions and confidence. Open uncertainties and confidence levels sit inside the device detail panel. The agent says what it does not know alongside what it does, keeping the operator grounded in what still needs verification.

3. Policy administration: the agent as consequence layer

Topic: Policy administration tool
The learned structure: Admins edit rule conditions, manage exceptions, navigate dependency chains, and route changes through approval workflows.
Agent role: A consequence analyst that explains rules in plain language, flags conflicts, traces exception spillover, and surfaces the governance burden of the current configuration.
What changes: The admin can see what a change will do downstream before committing to it.
What does not: The admin still owns every decision: resolving conflicts, scoping exceptions, and sending changes through formal approval.
Why it matters: The danger in policy work is not making a bad rule. It is making a reasonable rule that quietly breaks something three policies away. The agent makes that distance visible.

Let’s look at a policy administration console concept, where the work is defining rules, managing exceptions, and navigating the dependency chains that connect one policy to the rest of the system. An admin edits conditions, scopes, and approval workflows, often without a clear picture of what happens downstream when something changes. The danger is not making a bad rule. The danger is making a reasonable rule that quietly breaks something three policies away.

In this concept, the agent enters as a consequence analyst. It explains what a rule does in plain language, flags conflicts with other policies, traces exception spillover through the dependency chain, and surfaces the governance burden of the current configuration. When a conflict exists, the agent shows exactly which rules contradict and what would happen under different resolution paths.
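
The spillover tracing is the part that lends itself most naturally to a sketch. Here is a minimal version, assuming the dependency chain is available as a simple graph of which policies depend on which; the shape is illustrative, not borrowed from any real policy engine.

```typescript
// Hypothetical dependency graph: each policy id maps to the policies that
// inherit from or reference it. The identifiers are illustrative assumptions.
type PolicyGraph = Map<string, string[]>;

// Walk the dependency chain to surface every policy a change could touch.
// The agent only reports this spillover set; the admin decides what to do with it.
function traceSpillover(graph: PolicyGraph, changedPolicy: string): string[] {
  const affected = new Set<string>();
  const queue = [changedPolicy];
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const dependent of graph.get(current) ?? []) {
      if (!affected.has(dependent)) {
        affected.add(dependent);   // a reasonable rule can break something three policies away
        queue.push(dependent);
      }
    }
  }
  return [...affected];
}
```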

What matters is that it does not make policy decisions. It does not auto-resolve conflicts or silently adjust scopes to avoid problems. The admin still needs to approve or reject changes, decide whether a conflict is acceptable, determine whether an exception should be scoped more tightly, and send the change through the formal approval workflow. The agent is showing consequences, not choosing outcomes.

The analysis lives inside the rule definition, the exceptions panel, the dependency view, and the change controls the admin already works in. There is no separate surface presenting the agent’s perspective as a finished recommendation.

In policy-heavy tools, the role of the agent is consequence visibility without loss of precision.

1. Conflict marker on policy list. A small flag on the list item itself, visible before the admin even opens the rule. This puts the signal where scanning happens, not buried in a detail view.
2. Inline rule annotations. Plain language explanation, dependency warning, and conflict alert appear directly below the rule definition. The admin sees consequences while editing, not after.
3. Spillover and impact in exceptions. The agent traces how an exception cascades through dependent policies and shows what would change if it were removed. This makes unintended inheritance visible at the point where exceptions are managed.
4. Governance burden in change controls. A summary of the approval complexity, exception count, and downstream exposure sits inside the change controls panel. The admin can gauge the cost of a change before entering the approval workflow.
5. Dependency chain visualization. The agent draws the policy dependency graph inside the dependencies panel, with the conflict highlighted in red. This makes the structural relationship between policies visible rather than requiring the admin to hold it in memory.
6. Recommended actions. Concrete next steps span the bottom of the workspace. These are suggestions, not decisions. The admin still owns every action, but the starting point is clearer.

4. Financial position management: the agent as signal interpreter

Topic: Financial position management
The learned structure: Traders monitor a position book, order blotter, risk metrics, price chart, and news feed, watching for moments where conditions shift enough to require attention.
Agent role: A signal interpreter that flags positions where conditions have changed, surfaces correlations between names, tags news with the positions it affects, and annotates risk when exposure shifts.
What changes: The trader gets earlier visibility into connections between positions, news, and risk that would otherwise require manual cross-referencing.
What does not: The trader still evaluates every signal, decides whether the thesis has changed, and maintains full control over orders, stops, and position sizing.
Why it matters: The signals that matter most are often connections between things rather than any single data point. The agent surfaces those connections inside the panels the trader already monitors.

A financial terminal has one of the most deeply learned structures of any expert tool. Traders do not just rely on data density. They rely on a working feel for how positions, orders, risk, charts, news, and alerts fit together in the moment.

The challenge isn’t simply seeing information; it’s knowing what matters to this book, this exposure, and this decision while markets are moving.

That’s what makes this environment a good (and challenging) test for agent integration. A trader already has a mental model of the market, the portfolio, and the live position set. If an agent is going to help here, it can’t just replace that model with a generic summary of what the market is doing. It has to add interpretation that is specific to the trader’s context.

In this one, the agent enters as a signal interpreter. It flags positions where conditions have changed enough to warrant a second look, surfaces correlations between positions that might not be obvious from the book alone, tags incoming news with the positions it affects, and annotates risk metrics when portfolio-level exposure shifts. When a flagged position is expanded, the agent explains why it matters now, what other positions are connected, and what actions might be worth considering.
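
As one small, concrete slice of that, here is roughly how the news-tagging piece could work, assuming headlines already arrive with extracted tickers. The shapes are my own assumptions, and a fuller version would also mark correlated names the trader holds, not just direct matches.

```typescript
// Hypothetical shapes; fields and tickers are illustrative assumptions.
interface Position { ticker: string; quantity: number; }
interface Headline { id: string; text: string; tickers: string[]; }

interface NewsTag {
  headlineId: string;
  ticker: string;
  relevance: "direct" | "related";   // drives how strongly the tag renders in the feed
}

// Tag incoming headlines with the positions they touch.
// The agent surfaces the connection; the trader still judges whether it matters.
function tagNews(headlines: Headline[], book: Position[]): NewsTag[] {
  const held = new Set(book.map(p => p.ticker));
  const tags: NewsTag[] = [];
  for (const headline of headlines) {
    for (const ticker of headline.tickers) {
      if (held.has(ticker)) {
        tags.push({ headlineId: headline.id, ticker, relevance: "direct" });
      }
      // a fuller version would also emit "related" tags for correlated names
    }
  }
  return tags;
}
```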

What matters is that it does not make trading decisions. It does not move stops, resize positions, or execute orders. The trader still needs to evaluate the signal, decide whether the thesis has changed, choose whether to act or wait, and maintain final control over every order. The agent is surfacing connections and context, not expressing a view on direction.

The signals live inside the position book, the risk panel, the news feed, and the alerts table the trader already monitors. There is no separate surface asking the trader to step outside their workflow to consult the agent.

In this experiment, the agent acts as a signal interpreter, surfacing cross-position impact and possible actions while the terminal's core grammar and trader judgment stay intact.

1. Expanded position context. The flagged position opens inline to show why it matters now, which other positions are connected, and what actions might be worth considering. The trader gets the full picture without leaving the book.
2. Risk metric annotations. The agent annotates shifts in portfolio-level risk directly inside the risk panel. Beta drift, VaR changes, and concentration warnings appear where the trader already checks exposure.
3. Position relevance tags on news. Incoming headlines are tagged with the tickers they affect and the direction of impact. Opacity signals relevance strength, so the trader can scan the feed and immediately see what matters to their book.
4. Suggested stop adjustment. The agent flags a stop that may need tightening based on momentum and volatility, directly inside the alerts and stops panel. The suggestion is visible; the decision stays with the trader.
5. Chart event annotation. A vertical marker ties a price inflection to the news catalyst that drove it. This connects the chart to the news feed without requiring the trader to cross-reference timestamps manually.

What these experiments suggest

This is just a rough exploration, but I think it starts to show a more useful way of thinking about agents in expert tools.

First, agents need a role, not just a place in the interface. The design problem becomes what function the agent serves inside the workflow, and whether that function is legible to the user. In these experiments I tried to give each agent a clear role: a case-building partner, a triage guide, a consequence analyst, and a signal interpreter.

Second, the strongest roles are usually bounded. The agent works best when it helps build understanding, preserve orientation, reveal consequence, or surface what matters in context. It works less well when it starts trying to replace the core logic of the tool itself. In expert environments, usefulness usually comes from fitting into the workflow, not trying to become the workflow.

Third, the agent has to stay close to the product’s learned structure. In every example, the value of the agent depends on its ability to work within an existing logic of evidence, interpretation, and action. It can’t just float above the tool as a detached intelligence layer. It has to draw from the same structures the user already trusts.

Fourth, evidence and judgment have to remain explicit. If an agent is going to interpret, suggest, or prepare work, its output needs to stay connected to visible evidence, system state, rules, or positions. Just as important, the boundary around user judgment has to stay clear. The user still needs to know where interpretation ends and decision-making begins.
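
If I had to compress that into a single contract, it might look something like this. It is a hypothetical shape rather than a spec, but it captures the constraint all four experiments share: every claim points back at inspectable evidence, declares what it does not know, and leaves the decision with the expert.

```typescript
// A hypothetical contract, not a spec: whatever the agent asserts, suggests,
// or drafts must point back at something the user can inspect, declare what
// it does not know, and leave the decision itself with the expert.
interface AgentClaim {
  role: "case-building" | "triage" | "consequence" | "signal";
  statement: string;                            // what the agent is suggesting
  evidence: { source: string; ref: string }[];  // visible state it draws on
  confidence: "low" | "medium" | "high";
  openQuestions: string[];                      // what it cannot resolve on its own
  decidedBy: "user";                            // the action always stays with the expert
}
```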

That is what ties these concepts together. Expert products do not just need AI features. They need a workable relationship between human expertise and agent participation.

A few principles that fall out of this

So, a few principles seem to hold across all four experiments.

01

Preserve the anchors.

If parts of the product carry orientation for expert users, those structures should not move casually.

02

Add synthesis, not substitution.

Agents are strongest when they help connect fragmented work, not when they try to replace the product's existing logic.

03

Keep claims inspectable.

Suggestions, summaries, and proposed next steps need to stay tied to evidence, rules, positions, or visible system state.

04

Make the agent's role legible.

If an agent is acting with some autonomy, the user should understand what kind of role it is playing and why it is stepping in.

05

Protect user judgment.

The more capable the agent becomes, the more important it is to keep judgment, accountability, and handoff clear.

06

Work within the product's learned structure.

Expert tools already contain a trusted way of working. Agents need to map onto that structure rather than ask users to reorganize themselves around the agent.

I’m not offering this as a universal formula for every product, but for the systems where users have deep fluency, high stakes, and little patience for added ambiguity, these are good places to start.

What I’m left thinking about

For a long time, one of the hardest design problems in enterprise software was figuring out how to improve entrenched tools without breaking the trust and fluency expert users had built inside them. That problem is back now, but in a more demanding form.

The challenge is how to make room inside these products for this new AI participant in the work.

That takes more than attaching a chat assistant to the edge of the tool. It takes a clearer understanding of the learned structure expert users already rely on, and a more careful definition of the role an agent is actually meant to play.

I think the best AI integrations won’t come from treating the agent as a layer dropped on top. They’ll come from treating the agent as a participant that has to earn its place inside an established way of working.

To me, that feels like the real design task ahead of us: not just adding intelligence to expert systems, but figuring out what those systems need to become when expert humans are no longer working alone inside them.