# Designing for States, Not Screens: What OpenAI's Agent Push Means for UX

For me, two announcements stood out from OpenAI’s 2025 DevDay:
- Apps inside ChatGPT, which allow developers to embed fully interactive apps within the conversational interface.
- AgentKit, a toolkit for composing, deploying, and evaluating agents with connectors, workflows, and visual authoring tools.
Looking at them together, they point to a single idea:
OpenAI is building the operating system for agentic interaction. Apps and tools will no longer live beside the agent; they’ll live inside it.
## From Screens to States
In the good ol’ days of traditional UX, we designed flows, pages, and transitions between static views. Agentic system design replaces that with something more fluid: state.
Every step of an agent’s reasoning, every handoff to a tool, every change in context, every uncertainty, is a state.
I think we all know our job as designers today isn’t just to make things look good or flow smoothly; it’s to help people understand what’s happening and give them clear ways to shape or redirect it.
Picture a travel planning agent (like the one OpenAI is demoing for Agent Builder).
A user might say: “Plan a three city trip across Europe in July with museum stops and a tight budget.”
The agent gets to work, comparing destinations, mapping itineraries, and balancing cost and timing.
Then the user pivots: “Actually, drop Berlin, add Lyon instead.”
The interface doesn’t reset. It updates in place.
The map shifts, suggestions adjust, and a friendly note confirms the change: you’re still in your “trip plan,” just with a new twist.

That’s stateful design: preserving continuity, staying transparent, and helping people feel in control even as the system evolves behind the scenes.
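To make the idea concrete, here is a minimal sketch of that trip plan modeled as state that updates in place rather than resetting. All names (`TripPlan`, `applyEdit`, the edit kinds) are illustrative, not any OpenAI API:

```typescript
// Hypothetical model of a trip plan as a single piece of state.
interface TripPlan {
  cities: string[];
  month: string;
  constraints: string[];
}

// A user pivot arrives as an edit, not a restart.
type PlanEdit =
  | { kind: "addCity"; city: string }
  | { kind: "dropCity"; city: string };

// Apply the edit while carrying every other field over unchanged:
// that carry-over is the continuity the interface preserves.
function applyEdit(plan: TripPlan, edit: PlanEdit): TripPlan {
  switch (edit.kind) {
    case "addCity":
      return { ...plan, cities: [...plan.cities, edit.city] };
    case "dropCity":
      return { ...plan, cities: plan.cities.filter((c) => c !== edit.city) };
  }
}

const plan: TripPlan = {
  cities: ["Paris", "Berlin", "Rome"],
  month: "July",
  constraints: ["museum stops", "tight budget"],
};

// "Actually, drop Berlin, add Lyon instead."
const updated = applyEdit(
  applyEdit(plan, { kind: "dropCity", city: "Berlin" }),
  { kind: "addCity", city: "Lyon" }
);
console.log(updated.cities); // ["Paris", "Rome", "Lyon"]
```

The point of the sketch is that the month and constraints survive the pivot untouched; only the cities change, which is exactly what the user expects the interface to reflect.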
## Designing for Agentic Continuity
When conversation, computation, and UI are all fused, UX becomes the bridge between what’s understood and what’s visible. Our work shifts from arranging layouts to making the invisible visible: giving users a sense of where they are in the process and how to steer it.
OpenAI’s direction reinforces the focus we’ve already had on human-agent communication. Component systems need to:
- Show dynamic modes clearly. Components like chat bubbles, tool cards, or tables should signal whether the agent is thinking, executing, paused, waiting, or handling an error.
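One way to keep those signals consistent is to model the agent’s mode as a single typed value that every component reads from. A hedged sketch, with illustrative names and a discriminated union for the modes listed above:

```typescript
// Hypothetical component-level model of what the agent is doing right now.
type AgentMode =
  | { mode: "thinking" }
  | { mode: "executing"; tool: string }
  | { mode: "paused" }
  | { mode: "waiting"; prompt: string } // waiting on user input
  | { mode: "error"; message: string };

// A chat bubble or tool card derives its status label from the mode,
// so the UI always says what is happening and why, not just "thinking...".
function statusLabel(m: AgentMode): string {
  switch (m.mode) {
    case "thinking":
      return "Thinking through options...";
    case "executing":
      return "Running " + m.tool + "...";
    case "paused":
      return "Paused. Tap to resume.";
    case "waiting":
      return "Waiting for you: " + m.prompt;
    case "error":
      return "Something went wrong: " + m.message;
  }
}

console.log(statusLabel({ mode: "executing", tool: "flight search" }));
// "Running flight search..."
```

Because the union is exhaustive, adding a new mode forces every component that renders it to handle it, which keeps the signals from drifting out of sync across the system.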
Agentic continuity is about keeping the conversation human, fluid, forgiving, and clear. It also sets the stage for how we think about collaboration. As the agent becomes a participant in the experience, designers must make its reasoning, actions, and boundaries visible so people can stay oriented and involved.
## Grounding in Human-Agent Design Principles
At Outshift, we’ve been exploring how humans and agents can collaborate through our HAX principles: Control, Clarity, Recovery, Collaboration, and Traceability.
They’re not theoretical; they’re the foundation for making this new agentic world usable and trustworthy.
| Principle | What It Means | How It Applies Now |
|---|---|---|
| Control | Users should always feel in charge | Every agent action, from invoking a tool to booking a service, needs clear stop, cancel, or edit options. Users should steer, not spectate. |
| Clarity | The system’s intent and state should be obvious | Replace vague "thinking…" messages with clear indicators of what’s happening and why. Context builds confidence. |
| Recovery | People need easy ways to fix mistakes | Let users roll back, retry, or tweak input without losing context. Treat recovery as part of the flow, not an afterthought. |
| Collaboration | Agents are partners, not assistants | Build for co-creation: ask clarifying questions, show reasoning, invite input. It’s a dialogue, not a command line. |
| Traceability | Every decision should be explainable | Give people visibility into what data, tools, and reasoning paths were used. Transparency is how agents earn trust. |
Together, these principles shift UX toward a shared workspace where humans and agents think and act together.
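Traceability and Recovery in particular lend themselves to a simple data shape: a per-step trace record the UI can render when someone asks “why did you do that?” A minimal sketch, with all field names assumed for illustration:

```typescript
// Hypothetical trace entry recorded for each agent step.
interface TraceEntry {
  step: number;
  action: string;       // e.g. "invoked tool"
  tool?: string;        // which tool, if any
  inputsUsed: string[]; // data the step relied on (Traceability)
  rationale: string;    // short human-readable reasoning (Clarity)
  reversible: boolean;  // can this step be rolled back? (Recovery)
}

const trace: TraceEntry[] = [];

function record(entry: TraceEntry): void {
  trace.push(entry);
}

record({
  step: 1,
  action: "invoked tool",
  tool: "hotel search",
  inputsUsed: ["budget", "dates"],
  rationale: "User asked for a tight budget, so results were filtered by price.",
  reversible: true,
});

// A "why did you do that?" panel can render directly from this log,
// and the reversible flag tells the UI which steps get an undo affordance.
console.log(trace.length); // 1
```

The design choice here is that explainability is captured at the moment of action, not reconstructed afterward, so the trace is always available when trust is questioned.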
## Where OpenAI Is Headed and What Designers Need to Consider
OpenAI’s roadmap, built on AgentKit, ChatKit, and the Apps SDK, lays the foundation for a new interaction model where state, not screen, defines the experience.
For designers, that means:
- Building state-aware component systems that adapt as the agent works.
So, as OpenAI pushes apps and tools inside the agent, it reinforces the need to always design for contextual understandability, so users can follow what’s happening and maintain trust through shifting states, not screens.