Headless does not mean designless
When a platform is mostly consumed by agents, design does not go away. It moves into contracts, defaults, permissions, feedback loops, tool descriptions, source health, and the parts of the product humans may never directly touch.
Designers are used to designing for a person.
We give that person a goal, a context, a job to be done, a level of confidence, a set of fears, a few constraints, maybe a name if the team still likes personas. Then we design the path through the product: what they see, what they understand, what they trust, what they do next.
That work still matters. I do not think human-centered design became old-fashioned because agents can click buttons, call APIs, write code, or summarize a dashboard.
But something is changing underneath it. More products are becoming platforms that are not primarily consumed through a human-facing screen. They are consumed by agents: fast, parallel, literal, persistent, and sometimes wildly overconfident. The product still has users. They are just not always looking at the UI.
The interface moved below the glass
A headless product can look invisible from a traditional portfolio lens. There may be no beautiful dashboard to photograph. No onboarding flow. No hero interaction. No tidy persona journey from awareness to activation.
But there is absolutely an interface.
The interface is the API contract. The schema. The tool description. The examples in the docs. The naming of an endpoint. The default value. The permission boundary. The retry behavior. The error message. The source label. The confidence score. The audit log. The moment the system decides to continue, pause, escalate, or ask a human.
If a human sees a confusing label, they might hesitate. They might ask someone. They might ignore it. If an agent reads a confusing label, it may confidently do the wrong thing one hundred times before anyone notices.
That is a design problem.
Human personas are not enough for agent populations
Traditional persona work assumes a relatively bounded human actor. The user has motivations, attention limits, emotional states, mental models, social pressures, and a context of use. We design for those things because they shape behavior.
Agent users have a different shape. They do not get tired in the same way. They do not skim because they are bored. They do not feel reassured by a well-composed empty state. They may call the same tool thousands of times, chain outputs into other tools, misread an affordance, overfit to an example, or treat missing context as permission to guess.
So the design object changes. We are not only designing a deterministic path for one human persona. We are designing operating conditions for a population of agent behaviors.
That means personas start to look more like capability profiles. Retrieval agent. Planning agent. Execution agent. Reviewer agent. Support agent. Research agent. Coordinator agent. Each one needs different permissions, context, failure handling, and evidence standards. And behind all of them is still a human steward who needs to understand what happened and why.
In a headless platform, naming is interaction design
I have become slightly obsessive about names because agents make sloppy naming expensive.
A vague field name is not just an implementation detail. It is a small product decision that every downstream workflow will reuse. A tool called update_status sounds harmless until it turns out nobody knows whether it updates a draft state, a customer-visible state, a planning state, or a compliance-relevant state.
The same is true for data objects. If the system has a weak noun, the product will eventually inherit weak behavior. Agents need clear nouns, clear verbs, clear scopes, clear preconditions, and clear consequences. Humans need those too, but humans are more likely to notice when something feels off.
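To make the contrast concrete, here is a minimal sketch of two tool contracts. Both tools, their fields, and the order-status domain are hypothetical, invented for illustration, not drawn from any real API.

```python
# Hypothetical tool contracts; every name and field here is illustrative.

# Vague: an agent cannot tell which "status" this touches or what happens next.
vague_tool = {
    "name": "update_status",
    "description": "Updates the status.",
    "parameters": {"status": "string"},
}

# Explicit: the noun, scope, precondition, and consequence live in the contract.
explicit_tool = {
    "name": "publish_customer_visible_order_status",
    "description": (
        "Sets the order status shown to the customer. "
        "Requires the order to have passed review. "
        "Triggers a customer notification email."
    ),
    "parameters": {
        "order_id": "string",
        "status": "one of: processing, shipped, delivered",
    },
}
```

A human reading the vague version might pause and ask. An agent will pick an interpretation and run with it, which is why the explicit version earns its longer name.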
Headless design has a craft layer. It is just a quieter craft layer: structured names, crisp descriptions, examples that do not teach the wrong behavior, and defaults that guide the system toward the safest useful action.
Docs become product surface
In a screen-based product, documentation is often treated as support material. In an agent-consumed product, docs are part of the runtime experience.
The model reads them. The tool caller uses them. The engineer copies from them. The agent framework may turn them into available actions. The examples become patterns the system repeats.
This makes documentation a design medium. Not the boring afterthought kind. The actual product surface kind.
A strong tool description should answer the same questions a good interface answers. What is this for? When should I use it? What should I never use it for? What inputs are required? What evidence should I check first? What happens next? What does success look like? What are the failure modes? When should a human be pulled in?
That is UX writing. That is product design. It just happens to be read by both humans and machines.
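One way to hold a tool description to that standard is to give each question its own field. This template is a sketch under my own assumptions; the field names and the refund scenario are invented, not a real framework's schema.

```python
# A hypothetical description template that answers the questions above, in order.
# Field names and the refund example are illustrative assumptions.
tool_description = {
    "purpose": "Refund a single order line item to the original payment method.",
    "use_when": "The customer has an approved refund request for that line item.",
    "never_use_for": "Bulk refunds, goodwill credits, or disputed charges.",
    "required_inputs": ["order_id", "line_item_id", "refund_request_id"],
    "check_first": "The refund_request_id must reference an approved request.",
    "on_success": "The payment provider issues the refund; the order log updates.",
    "failure_modes": ["request not approved", "refund window expired"],
    "escalate_to_human": "Refund amount exceeds the approved amount.",
}
```

If a field is hard to fill in, that is usually a sign the tool itself is underspecified, not that the template is too demanding.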
Evals are usability testing for agents
If agents are real users of the platform, then evals become a kind of usability testing.
Not because agents are people. Because the platform has to prove that its instructions, contracts, permissions, and feedback loops produce reliable behavior under pressure.
A human usability test might ask whether a person can complete setup without getting lost. An agent usability test might ask whether a planning agent chooses the right tool, respects the review gate, preserves provenance, refuses an unsafe shortcut, recovers from a missing field, and explains what it did in a way a human can audit.
The questions are different, but the design instinct is familiar. Where does the user misunderstand the system? Where does the system make the wrong thing too easy? Where is the recovery path? Where does confidence exceed evidence? Where does the product need to slow down?
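An agent usability test can be as small as a recorded run plus a handful of behavior checks. The transcript shape and check names below are assumptions, a sketch of the idea rather than any particular eval framework.

```python
# A minimal eval sketch: treat one recorded agent run as a usability session
# and score it against the behaviors we care about. All field names are assumed.

def eval_planning_run(transcript: dict) -> dict:
    """Score one agent run; each key is a pass/fail behavior check."""
    return {
        "chose_allowed_tool": transcript["tool"] in transcript["allowed_tools"],
        "respected_review_gate": not (
            transcript["action_is_irreversible"] and not transcript["reviewed"]
        ),
        "preserved_provenance": all("source" in s for s in transcript["steps"]),
        "explained_itself": bool(transcript.get("summary")),
    }

# A fabricated passing run, for illustration only.
run = {
    "tool": "draft_update",
    "allowed_tools": ["draft_update", "fetch_context"],
    "action_is_irreversible": False,
    "reviewed": False,
    "steps": [{"source": "crm:account/42", "output": "draft text"}],
    "summary": "Drafted an update from the account record; nothing was sent.",
}

scores = eval_planning_run(run)
```

Run the same checks across many transcripts and the failure clusters start to read exactly like usability findings: here is where the system made the wrong thing too easy.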
The AI-native designer should be comfortable moving between both kinds of testing.
Trust moves from persuasion to instrumentation
For a human-facing interface, trust is often expressed through hierarchy, language, confirmation, progressive disclosure, and visual clarity. Those still matter when a human is in the loop.
For an agent-facing surface, trust has to be more structural. Provenance. Source health. Permissions. Versioning. Trace logs. Confidence thresholds. Reversible actions. Dry runs. Human review gates. Clear authority boundaries. Evidence attached to outputs.
This is where design leadership gets very real. It is not enough to say the system should be trustworthy. You have to decide where trust is earned, where it is displayed, where it is recorded, and where the agent is not allowed to act without more evidence.
The product has to make good behavior the path of least resistance for users that do not have human hesitation as a built-in safety feature.
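What "structural" trust can mean in practice: the decision to act is made by platform policy, not by the agent's self-reported confidence alone. The gate below is a sketch under assumed thresholds; the function, its inputs, and the 0.9 cutoff are all illustrative.

```python
# A sketch of a trust gate: policy, not the agent, decides whether an action
# runs. The thresholds and outcome names are assumptions for illustration.

def gate_action(confidence: float, reversible: bool, has_evidence: bool) -> str:
    """Return what the platform permits: 'execute', 'dry_run', or 'escalate'."""
    if not has_evidence:
        return "escalate"   # no evidence attached to the output: a human decides
    if reversible and confidence >= 0.9:
        return "execute"    # high confidence and undoable: proceed
    if reversible:
        return "dry_run"    # undoable but uncertain: show the plan, do nothing
    return "escalate"       # irreversible actions always pass a review gate
```

Note the asymmetry: confidence can never buy an agent past the review gate for an irreversible action. That is hesitation, built in as structure rather than feeling.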
The human journey is still the anchor
I do not want a future where designers ignore humans because agents are the immediate consumers of the platform. That would be a category error.
Agents act on behalf of humans, teams, businesses, and communities. The human journey still tells us what matters. What risk is the person trying to reduce? What outcome are they accountable for? What judgment should never be silently delegated? What does the person need to understand after the agent has acted?
The difference is that the journey now includes invisible stretches. A user makes a request. Agents gather context, call tools, transform data, check policies, generate a plan, ask for review, execute a step, leave a trace, and update the system. The human may only see the beginning and the end, but design has to shape the middle.
That middle is where a lot of product quality will live.
What this asks of designers
Designers need to get more fluent in the materials of headless experience: schemas, APIs, tool contracts, event logs, prompts, evals, permissions, observability, and source-of-truth systems.
Not because every designer needs to become a backend engineer. Because if the product is being used by agents, those materials are part of the user experience.
We need to ask different critique questions. Is the tool description too broad? Can this action be taken without enough context? What happens when the source is stale? Does the agent know when to stop? Can a human reconstruct the decision? Are the nouns in the data model the same nouns the user would recognize? Are we designing for one happy-path assistant, or for many agents operating at once?
This is still design. It is just design with fewer surfaces to decorate and more systems to make legible.
My bias
I think headless platforms are going to expose which teams have been treating design as styling and which teams have been treating it as product judgment.
If design only means the visual layer, then yes, a headless platform can appear to need less of it. But if design means shaping how a system is understood, trusted, used, constrained, recovered from, and improved, then headless platforms need more design, not less.
The agent user is fast. The agent user is many. The agent user will amplify whatever the product makes easy, clear, ambiguous, or dangerous.
That is the AI-native lens for me: design is no longer just the interface between a human and a product. It is the interface between intent, data, models, tools, judgment, and the people who remain accountable when the system acts.