For most of my career, I designed inside containers.
A product existed. A company program existed. An onboarding flow existed. The organization defined the container, and my job was to design the best experience inside it.
Over the past year, AI changed that for me in a way I didn't expect. Not just because it made certain parts of my work faster, but because it changed what I can own.
As AI tools make building systems dramatically easier, designers are no longer confined to shaping experiences inside existing containers. Increasingly, we can help build the container itself. And when that constraint starts to disappear, the scope of what designers can own changes with it.
That shift has made me notice three patterns I think many designers are starting to feel: what's declining, what's becoming the new standard, and what might be emerging next.
Declining: Constrained Influence
In many organizations, design influence starts relatively late.
The business defines the problem and often the terrain where the solution is expected to live. It might sound like: "We need more AI training," or "We need a new onboarding experience." By the time design enters, the container is already defined. The designer's job is to make the experience inside that container as effective and usable as possible.
And if designers want to influence earlier, we typically rely on artifacts: decks, frameworks, mockups, documents. Those artifacts can be powerful, but they rely on something fragile: imagination.
You're asking people to picture what something might feel like. And if they can't quite see it, the response is often polite but vague: "Hmm, maybe."
That's usually where ideas stall.
For a long time, influencing strategy meant getting very good at explaining ideas. But when the cost of building drops, you don't have to explain as much. You can show. And when people can experience an idea instead of imagining it, the conversation changes.
A New Standard: Experience as the Pitch
I don't think every idea needs a fully built experience. But I do think the standard for how we build belief is changing.
For ideas that are novel or hard to grasp, it's increasingly possible to reduce ambiguity earlier by letting people experience something instead of just imagining it.
A normal request that raised bigger questions
I started seeing this shift in early 2025. At HubSpot, we had just launched an AI 101 training to help everyone get up to speed on the basics. After that, leadership asked us to define AI fluency levels and start building AI 201.
On paper, that sounded like a normal next step. But when I started thinking about it, something felt off.
We didn't actually know what AI fluency looked like inside the company yet. We just knew people were experimenting with AI in very different ways. Some were drafting content. Some were analyzing data. Some were automating workflows. Some were building tools. And what "fluency" looked like for a marketer might be very different from what it looked like for someone in product, support, or analytics.
But we only had anecdotes. We weren't measuring or tracking those differences.
If we jumped straight to a generic training program, we'd mostly be guessing, and probably overlooking the good work already happening in pockets of the company we couldn't yet see. So the real problem wasn't building the next course. The real problem was visibility.
In order to help employees level up, we first needed to understand how they were actually using AI in their work.
A different way forward
So I proposed something different. Instead of defining AI fluency first, we would build a system to discover it.
Employees would bring one real example of how they used AI at work. The system would ask about the tools they used, the goal, the approach, and the outcome. From that story, it would generate a shareable AI persona and a few coaching suggestions to help them grow.
It would be fun and useful for the user. But behind the scenes, every interaction would also generate structured data about how AI was being used across the company.
So instead of guessing what fluency meant, we could learn from real behavior.
The idea didn't land until it was real
I pitched that idea. I had slides. I had mockups. Stakeholders were interested, but not convinced.
And honestly, if I had stopped there, the idea probably would have died in the deck.
Stakeholders liked the goal of collecting more data. But a personality quiz? How was that different from a survey? Would it actually feel useful? Would it feel silly? It was a quirky idea, and it was hard to understand through explanation alone.
So instead of continuing to debate it, I built the system.
Not because anyone asked me to, but because now I could.
And when I say "built the system," I don't mean a quick prototype. I designed a conversational flow that collects real AI examples, logic that maps those stories into behavioral personas, coaching that adapts based on what the user shares, and a data layer that lets us analyze patterns in how AI is being used across the company.
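To make that architecture concrete, here is a minimal sketch of the middle piece: mapping a user's story onto a behavioral persona and emitting a structured record for later analysis. All names here (`AIStory`, `map_to_persona`, the persona labels and keyword rules) are hypothetical illustrations, not the actual implementation, which used an LLM-driven conversational flow rather than simple keyword matching.

```python
from dataclasses import dataclass, asdict

# Hypothetical sketch only: the real system used adaptive, AI-driven
# conversation, not keyword rules. This shows the shape of the idea.

@dataclass
class AIStory:
    tools: list      # e.g. ["ChatGPT"]
    goal: str        # what the employee was trying to do
    approach: str    # how they used AI to do it
    outcome: str     # what happened

# Illustrative keyword-to-persona rules (invented labels).
PERSONA_RULES = {
    "automating": "Workflow Automator",
    "analyzing": "Data Explorer",
    "drafting": "Content Accelerator",
    "building": "Tool Builder",
}

def map_to_persona(story: AIStory) -> dict:
    """Assign a persona from the story text and return a structured record."""
    text = f"{story.goal} {story.approach}".lower()
    persona = next(
        (name for keyword, name in PERSONA_RULES.items() if keyword in text),
        "AI Explorer",  # default when no pattern matches
    )
    # The record is both user-facing (persona) and analyzable (raw fields).
    return {"persona": persona, **asdict(story)}

record = map_to_persona(AIStory(
    tools=["ChatGPT"],
    goal="Analyzing support ticket trends",
    approach="Summarized weekly tickets with prompts",
    outcome="Found three recurring issues",
))
```

The point of the design is the last line of `map_to_persona`: every fun, user-facing result doubles as a structured data point, which is what turned individual stories into company-wide visibility.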
In other words, this wasn't just design artifacts anymore. With the help of AI, it became a working product.
The experience changed the conversation
Once people could actually go through the experience themselves, the reaction changed completely. They could answer the questions, see their persona, get coaching suggestions, and feel the concept instead of trying to imagine it.
The conversation moved from "I'm not sure" to "Oh, this is cool. We should run with this."
So I kept going: testing with users, iterating, and bringing in the right partners to help scale and secure it. We launched it internally. Employees responded well to it and continue to use it. And the data from that system is now helping shape our AI enablement strategy going forward.
Emerging: Building the Container
That experience stuck with me, because it didn't just help sell an idea. It revealed something deeper.
The idea itself is something I probably wouldn't have proposed a few years ago, not because it wasn't interesting, but because it would have required building an entirely new system. Historically, that meant needing engineering resources, roadmap space, and organizational alignment before the idea could even exist.
If the best solution required a new system, it often wasn't worth proposing.
But with AI tools, I could build the core experience myself. And that changes the equation.
When the cost of building drops, designers aren't just shaping solutions inside containers anymore. We can create new containers entirely: new tools, new workflows, new products, new systems.
That expands the solution space.
I think a lot of designers know this feeling. You can see the better solution, but the better solution would require a new workflow, a new tool, a new layer of logic, or support you probably won't get. So the idea gets edited down. You make it fit the existing container.
And sometimes that is the right call. But sometimes it means the best idea never gets proposed at all.
Designers have spent a long time saying, "If only we could build X." AI is starting to remove that excuse.
The designer's job isn't just "How do we improve this thing?" It can also become "Should this thing exist at all?" And if not, "What should exist instead?"
Ownership Doesn't Mean Doing Everything
This is usually the point where some designers get nervous.
When people hear that designers can build systems now, the reaction is often: "Wait, are we supposed to do everything?"
I don't think that's the shift.
Ownership doesn't mean soloing. It means carrying the thread further.
In my case, I built and tested the system. But bringing it to scale required partners. I worked with a developer to get the system onto our internal AI platform so we could safely store each interaction. I also brought in our Analytics team to do a more thorough, expert analysis of the raw data to make sure we were seeing the full picture.
That matters. When an idea needs to scale, you bring in the people who can make it scalable, secure, and maintainable.
So the new pattern isn't necessarily "designer does everything." The pattern is that designers can now initiate and develop ideas much further before needing that support. It means staying accountable for the outcome, including the quality bar, and knowing when to bring in the right partners to meet it.
More Ownership Needs Broader Judgment
AI changed something else, too. The ability to build systems is no longer limited to engineers. Product managers can build things. Marketers can build things. More and more people can stitch together working systems now.
That's exciting. But it also means the quality of those systems depends heavily on the thinking behind them.
Designers are trained to think about interaction, feedback loops, edge cases, and human behavior. So if designers step into building systems, we have both an opportunity and a responsibility to bring that design judgment with us.
Not just to build things quickly. To build them thoughtfully.
Design judgment is part of what makes designers well positioned for this moment. But if we want to own more, we also need to have the broader judgment that ownership demands: product sense, prioritization, and the ability to decide what's worth building in the first place.
The Opportunity I'm Seeing
AI didn't just make designers faster. It removed a constraint.
For a long time, designers shaped experiences inside containers that other parts of the organization defined. But as the cost of building drops, designers can increasingly create the container itself.
That expands what we can imagine. It expands what we can test. And it expands what we can own.
For some designers, that might mean influencing strategy earlier. For others, it might mean building and launching entirely new systems. And sometimes, it might even change the role you end up playing in the organization. It certainly did for me.
The question is no longer just what designers can design. It's what we're willing to own bringing to life, and whether we're willing to build the container when the right one doesn't exist yet.