I’m a pretty optimistic person in general, so you should always approach my takes skeptically. But I do think everything is gonna be okay – mostly.
We all feel that content design in 2026 is in a transition phase. During our salary survey last year, we asked people how they felt about content design. It was pretty evenly split between people who are worried, and people who are excited about the future.
Speaking privately with content designers of all experience levels, I’m also struck by a renewed sense of optimism. It comes in two forms:
- People who recognize that content design has a role to play regardless of any new tech that comes along
- People who are embracing AI to do new and exciting things with content
Both of these approaches carry some truth. The same old problems exist with content: guiding users, creating fun and memorable experiences, using words as part of design.
It’s also true that despite those problems existing in the same ways, the methods we use to solve them have absolutely changed.
I teach AI workshops every month, and each time I hear from content designers across a wide range of organizations and teams. The feedback I hear ranges from, “I’m leading a team to evaluate output from AI tools” to “my company doesn’t let me use ChatGPT.”
And as the number of job ads that call for AI experience continues to grow, those without access to teams using these skills may feel some anxiety. I already know they do – they tell me as much in our classes.
Some cool stuff is happening
It’s very, very easy to see that AI is in a bubble. Any time new technology comes around, the hype wildly outruns the fundamentals, and it’ll come crashing back to Earth at some point. Of course, that doesn’t change the fact that AI will find a permanent home somewhere in our work lives.
I’ve mentioned this before, but I found it pretty encouraging that OpenAI advertised for content design roles in 2025. These were senior, considered roles, embedded in teams shaping how AI systems are presented, constrained, and understood.
You can see a similar tension at Anthropic. On the one hand, there are claims that AI will handle the majority of coding tasks. On the other hand, the company’s outward-facing posture has leaned heavily into ideas like “Keep Thinking.” There’s a strong emphasis on how people reason with systems, not just what systems can generate. A lot of attention has been paid to the language choices in Claude’s interface: how it explains itself, how it frames uncertainty, how it responds when it can’t help, etc. That’s all content design!
I regularly speak to content design leads at some big companies. The ones working with AI are pretty excited about building tools so their teams can take on more interesting feature work. I’ve seen those tools first-hand. They’re excited because these tools put their direct reports onto more interesting work, where colleagues can actually see content people as collaborators.
Not to mention that content designers are taking it upon themselves to create some cool tools that incorporate AI. I’d recommend checking out this blog post from Jeremy Hoover, or this Claude skill from Christopher Greer, for examples.
But there’s a risk here.
Content’s biggest risk is becoming a bottleneck
Are you aware of how much AI is changing coding?
I’d encourage you to have a peek around LinkedIn or other social media platforms for engineering-related posts. I’ll save you the trouble and link you to this recent talk from Andrej Karpathy, the co-founder of OpenAI, who actually coined the term “vibe coding”. It’s 40 minutes, but worth watching. He makes two pretty key points:
- Now is a really exciting time to be getting into software engineering
- English has become the most powerful language for software development
This is exciting, but it’s worth underlining just how much the daily experience of software engineers is changing.
Like with anything, there’s a lot of hyperbole in these posts. For one thing, there’s a lot of talk about vibe coding and not a lot of actual projects being shared. That’s telling. However, there are enough people saying that their day-to-day tool use has changed for me to take notice. These aren’t people claiming to make the Next Big Thing; they’re just everyday engineers doing the work. And it’s speeding them up in a huge way.
Importantly, I think, Karpathy points out that this shift isn’t binary. Engineers aren’t only adopting AI tools. They’re learning AI skills and then becoming faster by deciding when and how to implement them alongside their existing, traditional coding knowledge.
This, I think, creates risk and opportunity for content designers.
The expectation of process speed is increasing. If we as content designers continue to operate with mental models shaped by slower, more deterministic workflows while the rest of the organization adapts to tools and processes that assume constant iteration, we fall behind.
So, like engineers, we need new skillsets alongside our existing ones.
This is crucial: this shift will happen even as we acknowledge that content problems have been, are, and will be the same.
You’re also not going to see this in job descriptions, by the way. Nothing says, “adopt a new mental model of content work”. But I can tell you that, based on the conversations I’m having with content design managers, overall design managers, and ICs, there is a growing expectation that you need to do more in less time. It’s just assumed.
How content designers can stay ahead in 2026
Far be it from me to tell you how you should do your job. But based on what I’ve heard from people of all ranks, skill levels, and geographies, these are some things you ought to consider doing in 2026:
Learn more about context engineering
It’s amazing how many content designers I speak to – and how many conference talks I’ve seen – still focus on how to prompt properly. That approach is quickly becoming outdated, and you ought to move beyond it.
It’s not that prompting skills are dead (far from it), it’s just that there are more powerful ways to work. The trick to speeding up your delivery is to create a repository of guidelines, instructions, patterns – whatever your organization uses to get content work done. This practice – a form of context engineering – feeds that knowledge into AI tools that you can then use in your daily work.
I’d take a guideline from Michelle Savage’s article here:
“If you’re not using AI tools daily, it’s time to catch up. This isn’t a nice-to-have anymore. I won’t recommend specific tools, but I will say I have three open all day long. Most AI-forward content designers I know are the same.”
This is the head of content design at PayPal, so it’s worth taking what she says seriously.
The pushback I hear against this is, “I don’t want to lose my brainpower”. To which my response is: you don’t have to. You can use your brain to decide which tasks you want to spend more time on, and which tasks you can “delegate” so you don’t have to worry about them. I would never suggest using them for anything and everything. Be conscious about what you’re handing off and what you’re not.
But, I do think you ought to keep in mind that your work isn’t purely creative. Text is infrastructure. It’s a building block. It’s becoming closer and closer to engineering over time, so it’s worth thinking about text as a layer of code. You don’t need to spend as much brainpower and passion on an error message as you would an onboarding flow (email me to argue about it if you want).
How can you create this repository? Open Claude, GPT, Gemini, Voiceflow, Chatbase, etc, and start a project. Upload all the guidelines and files that inform how you write (ideally in markdown), and create a system prompt that instructs your tool on how to respond. Even half an hour of messing around will produce something cool.
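If you want to go a step beyond the chat interface, the same idea works in code. Here’s a minimal sketch, assuming the OpenAI Python SDK and an API key in your environment – the guideline file names and the model are placeholders, not recommendations:

```python
# A minimal sketch of a "content assistant" built from your own guidelines.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY set in your environment;
# the file names and model are placeholders - swap in whatever your org uses.
from pathlib import Path

from openai import OpenAI

GUIDELINE_FILES = [
    "voice-and-tone.md",          # hypothetical guideline files
    "error-message-patterns.md",
    "terminology.md",
]

# Concatenate your markdown guidelines into one context block.
context = "\n\n".join(
    f"## {name}\n{Path(name).read_text()}" for name in GUIDELINE_FILES
)

SYSTEM_PROMPT = (
    "You are a UX writing assistant for our product. "
    "Follow these guidelines exactly. If a request conflicts with them, "
    "say so instead of guessing.\n\n" + context
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # assumption: use whichever model your org allows
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Draft an error message for a failed upload."},
    ],
)
print(response.choices[0].message.content)
```

The “Projects” features in tools like Claude and ChatGPT do something very similar for you. The point of the sketch is that your markdown guidelines are the valuable part – they can move between tools.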
As for what you use it for? You decide. I’ve seen everything from apps that help anyone create UX content to ones that assist engineers in writing error messages.
But that depends on you having documentation. Which brings me to our next point.
Your documentation is more important than ever
Whenever we talk about this in our workshops, content designers have a pretty stark realization: “oh crap, I don’t have enough documentation.”
Content design relies on a whole bunch of patterns and thinking that are implicit to the writing process – knowledge you will have gathered over a lifetime of writing and reading. You understand how grammar should sound, you understand how an error message should be written, and I’m not just talking about brand terms and words to avoid. I’m talking about patterns.
These patterns and guidelines are essential for creating context for any type of AI tool. You probably already have much of this in a design system or (worse) in static documents somewhere. If you don’t, you need to:
- Write down how content is written for your product
- Identify the patterns for how that content is created
- Write it in a way that machines can understand in any context
The simplest way to do this is in markdown files that can be moved between different contexts and programs, or stored on an internal wiki. Component guidelines, brand guidelines, content patterns – all the rules that guide your content need to be written down. They need to be documented.
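To make “a way machines can understand” concrete, here’s a hedged sketch of what one such file might look like. The pattern and rules are invented examples, not a standard – the point is explicit structure: headings, numbered steps, and good/bad examples that a colleague and an LLM can both follow.

```markdown
# Error messages

## Pattern
1. Say what happened, in plain language.
2. Say why, if we know.
3. Say what the user can do next.

## Rules
- Never blame the user.
- No error codes in the body text; put them behind a details link.

## Example
Bad:  Invalid input detected.
Good: We couldn't save your changes. Check your connection and try again.
```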
This is a pain for people like me who are naturally bad at documentation. But it needs to be done. I hate the term “second brain”, but it pretty much needs to be that. Importantly, you ought to think about creating this documentation in simple, accessible, and transferable ways that can move across tools. That’s why markdown files are good – they’re easy to shift across contexts.
The real benefit to this whole process, though, is being able to think carefully and write down everything that informs how you design. The more you do that, the more powerful your machine assistants can be.
Take a look at the repository here, published by Adedayo Agarau, who worked on Grok. You might find some examples of how to format or structure your own repositories.
Closely align text assistance to where people do the work
This is more AI-adjacent than AI-specific, but you need to make sure your documentation and tools are close to where people are doing the actual work.
So many content designers, I’m sorry to say, end up creating documentation that sits alone, unused, on a wiki somewhere. We send out emails and say, “good news everyone, the style guide is here!” and of course, no one uses it. I’ve made this mistake as well.
Your guidelines and tools need to be connected to where people are actually doing the work. That might mean in Figma, it might mean in a CMS, it might mean in some form of review. But the more gaps you add between where people are doing the work and the rules they need to follow, the less adoption you’re going to get.
This is far easier than it used to be. I mentioned Christopher Greer’s Claude Skill earlier – he created a utility that extracts text from a Figma frame and reviews it based on a set of criteria. You don’t have to do exactly this! In fact I’d recommend you find a solution that works for your own organization rather than copying someone else’s. But it’s an example that shows anyone can create tools to more closely align text and design at the point of creation.
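To show how approachable this has become, here’s a rough sketch of the extraction half of that idea – not his implementation – using the public Figma REST API. You’d need a personal access token and a file key; everything else here is an assumption you’d adapt:

```python
# Rough sketch: pull every text layer from a Figma file so the strings can
# be reviewed against your content guidelines.
# Assumes a FIGMA_TOKEN environment variable and a file key from the file's URL.
import os

import requests

FILE_KEY = "your-file-key"  # placeholder

resp = requests.get(
    f"https://api.figma.com/v1/files/{FILE_KEY}",
    headers={"X-Figma-Token": os.environ["FIGMA_TOKEN"]},
    timeout=30,
)
resp.raise_for_status()

def collect_text(node, found):
    """Recursively walk the document tree, keeping the characters of TEXT nodes."""
    if node.get("type") == "TEXT":
        found.append(node.get("characters", ""))
    for child in node.get("children", []):
        collect_text(child, found)

texts = []
collect_text(resp.json()["document"], texts)
for t in texts:
    print(t)  # next step: send each string to your review prompt or skill
```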
Beware of advice that says this is going to be easy, or “just use this one single tool” to get the job done. People work at all sorts of organizations with severe technical constraints. Some can’t even use AI at all. Tools that plug into Figma for text editing can be expensive and you may not have the budget. User interface text might be scattered across a few different sources.
Whatever your constraints are, just think carefully about how you can get all that great documentation and all those guidelines we mentioned earlier into a living, active system where people are actually doing the work. Make it living.
You need to understand tech
I don’t think it’s a coincidence that more content design projects are starting to show up on GitHub.
Not because content designers are suddenly becoming engineers, but because the work itself is moving closer to engineering. That shift is a direct consequence of how LLMs and natural language are changing the way software is built. English is increasingly acting as an interface to systems, not just to users, and that pulls content designers closer to the underlying mechanics whether we like it or not.
Some of the concepts that increasingly come up in content design work include:
- Structured formats like JSON. LLMs work far more reliably when information is explicit, labeled, and predictable. If you understand why machines prefer structured inputs, you’re better equipped to design content that behaves well at scale. (There’s a small illustration after this list.)
- How content is passed between systems. APIs, services, and pipelines determine how content moves, where it can be reused, and where it breaks. Knowing this changes how you think about patterns and ownership.
- The Model Context Protocol (MCP) and similar approaches. These define how models are given access to tools, data, and instructions.
- Versioning and deployment. Understanding this helps you design content that can survive real production conditions.
- System prompts. Understanding how to write solid system messages and prompts makes a huge difference between AI tools that work well and those that fail.
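On that first bullet, here’s a small illustration. The field names are invented, but the shape is the point: when every piece of copy is labeled, humans, pipelines, and LLMs all know what each string is for and where it’s allowed to appear.

```json
{
  "id": "upload.failed",
  "component": "toast",
  "severity": "error",
  "title": "We couldn't upload your file",
  "body": "Check your connection and try again.",
  "action": { "label": "Retry", "event": "upload.retry" }
}
```

Compare that with a single free-text string: the structured version can be validated, translated, reused, and evaluated field by field.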
When you can speak to how content flows through systems, engineers tend to pull you in earlier. Plus you gain more allies.
Evaluation and LLMs as judges
In AI-assisted environments, you need to evaluate huge amounts of content, and that requires explicit criteria. Most content teams already have these criteria, but they’re rarely written down. You know what clarity looks like. You know when something feels misleading, overly confident, off-brand, or legally risky. You know which edge cases matter and which ones can be ignored.
You need to externalize this into a system for two reasons.
First, it gives humans a shared standard. Instead of subjective debates about tone or quality, you have agreed-upon lenses to evaluate against.
Second, those criteria become inputs to the system itself.
Using an LLM as a judge carries a lot of inherent risk, and the approach is usually applied to quantitative benchmarks rather than qualitative output. But content designers have a role to play here, because qualitative evaluation is inherently a content problem!
Writing evaluation criteria forces you to confront trade-offs you’ve been avoiding. What matters more: warmth or precision? Consistency or flexibility? Speed or caution? Humans can hold those tensions intuitively, but systems can’t and need you to decide for them. Which, by the way, also increases your influence. I constantly hear great stories from content designers who are now working more closely than ever with engineers to create these types of evaluation criteria.
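Here’s a minimal sketch of what those externalized criteria look like once they become inputs to the system – an LLM-as-judge scoring copy against a written rubric. The rubric and model name are assumptions; substitute your own documented standards:

```python
# Minimal LLM-as-judge sketch: score a piece of UI copy against explicit,
# written-down criteria. Assumes the OpenAI Python SDK and an API key.
import json

from openai import OpenAI

RUBRIC = """Score the copy from 1 to 5 on each criterion:
- clarity: can a first-time user tell what happened and what to do next?
- tone: calm and direct, never blaming the user
- risk: no overpromising, no legally risky claims
Respond with JSON only: {"clarity": n, "tone": n, "risk": n, "notes": "..."}"""

def judge(copy_text: str) -> dict:
    """Ask the model to score one string against the rubric, returning a dict."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: use whatever model your org allows
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": copy_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(judge("Error 5021: Operation failed. Contact support."))
```

Before trusting a judge like this, spot-check its scores against human ratings – that calibration work is itself a content design task.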
Which brings me to my last point:
Don’t avoid learning due to the fear of replacement
One of the things I hear from content designers is, “if I create all these AI tools, aren’t I going to be replaced?”
What I hear from the content designers actually doing this work is that it creates more work for them, not less. You may very well find that this approach increases your influence and the respect people have for you.
Oh, and keep your skills sharp
Don’t slip. The only reason you’ll be able to wield influence with all of these other skills is if your essential content design skills are strong.
Can you still make sense of the content mess? Are you up to date with testing methods, and can you determine how to action insights from users’ comments? Are you able to demonstrate the impact of your work?
All of this matters more than ever.
Content design in 2026 is about influence through skill
The content designers I see having the most impact right now are leaning into this shift. None of this replaces the core of what content designers do; it’s all a continuation of that work. It’s just being done in different ways, and as a result, content designers are being pulled earlier into conversations that often used to happen without them.
There are no guarantees here. But if you want to make sure 2026 is the year you have some impact, this list is where I’d start.

