Conversations about data are often focused on technical fields, like development and product management.
When it comes to writers, data-driven decision making becomes a little fuzzy. How does it even work? How can you, as a writer, incorporate data into your work?
Not only that, but how can you make sure you test the impact of copy separately from other elements on the page? (In other words, how can you A/B test copy?)
UX writers and content strategists who understand data, know how to A/B test copy, and can follow a data-driven decision-making process will have a huge competitive advantage.
But it might require a little shift in your thinking…
Explaining the data-driven process
Before we start talking about how to apply a data-driven process to writing, we should talk about the term itself first. What does data-driven actually mean?
Being data-driven is less about the figures, templates, and charts and more about a “mental model” that brings you to a specific conclusion.
Applying that mental model follows a similar pattern, every time you use it. When faced with a creative decision that relies on data, ask yourself:
- What is the primary, content-related question you are trying to answer? What is the actual goal of your design challenge? This shouldn't be difficult—it's something writers and designers do all the time.
- What data will help you answer that question? This focuses you on only the data relevant to your current problem and guards against "data creep", where you keep collecting information for the sake of collecting it.
- What data is incomplete? Think about what is missing from your dataset. Does that limit how much the data can tell you?
- What can the data tell you on its own? Work from first principles: what does the data itself say, when not connected to anything else? For instance, a heatmap might tell you 50% of people scroll beyond a certain point. It doesn't tell you why.
- Can you trust the data? Where does it come from? Is the source reliable?
- What context do you need to interpret the data that the data itself doesn't provide?
These questions show that working with data isn’t just about grabbing information whenever you need it—it’s about following a process that leads you to the right information, at the right time, for the right problem.
How does the data-driven approach apply to writing?
The more UX writers work with data, the more that question answers itself. But the practical question remains: how exactly can UX writers incorporate data into their process?
Apart from asking the right questions, UX writers should recognize they have a plethora of data sources available for them to use in solving design problems. For instance, information can be gathered from:
- User testing sessions
- Interviews
- Telemetry (what people are actually doing inside your product)
- Card sorting exercises
- Heatmaps
- Screen recordings
- Eye tracking
- A/B test results
The big question, of course, is how writers can actually put that data to work.
Let’s take an example.
How to A/B test copy
It helps to talk about this process with as much context as possible. For a UX writer or content strategist, one of the most data-rich projects you'll work on is an A/B test. Let's run through the data-driven mental model in that context.
What is the purpose of the exercise?
A/B tests are designed to determine whether one version of a page performs better than another. For example, let’s imagine we work at a cereal company and want to test some new copy. We have our goal: to increase conversions. That’s where we start.
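Before the test starts, it's also worth estimating how much traffic you'll need to detect a meaningful lift: an underpowered test can't tell you anything. Here's a minimal sketch in Python using a common rule of thumb (roughly 80% power at 95% confidence); the 4% baseline rate and one-point lift are made-up numbers for illustration:

```python
import math


def sample_size_per_variant(baseline_rate, min_lift):
    """Rule-of-thumb sample size per variant (~80% power, 5% significance)
    to detect an absolute lift of `min_lift` over `baseline_rate`."""
    p = baseline_rate
    return round(16 * p * (1 - p) / min_lift ** 2)


# Hypothetical: 4% baseline conversion, want to detect a 1-point lift
print(sample_size_per_variant(0.04, 0.01))  # → 6144 visitors per variant
```

If your page gets a few hundred visitors a day, a number like this tells you the test needs weeks, not days, before the results mean anything.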
What data is relevant to my problem?
Let’s think about all the data we can use to determine our “version 2”. Remember, this isn’t about using all the data you could possibly get: it’s about determining which data is most relevant to the cause.
We can start with:
- Heatmaps. These can show us how users are engaging with the original page.
- User testing. If these pages were tested in person, we can go back to that research and understand what users did and didn't like, even if the evidence is qualitative rather than quantitative.
- Previous A/B tests. Let's say we have five previous A/B tests on this page. We should examine all of them to make sure we don't retread old ground, especially where those tests involved copy.
- Traffic and conversion data. Looking at the previous three months, we can see how this page has performed, especially in comparison to other pages on the website.
What are the limitations of this information? What data is missing?
Remember, first principles. That is, what does each piece of information say about itself? What can we gain from looking at the information in front of us and nothing else?
- Heatmaps. The maps show us only 25% of users scroll beyond the fold.
- Previous A/B tests. These tests show us users converted more on shorter pages than longer ones.
- Traffic and conversion data. Over the past three months, organic traffic has declined and conversions are down 10%.
That’s what the data tells us. Anything about the why is an inference. It would be easy to say that users preferred shorter pages, but that isn’t necessarily true. All we know is that users convert better on shorter pages. Those aren’t the same thing.
Using a data-driven mental model means being very specific about what you know and what you don’t know. As you can see, we’re missing certain data. We don’t have interviews with users telling us what exactly they like about each version of the page: we only have numbers and conversion data.
Create a matrix to visualize what the data tells you
Thinking about all this information can get confusing. Start placing what you know and what you don’t know into a matrix, like this:
| Data point | What does it tell us? | What don't we know? |
| --- | --- | --- |
| Heatmap | 25% of users scroll past the fold; highest interaction is on the banner CTA | Why aren't users scrolling further down the page? |
| A/B test #1 | 10% conversion increase when page length was halved | Did copy impact conversion? |
| A/B test #2 | A further 5% conversion increase when page length was subsequently reduced by 25% | Did copy impact conversion? |
| Traffic | Organic traffic down 10% over the past three months | Is this specific to our company or industry-wide? |
| Conversion | Conversion down 10% over the past three months | What outside the page is impacting conversion rates? |
Create a hypothesis and craft a test
Now, with our information outlined, we can start crafting a hypothesis. There could be several reasons why people aren't visiting or converting, and an A/B test can help us isolate a suspected one.
The important part is that because you’ve been through this mental modeling process, any hypothesis you create is based on reliable information.
Someone might critique the hypothesis you create, but at least it's grounded in facts. That's the difference between good UX writers and great ones: great ones understand how to incorporate data into their work.
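Once the test finishes, the raw conversion counts need a significance check before you trust them, because a small difference between variants can be pure noise. Here's a minimal sketch of a two-proportion z-test in standard-library Python; the visitor and conversion counts are hypothetical:

```python
import math


def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se


# Hypothetical results: original page (A) vs. shorter page with new copy (B)
z = two_proportion_z(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 95% level
```

In this made-up case the copy change would clear the 95% bar, so the lift is unlikely to be random chance. A dedicated stats library would give you the same answer with a p-value attached, but the arithmetic is simple enough to sanity-check by hand.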
Practice makes perfect!
This isn’t something you can learn overnight, and learning how to AB test copy is just an example. The same principles apply when you’re creating an entirely new online flow, or creating new screens in an app, or craft an entirely new email series.
Using a data-driven process within a writing context takes time, but it’s important you get it right. By embedding yourself more in a data-driven process, you’ll prove yourself a valuable ally in the next decade when hiring managers and businesses are looking for creatives with data-based skillsets.
You don’t need to learn to code, you just need to change your thinking. Once you do, you’ll have a competitive advantage that can’t be beat.