As a writer, you’ve probably crafted cracking copy for your audience without really thinking about it. You know what works because you know what great writing is—and what it isn’t. It’s innate. With UX writing, the value isn’t actually in your words, it’s in the value those words offer users and the business. Using content tests and data to prove that value should be your north star as a UX writer, as UX writing sits at the intersection of data and creativity.
Getting that data in a way that offers insight rather than noise can be a challenge. Here are five tests you can run to help you develop rich data and curate insights you can use to make your words better for everybody.
1. The A/B test
A/B testing absolutely every variable is how Netflix made itself the world-class product experience it is today. Every UI change, including copy, underwent some sort of A/B testing to make sure that the changes being made were the right ones.
Simply put, A/B testing pits a variant against a control to see which one comes out on top on a given metric. For a call to action (CTA) button, that metric would be clickthrough rate (the share of users who click the button).
You can show half your users the variant while the other half sees the control (your product team can help you set this up) to measure the impact your changes actually have.
What an A/B test log may look like—as long as it’s organized, you’ll get value out of it
It’s super important that you keep a log of the tests you’re running, the tests you’ve already run, and the ones you want to run so you don’t end up doubling up. You don’t need to do anything too fancy for this—you can just put it in a shared spreadsheet.
Finally, make sure you assess the results for statistical significance to determine whether there's a measurable impact, or whether the numbers you're seeing are a mirage. SurveyMonkey has a great tool for working this out.
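If you'd rather check significance yourself than use an online calculator, the standard approach for comparing two clickthrough rates is a two-proportion z-test. Here's a minimal sketch in Python; the click and view counts are made-up numbers for illustration, and the function name is mine, not from any particular tool:

```python
import math

def ab_significance(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test comparing control (A) and variant (B) CTRs.

    Returns each CTR and a two-tailed p-value; a p-value below 0.05
    is the conventional bar for calling a result significant.
    """
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled proportion under the null hypothesis that A and B perform the same
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, p_value

# Hypothetical test: original CTA copy vs. a reworded CTA
p_a, p_b, p_value = ab_significance(120, 2400, 156, 2400)
print(f"Control CTR: {p_a:.1%}, Variant CTR: {p_b:.1%}, p-value: {p_value:.3f}")
```

The key point is the sample size hiding in the formula: small differences in CTR need thousands of views before the p-value drops below 0.05, which is why tests on low-traffic screens often come back inconclusive.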
2. Flesch-Kincaid test (and other readability tests)
How do you make sure your words are easily understood by the largest number of users? Readability isn't just about whether your copy helps users complete their tasks; it's also a matter of accessibility.
Assigning your copy a Flesch-Kincaid score is one of the more tried-and-true methods of assessing readability, judging syllables per word and sentence length. As a rule of thumb, aim for a grade level of around 8, though this may dial up or down depending on your audience. You can take a deep dive into Flesch-Kincaid here.
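The Flesch-Kincaid grade level is a simple weighted formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. A rough sketch of how a tool computes it is below; note the syllable counter is a naive vowel-group heuristic (real readability tools use dictionaries and handle far more edge cases):

```python
import re

def count_syllables(word):
    """Very rough syllable estimate: count groups of consecutive vowels,
    with one common adjustment for a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith("le") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

print(round(flesch_kincaid_grade("Tap the button below to save your changes."), 1))
```

You can see from the weights why short sentences and one- or two-syllable words drive the score down: a typical piece of UI microcopy scores well below grade 8, while a dense terms-of-service sentence easily climbs into the teens.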
There are other tools you can use, however. For example, while undertaking a Teaching English as a Second Language course, I bumped into VocabKitchen.
Tools like VocabKitchen can help identify terms which may be difficult for international audiences
It’s generally used to assess a text for suitability for various grade levels, but it will help you identify any difficult terms for non-native speakers of English. This is particularly useful if you have an international audience, but you’re not quite ready for full localization yet.
3. The Subject Matter Expert (SME) test
This isn’t so much a technical test as a stress test for your copy, especially if you’re working with a B2B product. Quite often, subject matter experts will know your users better than you do, because they spend all day, every day talking to them. Sales representatives, for example, are SMEs you’ll often work with.
They’re on the front lines with your customers, talking to them about their worries, stresses, and tasks to do—all while trying to sell them products—which is great insight for any content design work you may need to do.
They’ll be up on industry-specific jargon, so they’ll be a great judge of whether the language you’re using is too complicated or not specific enough. This is also where challenges to your copy will come up, and that’s a great thing. If you have the data to