
GPT-3 is coming. Are you ready for AI?

UX writers and content strategists need to start preparing for GPT-3 - and we can do that by articulating the value we provide now.

The tech industry (or at least, the people on Twitter) got a bit excited this week when OpenAI started giving people access to its GPT-3 text-generation tool. The results were exhilarating, scary, and concerning all at once.

They also brought a specific truth to light: UX writers and content strategists must adapt now before businesses start believing they can use AI to replace them.

The question is, how do we do that? That’s what I want this post to explore: how the tools and strategies we use may not make us AI-proof, but can at least make us AI-resistant.

What the hell is GPT-3?

GPT-3 is a text generation tool. Produced by OpenAI, it was released as an API this month, and the experiments slowly started trickling in.

Basically, the way it works is this: you give GPT-3 a prompt in text. It could be anything, like, “A blog post about cats.” GPT-3 takes that prompt and generates a bunch of new text to continue it.

For a slightly more technical explanation, GPT-3 is a language model. It takes a huge range of text inputs (including sources like Wikipedia) and uses them to learn how to produce new text. The difference with GPT-3 is its sheer scale: the model has 175 billion parameters.

Which means it can create text from only the most basic prompts.
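To make that concrete, here’s a rough sketch of what a prompt-to-text call looks like. It assumes the openai Python client and an API key from the beta program; the prompt, model name, and settings are purely illustrative, not a recipe.

    import openai

    openai.api_key = "YOUR_API_KEY"  # keys are currently issued to beta participants

    # Send a plain-English prompt and ask for a continuation.
    response = openai.Completion.create(
        engine="davinci",                  # the largest GPT-3 model exposed by the API
        prompt="A blog post about cats.",
        max_tokens=200,                    # roughly how much text to generate
        temperature=0.7,                   # higher values give more varied output
    )

    print(response.choices[0].text)        # the generated text

That’s the whole interaction: a plain-English prompt goes in, a block of generated prose comes out.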

Just check out what GPT-3 has been able to do:

Designer Jonathan Lee even wrote a piece about how GPT-3, as part of a Figma plugin, created a functional design prototype from raw text.

Source: https://uxdesign.cc/lets-talk-about-that-gpt-3-ai-tweet-that-shook-designers-to-the-core-d2b31ad3d63b

GPT-3 still has flaws

Of course, artificial intelligence carries the sins of the people and text it learns from. In this case, GPT-3 creates text by referencing a huge library of existing writing, and much of that writing is incredibly racist, sexist, and homophobic.

Smarter people than me have pointed out that when prompted to write about topics like “women”, the GPT-3 algorithm produces some pretty horrific stuff. The researchers themselves acknowledge as much in the GPT-3 paper.

It’s also very much worth pointing out that one of the main figures behind GPT-3, Sam Altman, thinks much of the reaction in the last few days needs to, you know, calm down.