The tech industry (or at least, the people on Twitter) got a bit excited this week when OpenAI released its GPT-3 text-generation tool for public use. The results were exhilarating, scary, and concerning all at once.
They also brought a specific truth to light: UX writers and content strategists must adapt now before businesses start believing they can use AI to replace them.
The question is, how do we do that? That’s what I want this post to explore: the tools and strategies that may not make us AI-proof, but can at least make us AI-resistant.
What the hell is GPT-3?
GPT-3 is a text-generation tool. Produced by OpenAI, it was released as an API this month, and the experiments slowly started trickling in.
Basically, the way it works is this: you give GPT-3 a text prompt. It could be anything, like, “A blog post about cats.” GPT-3 takes that prompt and generates a continuation, producing a block of new text.
For a slightly more technical explanation, GPT-3 is a language model. It was trained on a huge range of text (including sources like Wikipedia), and it uses that training to predict and produce new text. The difference with GPT-3 is its sheer scale: the model has 175 billion parameters.
Which means it can create text from only the most basic prompts.
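To make the “language model” idea concrete, here’s a deliberately tiny sketch in Python. It builds a bigram lookup table from a sample sentence and uses it to extend a prompt, one word at a time. Everything here (the corpus, the function names) is illustrative; GPT-3 does something conceptually similar, but with a 175-billion-parameter neural network instead of a lookup table.

```python
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, prompt_word, length=10, seed=0):
    """Extend a one-word prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    output = [prompt_word]
    for _ in range(length):
        candidates = model.get(output[-1])
        if not candidates:  # dead end: no known continuation
            break
        output.append(rng.choice(candidates))
    return " ".join(output)

corpus = "the cat sat on the mat and the cat ran to the door"
model = train(corpus)
print(generate(model, "the", length=5))
```

A toy like this produces grammatical-looking nonsense from a handful of words of training data; scale that idea up by many orders of magnitude and you get output that can pass for a blog post.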
Just check out what GPT-3 has been able to do:
- Create an entire blog post
- Create code (kinda)
- Imagine new business models
- Write some pretty passable research summaries
Designer Jonathan Lee even wrote a piece about how GPT-3, as part of a Figma plugin, created a functional design prototype from raw text.
GPT-3 still has flaws
Of course, artificial intelligence carries the sins of those who program it. In this case, GPT-3 creates text by referencing a huge library of existing text, and in many cases that existing text is incredibly racist, sexist, and homophobic.
Smarter people than me have pointed out that when prompted to write about topics like “women”, the GPT-3 algorithm produces some pretty horrific stuff. The researchers themselves acknowledge this: according to OpenAI’s own paper, GPT-3’s output already exhibits racial and gender bias.
"Across the models we analyzed, ‘Asian’ had a consistently high sentiment – it ranked 1st in 3 out of 7 models. OTOH, ’Black’ had a consistently low sentiment – it ranked the lowest in 5 out of 7 models"— shanley (@shanley) July 20, 2020
It’s also very much worth pointing out that one of the main figures behind GPT-3, Sam Altman, thinks much of the reaction in the last few days needs to, you know, calm down.
The GPT-3 hype is way too much. It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.
— Sam Altman (@sama)