The Interface is a brand-new podcast exploring trends and hot topics for UX content people.
This episode is a recording of a LinkedIn Live event held on Tuesday, May 30. Chelsea Larsson, Director of Experience Design and Head of Content Design for Partnership Experiences at Expedia, joins Patrick for a live podcast recording to discuss AI in the content design process and how content designers can scale their impact ethically.
Patrick Stafford: [00:00:13] We don’t have a lot of time and we could talk about this for days rather than an hour. So we’re going to get straight into it. Today I’m joined by Chelsea Larsson. Chelsea heads up Experience Design and Content Design at Expedia for partnerships. And today we’re going to be discussing the intersection of AI and content design. Chelsea, thank you so much for joining me today. We’re going to get into the exciting stuff you’re doing with AI at Expedia. But firstly, I just wanted to ask you, there’s so much going on with AI and even I think six months ago, people’s minds were blown when GPT came out. But ever since then, it’s kind of like new things have been happening every day, every week. Personally, I think the thing that’s blowing my mind the most is that every single day there’s a new piece of software coming out with AI capabilities. And not just that, but tools that I use every day are now integrating GPT or other types of language models. Every so often there’s a tech explosion that seems like hype. A lot of stuff in cryptocurrency was that way. But AI is actually integrating into stuff I use every day. And so it feels quite different in that sense. I’m wondering, what is your sense of the overall market for AI right now, not just in content design, but for you personally? Is it blowing your mind as well? How are you taking it all in?
Chelsea Larsson: [00:01:56] Yeah, absolutely it is. It’s phenomenal. I was just telling my parents in Ohio, who are so far removed from all of this, just how quickly and rapidly the tools are evolving. And I think that’s what’s blowing my mind. Like what you said, every single day there’s a slew of new plugins and integrations that you can use in order to expand on the technology that we already have at hand with, you know, ChatGPT, OpenAI, Midjourney, all of the things that we’re now used to. Now there are all of these tools on top of it that can allow you to spin up a chatbot of your own using your own PDFs, or to create videos and music. Anything that you want is kind of at your disposal if you just figure out how to use it. And I think that’s the difference between crypto and AI: the bar is very low for entry. It is very easy to use these tools. And so that’s another thing that’s kind of blowing my mind right now, how quickly they’ve integrated into my life. I barely use Google anymore to search for things.
Patrick Stafford: [00:03:10] Yeah, I think it’s just the ease of use for connecting everything. I think, though, the moment that I find this most similar to is when the iPhone first came out, and specifically when the App Store first came out. I think I’ve made this comment to you before in another conversation we had, but there are a lot of technical demonstrations right now. There are a lot of people saying AI can do this or it can do that. And a lot of those are really cool, but there are not necessarily a lot of instances of being shown how to use AI in your day-to-day work in a way that’s actually useful. And there’s, I think, a big difference there: AI can write a story. Cool, that’s neat. But it doesn’t really help me in my day-to-day work, you know? What are some things it can actually do to help me improve my productivity or scale? And I think that’s where it’s been a little bit harder to identify the benefits. As you said, I’m using it every day. I think it’s just about making sure that you understand exactly what you need from it.
Chelsea Larsson: [00:04:34] Yeah. I think you and I were talking about that earlier where there’s a difference between being jazzed about the tool and the functionality of the tool and then the application of that functionality. And that’s where I think we’re going to see the biggest advancements in looking at how people are creatively applying this technology to achieve outcomes that they couldn’t have without it. And for content designers, I mean, what comes to mind immediately is iterative content. We have to manually create 50 versions of a string, for instance, for maybe a testing and learning situation where you want to put something out there in lots of different varieties. That time that it would take you to write those has now gotten exponentially smaller because you can generate that content very, very quickly, and then you can go back, review and edit, which you would have already had to do anyways on top of drafting it. But now you can apply this tool to expedite your drafting process. And I think that’s where we’re going to start to see really interesting outcomes is looking at that application layer.
Patrick Stafford: [00:05:53] Absolutely. That’s actually part of what we’re going to discuss today. I kind of want to look at this in two different ways. We’ll talk about, you know, some of your personal experiences doing this internally. But I think there are a couple of different ways we can look at it. The first is for content designers who want to use this in their day-to-day work. What are some ways they can do that? How can they do things at scale? And then the second is how can content designers on their own use AI to influence and impact their organization? How can they grow the influence of content design using AI? And that’s where, I think, there are a lot of nervous content designers out there saying: are our jobs going to go away? Is AI going to take over? And I don’t know about you, but actually, over the past six months, more and more, I’m seeing more stories, more anecdotes, more, I guess, what’s the word…more reassurance from even OpenAI itself or Google or the creators of these large language models, telling people you as the human are a key partner in telling AI what to do. There was a story today or yesterday about a lawyer who relied on ChatGPT to source some cases and to provide some arguments and got in trouble because the AI hallucinated and just completely made something up. I think these stories just reinforce, and it’s the natural backlash to the early excitement, that you as the human are the key player here. And I think that should be of great reassurance to content designers. I’d love to hear your thoughts on that.
Chelsea Larsson: [00:07:53] Absolutely. So first of all, that story about the lawyer was hilarious, but also sad. So I agree. There are so many articles I’m seeing on Medium: Is this going to take our job? Is this going to replace UX writing? And I would just like to ask those folks, you know, what is your job? What do you think your job is? Because you are not your job title. This is something that V. Sri and I talk about in our newsletter, Smallish Book, and it’s all about writing. And as a writer, your job is not just to output text. That is the very bare minimum of what you do. You guide people along a journey. You source meaning out of chaos. You create architecture around information that makes it easier to understand. You make the world easier to navigate. All of that type of systematic thinking is what also helps AI. So the knowledge base work that the AI is trained on, that’s content modeling. The pre-training that has to go on, that’s ontology. The prompt engineering. These are the standards, these are the definitions of good that you have to give as a content designer. So all of the work that’s being done actually draws on our greatest skills. And that is why I am very confident in our role in AI. And I would reassure people to not think that your job is going to be taken away, but to really expand your definition of what your job is.
Patrick Stafford: [00:09:37] Absolutely. We’ll talk about this in the next 50 minutes or so, but if you’re training, for instance, a custom instance of GPT or another large language model, I think people don’t necessarily understand that you can’t just give it a bunch of text and then have it work properly. It needs to be organized, it needs to be structured. I mean, these are the basic tenets of information architecture, of content hierarchy. These are principles that we use every day. And it’s why I think that content designers need to be leading the charge in their organizations and going, if you want to deliver impact as an organization, the success of your AI capabilities is going to be driven by the content that’s underpinning it. It needs to be structured well.
Chelsea Larsson: [00:17:10] I think that’s a really good point. And I’ll just speak to my own personal experience. This feels like so long ago, but it was just maybe a month or a month and a half ago. When I got started with it, a lot of us on the content design team were working with our ML engineers or our NLG engineers and saying, okay, here are our standards. Can you just put these into the training, or into the playground, and train ChatGPT on our standards? You know, the typical: use active voice, use this many characters. And the engineers are like, no, actually it is much more important if you just give me examples of good writing, because ChatGPT is a predictive model that looks at patterns and probabilities. You will get much more effective generated responses if you have a big corpus of really good content, written in the way that you want and in the tone that you want, because that will give it those patterns to recognize and then replicate. So that was a learning that I had.
Chelsea Larsson: [00:18:29] And another learning is, you know, there’s the zero-shot model where you just ask ChatGPT a prompt and look for a response. But then there’s the few-shot model, which is where you give it examples. And so if you’re going to say, you know, give me a playlist for a birthday party, it’s going to give you a wide variety of songs. If you say, give me a playlist for a birthday party, the birthday girl loves, you know, Lizzo, Beyonce, and C+C Music Factory, now you’re giving it patterns to recognize. So I absolutely agree with you that for content designers to lead in this space, they really can’t be afraid to get technical. But the technical parts of this are all built on how language is structured. And this is how we already think. So, we don’t have to get our foot in the door, our foot’s already in the door. We just need to walk through it. And that, I hope, is reassuring to people who feel like this is a new technology that they have to learn.
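[Editor’s note: to make the zero-shot versus few-shot distinction concrete, here is a minimal Python sketch of how those two prompt shapes could be assembled in the chat-message format used by OpenAI-style APIs. The function name and the playlist examples are invented for illustration; the returned list is what you would send to a chat completion endpoint.]

```python
# Hypothetical helper for assembling zero-shot vs. few-shot prompts in the
# OpenAI-style "messages" format. Names and examples are illustrative.

def build_few_shot_prompt(instruction, examples, request):
    """Assemble a chat-style message list: a system instruction,
    example input/output pairs, then the real request."""
    messages = [{"role": "system", "content": instruction}]
    for user_text, ideal_reply in examples:
        # Each example pair shows the model the pattern to replicate.
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": ideal_reply})
    messages.append({"role": "user", "content": request})
    return messages

# Zero-shot: just the request, no examples.
zero_shot = build_few_shot_prompt(
    "You are a helpful playlist assistant.",
    examples=[],
    request="Give me a playlist for a birthday party.",
)

# Few-shot: the same request, plus an example that establishes the pattern.
few_shot = build_few_shot_prompt(
    "You are a helpful playlist assistant.",
    examples=[
        ("Playlist for a road trip? She loves Lizzo and Beyonce.",
         "1. Juice - Lizzo\n2. Break My Soul - Beyonce\n3. About Damn Time - Lizzo"),
    ],
    request="Give me a playlist for a birthday party. The birthday girl "
            "loves Lizzo, Beyonce, and C+C Music Factory.",
)
```

The only structural difference is the example pairs in the middle; the model sees them as prior turns of the conversation and continues the pattern.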
Patrick Stafford: [00:19:33] Yeah, and it’s why, I love everything you just said. And that’s why, when I hear people saying, oh, I’m really nervous about AI taking our jobs, maybe this is a little bit mean, but my first thought is, how much have you actually looked into it? Because, sure, there will be organizations that stupidly decide, cool, all of our developers can use ChatGPT to write strings now, right? But to me, those organizations never really put a value on content design anyway, you know? So you were never going to advance or have a lot of influence there in the first place. But for those organizations that are questioning, how do we do this? How do we step up? To me, the content designer has the ultimate role there. I want to comment on something you just said about prompting and giving examples, because I think that speaks to the nature of how we use AI in our day-to-day work. And I know that the topic of this conversation was about how to go beyond that. I want to speak about that just a little bit and then move on, because I think people don’t necessarily understand how important it is to give context for these types of prompts, but also why the essentials of giving context mean that your role cannot necessarily be replaced in using AI. If you look at the best outcomes in using a language model for your work, everything you are telling it is something that you would have to tell, say, junior content designers on your team if you were going to have them write the strings anyway, right? So you’re going to have to say, as you mentioned, that’s few-shot prompting. For those who haven’t heard that term, you can sort of just replace the word shot with example. You’re just giving the model examples of what you want. And so you can’t give the AI model something unless you have already gone through an initial customer research process, unless you have gone through the process of defining the problem that it is that you want to solve.
This is all basic design thinking stuff. So when you go to the AI and you say, I’m designing this type of app, I’m in this early stage, I am coming up with some initial strings for an onboarding experience in this screen. I’m doing this and I want, you know, this particular line. I want it to be 60 characters. Can I have 20 examples? Right? That is, there is so much context built into that that the AI cannot do. You needed to have reached that point initially, and AI cannot replace that. You need to be an integral part of that. And I think that’s worth underlining because the hype around AI is that it can do everything and it really can’t. So I think it’s just really important for us to underline that as content designers, you need to be in that step-by-step process to even get to the point where you can use it.
Chelsea Larsson: [00:22:59] And that’s to me, and I think to many of us, that is what writing is. And this is something again, that we talk about in Smallish all the time. Writing is that mindset where you’re getting all of those thoughts together and compiling them in a way that now some simple, clear message can come out. Oh, no, I think we just lost Patrick. But I’ll keep going. If you guys can hear me, I’m sure he will join back. But that is what writing is. And the word production part of it is just the tip of the iceberg about what we do. And Patrick’s back. And that is why, at this stage, AI is not going to be replacing that part of what we do. And we can talk about the difference between narrow AI and general AI. And if people aren’t familiar with that, narrow AI is what ChatGPT is, it’s what Midjourney is. It’s AI that’s trained to do a very specific task. General AI, I think, is what people are really nervous about. And general AI is where the AI has the ability to understand and learn and apply knowledge and problem solve. And that’s what the movie Her was. You know, it’s like AI is everywhere, and we are not there. We are not nearly there. So if that’s what people are worried about, just set that aside until the time when we have to cross that bridge. But with narrow AI, Patrick, I absolutely agree with you. We are the ones who are feeding in all of that, all of that structure for the tool to do its job. And so that is how we are leading it and that is how we should be utilizing it as a tool.
Patrick Stafford: [00:24:42] Yeah, absolutely. And for that I would definitely recommend every content designer head over to OpenAI and read their documentation about how these models actually work. Not to get too deep into it, but the idea is that these models work by tokenization. Tokenization refers to the idea that words and paragraphs, basically all the text, are broken down into pieces, not just pieces of words, but even pieces of letters. Each of those pieces is called a token, and it’s all based on the relationships between those tokens and the predictive abilities between those tokens and so on. I think the thing is you don’t need to have a huge technical understanding of how these things work. You need to understand the basics and the structure of it, but you don’t need to be an engineer. You don’t need to be a developer to lead in this space. You’re a content designer. That’s what you do. That’s what you do best. You shouldn’t feel like your role is being taken over by machine learning. No, you still have a huge, integral role in leading how these models are used within your organization.
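[Editor’s note: as a toy illustration of the tokenization idea Patrick describes, here is a small greedy longest-match tokenizer over a hand-made vocabulary. Real models use byte-pair encodings (OpenAI exposes theirs through the tiktoken library); this sketch only shows the core idea that text gets split into sub-word pieces, each of which becomes a token.]

```python
# Toy tokenizer: greedy longest-match against a tiny hand-made vocabulary.
# This is NOT how GPT models tokenize (they use byte-pair encoding), but it
# illustrates the idea that text is broken into sub-word pieces.

TOY_VOCAB = {"content", "design", "token", "ization", "ers", "er", " "}

def toy_tokenize(text):
    """Split text into pieces, always taking the longest vocab match;
    characters not covered by the vocabulary become single-char tokens."""
    tokens = []
    i = 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in TOY_VOCAB:
                tokens.append(piece)
                i += length
                break
        else:
            # Unknown character: emit it as its own token.
            tokens.append(text[i])
            i += 1
    return tokens
```

For example, `toy_tokenize("tokenization")` yields `["token", "ization"]`, and `"designers"` splits into `["design", "ers"]`: whole words are not the unit, sub-word pieces are.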
Chelsea Larsson: [00:26:00] I can’t speak to the projects that are happening at Expedia, but I can speak to the content design involvement and every single project that I’ve seen going on with a text component along with ML, if there’s any text being generated, the engineers are immediately coming to the content designers for questions on structuring the content, like I said, the quality assessment of what’s good. So there is a real need, and engineering sees the need, to partner with people who understand what good writing looks like and how it performs. So our role, I actually see becoming almost even more integral than it is today.
Patrick Stafford: [00:26:54] I think that’s so reassuring for people to hear. I’ve heard from students who are reaching out and saying, how should I think about this? What should we be doing? What’s the next step here? How are we going to navigate this really tricky period? So for you to come out and say that the work you’re doing is already seeing relationships grow between developers and engineers and content designers is so hopeful and positive for people to know. There’s growth here, there’s opportunity here. I think now is a really exciting time because it’s early. You can jump in and, as a content designer, you can go to your organization and help define the path that you as an organization can take. And one of the things we know about content designers is that they lack a feeling of influence and impact in their organization. Well, here’s the opportunity, right? Here’s the opportunity for you to have an impact.
Chelsea Larsson: [00:28:14] Yeah. I think if your company is at all dabbling in AI, generative AI, text prompting, and they don’t already have a responsible AI guideline or principle, or if they’re not utilizing ones that already exist, like Microsoft’s responsible AI guidelines or the constitutional AI from Anthropic, that is a place where you can lean in immediately and say, you know, language has repercussions. How are we going to be ethical here? How are we going to be responsible? Be that person. Say: I’m happy to look at the existing responsible AI guidelines that are out there. I’m happy to draft some for our own company to review and think about. You can also start working on text guidelines. Like I said, what are the standards for prompting? How are you going to help other people have consistently good content coming out of their prompting exercises? So there are so many places. Or you can just start doing it yourself for your own projects and see if it expedites your own process. And then teach other content designers or engineers about your own process. So there are just so many places here where people can dive in at the governance level, at the practitioner level, at the leadership level, and you can have influence in all of those different strata of working. And all of it will be helpful for your company at this point because, as you said, Patrick, it is early days.
Patrick Stafford: [00:29:52] Yeah, and that’s really exciting because, you know, it is early. And the good thing about it being early is that, well, I wouldn’t go so far as to say you can’t do anything wrong, but I think there’s a lot of opportunity for experimentation here, which is great. And I want to build on a couple of the examples you just mentioned, because I think one of the exciting things in language models right now is the idea of custom instances. So you’re able to give a model your own text, your own structure, and then use GPT to train it on that particular text. And, we were just talking about this, there are a few tools online now that are allowing you to do this without having to download Python and a whole bunch of frameworks and create a local instance. You can actually do this online with very little coding involved, or no coding really. And as content designers, to me this is a huge gift, because one of the things that we heard in our salary survey and industry survey this year was that content designers feel like they lack influence, right? They lack impact, and they want to see that grow across an organization. And I think these custom instances are a huge gift because they enable a much more natural interaction between anyone in your organization and content design guidance. I won’t say rules, but the best practices, right? So I’m thinking about a custom instance where people can put in some text and have it checked against a style guide. Or, not just that, but could you create an instance that is plugged into your design system? And you just mentioned, are you able to create a system where you can create guidance for prompts? The ability for content designers to create this type of wide-reaching infrastructure really changes the game in terms of how we can affect an entire organization.
Patrick Stafford: [00:32:21] Imagine if you were able to go out…I’ve spoken to content designers who work in an organization where not all the content is centralized, right? So you have content being written sort of by people on different teams and product managers write a little bit. And developers write a little bit and they get a content designer to come in and the organization wants things to be cleaned up. But because everything’s so fragmented, it’s really difficult for them to look over everything. If you were able to use a custom instance of GPT that’s trained on style and substance and structure, to have people then check their writing against that is a huge, huge improvement. Just on where you were before. And to me, that’s just one example of how content designers can scale their impact by working on these types of tools. To me, it’s just super exciting.
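[Editor’s note: a minimal sketch of the style-guide checker idea, assuming an OpenAI-style chat-message format. The guideline text, function name, and example draft are hypothetical; in practice you would pass the returned messages to your model provider’s chat completion API.]

```python
# Hypothetical "style checker" built on a chat model: the team's style
# guidelines go into the system message, and any writer in the organization
# can submit a draft to be reviewed against them. Guidelines are made up
# for illustration.

STYLE_GUIDELINES = """\
- Use active voice.
- Use sentence case for buttons and headings.
- Keep button labels under 25 characters.
- Avoid jargon; prefer plain, everyday words.
"""

def build_style_check_messages(draft):
    """Wrap a draft string in a prompt asking the model to review it
    against the team's style guidelines."""
    return [
        {
            "role": "system",
            "content": (
                "You are a content design reviewer. Check the user's draft "
                "against these style guidelines and list any violations:\n"
                + STYLE_GUIDELINES
            ),
        },
        {"role": "user", "content": draft},
    ]

# A product manager could submit their own copy and get guideline-based
# feedback without waiting on a content designer to be free.
messages = build_style_check_messages("CLICK HERE TO COMMENCE YOUR ONBOARDING JOURNEY")
```

The design choice here is that the guidelines live in one place (the system message), so updating the style guide updates every check across the organization at once.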
Chelsea Larsson: [00:33:25] I think that’s where we go back to that application layer again. It’s like, how can you, as a language expert, apply this tool to make your quality higher across your company? So what you’re talking about would be like a consistency layer, getting those standards across the whole company. How can you make it so that everyone’s drafting much faster so that we can spend more time on strategic content? So I mean, if you had an internal instance that this is where we get into like proprietary data, which you should never put your company’s proprietary information or data into ChatGPT. But if you had what you’re talking about is kind of like an internal, maybe safe place to do this, then how can you put in a ton of data to look across it for sentiment analysis and use that in your work? So there are just so many applications here that are specific to our roles and the type of work that has historically maybe taken us a long time to do, but that wasn’t high reward for us. I think we can expedite a lot of that now and spend more time doing the high-reward work that we’ve always wanted to do, and that could help us grow our careers much quicker than we could in the past.
Patrick Stafford: [00:34:53] I think so, too. The stuff that I’m really excited about is the fact that there are now these tools where you can create these models super quickly. Really, really quickly. You can get a custom instance spun up in, like, ten minutes. Now, I would never recommend doing that for you as a company internally. There needs to be rigor and structure and governance across everything that you’re doing. You know, you mentioned not using proprietary material. I think that’s just one of the givens that you need to take into account when creating these. But yeah, I think there’s a huge amount of influence…
Chelsea Larsson: [00:35:40] But you do bring up really good points. So say you’re onboarding a new content designer to your team. This instance that you’re talking about hypothetically pulls in all of the guidelines that you’ve created in your company. Maybe you have a new content designer, but you don’t have a lot of time to onboard them or manage that experience all day. This could be one of those places where they can type in their questions and get the guidance that they need in a much faster way than if they were looking on Slack, looking across all of the decentralized guidelines, or even asking and waiting for somebody to respond. So I do think what you’re saying is valuable. But yeah, we just wouldn’t want to put in proprietary information. For some stuff, it makes perfect sense. And that again goes back to the application layer: using your best judgment at the time, with the information that you have at hand, to figure out how to apply this tool.
Patrick Stafford: [00:36:42] And in fact, we have an interesting question on the event page from Tom. Tom asks, what are your views on the ethics, or lack thereof, in the landscape of AI, and for content designers? I think this is an excellent question, because if we are structuring the information that goes into training these models and defining parameters, you know, proprietary information is just one part of it. There’s also, as we know, the fact that any type of AI reflects the bias of the people who are creating it. And so if you are feeding it information that contains that bias, then that’s something that needs to be accounted for. There are all sorts of accessibility and inclusivity issues and questions that need to be addressed. I’d love to hear your thoughts on that very broad topic as well.
Chelsea Larsson: [00:37:40] So I mean, that’s why I go back to…I really think that content designers are some of the most responsible communicators on the planet. We think about our words through a kaleidoscope of filters, readability, inclusivity, contextuality, voice, tone, appropriateness, our list goes on forever. And so I think we need to step up even more here because there is such a lack of ethics there. Even in the way that ChatGPT was built, using underpaid contractors to train the model, I mean, there are layers and layers of ethical failure. And it’s only going to get worse as things scale out, which is why we need responsible communicators to stand in and build some structures around it. And I think I can use this example from work because it doesn’t really speak to any of the underlying business goals that we have. But at one point, we were looking at taking user-generated content and concatenating that data and drawing out sentiment analysis from it or even descriptions. So, you know, user-generated content. I can use another example of, say you have a bunch of restaurants in a city and you want to take all of those reviews and then you want to put them together and pull out some descriptions of what the food is like in that city.
Chelsea Larsson: [00:39:18] You might be pulling racist, sexist, inflammatory, upsetting reviews because it’s just reviews that are created by humans. So you have to put the guardrails in place and say, okay, we’re only going to use things that can be factually proven. We are going to avoid these types of terms, this harmful language. But if you’re not in the position to make those calls, somebody might not make them. And then you’re going to get into a situation where you’re hurting people with this technology. And I think that’s where content designers really need to step up and be that human QA layer of saying no, we don’t, we’re not letting this tool just run wild. We’re putting a human quality layer in place and we’re going to reduce harm as much as possible. And that is our job to do that.
Patrick Stafford: [00:40:16] And I think not just in the writing of…you just spoke to this, but not just in the writing of strings, but I’m also thinking of instances in our day-to-day work. And one of the examples I’ve seen is people saying, well, what we could do is we can do unmoderated testing and then we could feed the transcripts into a language model and then have it do sentiment analysis or bring out topics and the most common issues people have with a particular design. I think that’s useful and I think that there could be some good applications there. But what if you have participants who use racial slurs or clearly have a bias against something? Then are you then able to trust that the language model is able to pick up on that effectively? I think one of the principles of using data, any type of data, whether it’s quantitative or qualitative, is the idea of cleaning it. You need to be able to go in and have a look at the data and make sure that before you feed it through any type of algorithm or model that it’s trustworthy. And so before we even apply artificial intelligence, we need…and this goes back to what you were saying about the structure of information. Is it structured appropriately? Does it contain what it needs to in order to create high-quality output? There’s a lot of reliance right now on just sort of like feeding these models huge amounts of text without even, I don’t want to say the word cleaning, but without even having it go through any type of rigorous analysis beforehand. And I think, as you said, content designers play a crucial role here to make sure that the information we’re giving these models is high quality and is structured correctly so that we’re avoiding as much as possible any type of…we can never avoid every single bias or ethical quandary, but reducing the likelihood as much as possible.
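[Editor’s note: the “cleaning” step Patrick describes could look something like this sketch: a pre-filter that drops entries containing blocked terms before anything reaches a model, and keeps the removed items so a human can audit what the filter caught. The block list here is a placeholder; a real pipeline would use a maintained harmful-language lexicon plus human review, not a four-item set.]

```python
# Sketch of a simple cleaning pass over user-generated text (reviews,
# transcripts) before it is fed to a model. Returns both kept and removed
# items so the removal rate can be audited by a human.

BLOCKED_TERMS = {"slur1", "slur2", "scam", "fraud"}  # placeholder terms

def clean_reviews(reviews):
    """Split reviews into (kept, removed) based on a block list."""
    kept, removed = [], []
    for review in reviews:
        words = set(review.lower().split())
        if words & BLOCKED_TERMS:
            removed.append(review)  # held back for human inspection
        else:
            kept.append(review)
    return kept, removed

kept, removed = clean_reviews([
    "Great tacos, friendly staff",
    "This place is a scam",
    "Best pasta in the city",
])
```

Keeping the removed list, rather than silently discarding it, is the point: the human QA layer needs to see what was filtered and why before trusting any downstream sentiment analysis.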
Chelsea Larsson: [00:42:36] We have to remove the amazement factor from our work, because what I’ve also seen happen sometimes is just the initial amazement at how ChatGPT, in particular, can take a huge body of data and then create, you know, 76 responses from something. Just the wow factor of how quickly that happens can kind of overtake our critical analysis hat. And we have to take off the amazement hat and put on our critical analysis hat, because something that we’ve realized is when we go through all of those generated responses, they’re actually not at the highest quality, and they’re not as good as what we would have expected if we had just done it ourselves. And so you’re right, you need to be involved at the forefront, structuring the data and putting in the definitions and guardrails. I’m talking to the people on the call here, and I know you already know this. So content designers need to be putting in the definitions, the guardrails, structuring the data with their engineers. And then on the output side, really having your quality scorecard ready and looking at those generated responses and seeing if they are up to your standards ethically, tone-wise, style-wise, all of the different evaluation metrics you have. So it’s an end-to-end engagement for a content designer.
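[Editor’s note: the output-side “quality scorecard” Chelsea mentions could be partly automated along these lines, triaging generated strings before human review. The specific checks (a character limit, a banned-word list, a capitalization rule) are illustrative assumptions, not a standard; your own evaluation metrics would differ.]

```python
# Sketch of an automated first pass for a quality scorecard: flag generated
# strings that break simple rules so human review time goes to the rest.
# The rules below are invented examples.

MAX_CHARS = 60
BANNED_WORDS = {"utilize", "leverage", "please note"}

def score_string(text):
    """Return a list of rule violations for one generated string."""
    problems = []
    if len(text) > MAX_CHARS:
        problems.append(f"too long ({len(text)} > {MAX_CHARS} chars)")
    lowered = text.lower()
    for word in BANNED_WORDS:
        if word in lowered:
            problems.append(f"banned term: {word}")
    if text and text[0].islower():
        problems.append("does not start with a capital letter")
    return problems

def triage(generated):
    """Split model output into strings that pass and strings needing edits."""
    passed = [s for s in generated if not score_string(s)]
    flagged = {s: score_string(s) for s in generated if score_string(s)}
    return passed, flagged
```

Automated checks like these only catch mechanical violations; the ethical, tone, and style judgments Chelsea describes still need a content designer reading the output.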
Patrick Stafford: [00:44:15] I don’t want to get too much into a debate about this, but I’m actually really skeptical of teaching prompt engineering as a special skill. To me, how effective your use of AI will be is directly connected to your ability to understand best practices for whatever design activity you’re undertaking. So to me, you’re not going to create amazing strings by creating a really great prompt. You’re going to create great strings by understanding the design process, where you are in that process, what parameters you have, who your customers are, what they need, what their limitations are, what their concerns are, and then feeding that into a prompt. You can’t learn that from any type of prompt engineering course. You will only learn that from your experience in the design process and in talking to customers. And so I think quality of output is directly connected to your understanding of best practices, whether it’s UX writing, content research, or tone; all of that is much more important. Understanding how to use the model, how to use the tool, is important, just like you need to understand how to use Figma. You need to understand layers and structure in Figma; you need to understand how to use prompts. But to me, best practices are just so much more important than getting qualified in prompt engineering.
Chelsea Larsson: [00:45:59] Yeah, I absolutely agree with that. And so if I had one piece of advice for anyone who’s interested in this, it would be to develop a perspective on it and the only way that you can develop an informed perspective is by using the tool with extreme curiosity, but also accountability. You know, be curious, do everything you can, but be accountable and ask yourself, is this delivering the quality that I want? Is this feeling ethical? And start developing that perspective of how you can use these tools in your work because you are already an expert in what you do, and this tool is just going to help you expedite what you do and create new ways to do what you do, but it can’t replace what you do. And that’s just really important for us to remember. But if you’re afraid of the tools, you’ll never develop that perspective. And then folks who are using them to do what you do are going to be moving forward in their careers. And that is a real risk that you’d be taking by not engaging at all.
Patrick Stafford: [00:47:08] I think it’s really reassuring for people to hear that from you, because it’s one thing for anyone to say that. But for someone in your position, heading up content design at a larger organization and actually working on these tools, it adds a lot of validity to what you’re saying. So I think people should be really reassured to hear that. We’ve got about ten minutes left, so we’ll start wrapping up here. Chelsea, obviously you can’t do too much in five or ten minutes, but for the content designers here who are jazzed up, who’ve heard what you’re saying and want to get involved: where should they start? What are some things they should start doing straight away to make sure they’re making an impact and having some influence with this?
Chelsea Larsson: [00:48:01] Yeah. So right away, I would say get familiar with the tools. There’s a really good course that I took through OpenAI and DeepLearning.AI; it’s a ChatGPT prompt engineering course. Maybe we can post the link later, Patrick. This course will walk you through all of those terms we talked about, and it will give you hands-on experience building a chatbot with ChatGPT and learning about prompting. Do that, do a quick course. Read as much as you’re interested in; you don’t have to go off the deep end here. But then also look at your own work. Where are you doing tasks that are tedious for text generation? Where do you have to write a million error messages, a million variants of one thing? Or is it taking you a long time to write up your product briefs? When you’re kicking off a project, do you have a part of the product brief that you have to write up? Maybe this is a place where you could create a little ChatGPT instance for yourself, not using proprietary information, to spin up your own version of those product briefs. I have a friend who does that. He has a shortcut where he just gives it the use case and the user problem, and then it creates a project brief that the whole team uses. So I would say learn a little bit, read a little bit, but then get in there and start practicing. And the way to practice is to ask yourself: what do you want to speed up in your own process? Try it out in those instances and see if the output is at a higher quality than if you had done it manually. That’s how you know if you’re on the right path.
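[Editor’s note: the “project brief shortcut” Chelsea describes might look something like the sketch below. The template, field names, and brief sections are all illustrative assumptions; the actual API call is shown only as a comment, since it requires an API key.]

```python
# Illustrative sketch: assemble a structured prompt from a use case and a
# user problem, ready to send to a chat model. Template is an assumption.

def build_brief_prompt(use_case: str, user_problem: str) -> str:
    """Turn the two inputs into a single prompt for a chat model."""
    return (
        "You are helping a content designer draft a product brief.\n"
        f"Use case: {use_case}\n"
        f"User problem: {user_problem}\n"
        "Write a one-page brief with sections: Background, Goal, "
        "Audience, Constraints, Success metrics."
    )

prompt = build_brief_prompt(
    use_case="Let hotel partners bulk-edit room descriptions",
    user_problem="Editing rooms one at a time is slow and error-prone",
)

# Sending the prompt would look roughly like this with the OpenAI Python SDK
# (not run here; requires an API key, and never include proprietary data):
#
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": prompt}],
# )
```

Keeping the template in code is what makes it a shortcut: every brief the team generates starts from the same structure, so the outputs are comparable and easy to review.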
Patrick Stafford: [00:49:59] It’s a great point. Always look at the output. Don’t be amazed by the capability of the technology, and don’t get distracted by the fact that it can do something. Assess the material conditions. What is it actually giving you?
Chelsea Larsson: [00:50:14] Do look into the responsible AI guidelines from companies like Microsoft. Anthropic is also a really good company to look at for its constitutional AI. Narrow AI, like what we talked about, is one thing, but general AI has the capability of completely changing our entire reality. So if you want to be involved in making that future reality safer, more inclusive, more ethical, and more welcoming, now is the time to get involved. I would encourage anyone who cares about that to see where they can start plugging in now, because we’re going to really need you in the future.
Patrick Stafford: [00:50:58] Speaking of plugs, Chelsea, would you like to plug something?
Chelsea Larsson: [00:51:03] Would I? I’ll plug smallishbook.substack.com. Smallish Book is a newsletter on writing and design that I write with my friend V. Sri, and we are turning it into a book, so please like and subscribe. And then I’ll also mention a couple of folks to follow on LinkedIn if you’re interested in OpenAI, or in AI in general. Soribel F. writes about ethical AI. Ovetta Sampson is a design executive at Google working with ML and AI; she’s a wonderful person to follow. And Dr. Timnit Gebru is a founder of the Distributed AI Research Institute. These are all people I would follow if you’re interested in this work.
Patrick Stafford: [00:51:53] Wonderful. We will provide those links in the event and in the post as well. Now, we’re just about to wrap up, but I want to make a quick announcement. If you’ve listened to this and you’re a content designer who’s ready to take the next step, you may be interested in an upcoming workshop at UX Content Collective called AI in Content Design: Ethics, Scale, and Impact. The idea of this workshop is to help content designers go beyond playing with GPT or Bard, beyond just creating some strings, and actually grow their influence in their organization. It will teach best practices for using AI in specific and targeted ways, similar to what we’ve just discussed. We’ll also cover ethics, including a framework for assessing how AI, ethics, and content design intersect, and you’ll start to understand how creating custom AI-powered tools can influence the content design practice in your organization. We’ll post a link to that in the event as well. Currently there’s a waitlist; we don’t have a date yet, but if you sign up, you’ll be notified as soon as the first date is available. So if you’re interested, check that out. Chelsea, thank you so much for joining today. I think this has been really reassuring for people to hear: even though everything with AI is moving so quickly, and sometimes it feels like people are hanging on for dear life, there’s so much opportunity here, and the opportunity for our practice is really, really strong. Thank you so much. Any final words before we sign off?
Chelsea Larsson: [00:53:58] Thank you so much, Patrick. And yeah, I’ll just say it is moving at breakneck speed. That is absolutely true. But the foundations of language and good communication and how humans need language to navigate their reality is unchanging. That has been unchanging through time, and it will remain unchanged. And all of you are experts in how language helps humans navigate their reality. You will always be needed in that aspect, so don’t worry too much.
Patrick Stafford: [00:54:30] Awesome. Thank you so much, Chelsea. This has been fantastic. Everyone, thank you for joining. Really, really happy to have you here. We will release this as a podcast, we will post those links, and we will catch you all soon. Thanks again, Chelsea.