Hello friends and welcome back to another episode of the Future of UX podcast where we explore the evolving world of user experience design, tech and AI. And I'm your host Patricia Reiners and today we have a super exciting guest and I assume that many of you
already know her. Today I have Sarah Gibbons with me. She's the vice president at the Nielsen Norman Group. I must admit before I am starting with the intro here that I am a huge fan. I think Sarah is an absolute rock star. She not only has a super remarkable career at the Nielsen Norman Group, she's also an incredibly smart person.
so kind, and all the resources that she's sharing on the Nielsen Norman Group website and on her LinkedIn are amazing resources for all of us designers. So I am so happy to have her on the podcast and to have had the honor of talking with her
about the future of design, the future of UX, how AI is actually changing the design landscape at the moment, and also to hear her perspective on how we as designers should prepare and how we should use AI. The Nielsen Norman Group is doing a lot of research and studies around that topic, so we also talked about that.
For example, one of the areas where they have coined different concepts like apple picking or accordion editing. And yeah, we really tried to understand what these concepts mean for AI products. And we also discussed Sarah's fascinating article on the four degrees of anthropomorphism of generative AI.
First of all exploring what anthropomorphism in AI means and how it can be both an advantage and a challenge for user interactions. And to wrap it up: Sarah will also offer her vision for the future of design and suggest some resources for those of you who are looking to stay ahead of the curve.
So my friends, I really enjoyed this episode. There are a lot of insights, and I actually tried to find an amazing quote for the beginning, but there were so many that I couldn't really decide
which of all of these things that Sarah said would be the best intro quote. So I will leave the interview as it is, and I can highly recommend actually getting some pen and paper, making notes for yourself, and enjoying this amazing episode. So hi Sarah, so nice that you're here. Welcome to the Future of UX podcast. Thanks for having me. I'm excited to be here. Of course, I'm so happy that you're here.
And I'm really looking forward to talking with you a little bit about the future of UX and especially all the AI topics at the moment. But before we dive into that, I would like you to quickly introduce yourself, tell us a little bit about who you are and how you got to where you are at the moment. Sure. So, hey, everyone. It's great to meet you. I am Sarah Gibbons. I am vice president at Nielsen Norman Group and
We are a research and consulting user experience company. So I always tell everyone we make computers easy to use. And that's kind of been our DNA for 25 years. I studied graphic design and design theory.
at school, or university, and came out and actually thought I wanted to be in advertising. So I started my career as a creative director and realized that I felt like I was selling my soul every day, and decided to then pivot towards basically product design. So I joined IBM and
And I helped back in the day design kind of collaboration software in the workplace. And so I was designing for BlackBerry and iOS and Android. And I actually think that this was maybe one of the best places to really kind of
get into product design because it's such a complex system, right? We were designing for all different industries across all different platforms with a lot of different kind of needs and what a dynamic system, right? So that's where I started. And then from there, I joined a team that was kind of thinking about
the evolution of that product into kind of Watson. So early, early AI and Watson in the workplace. And so I was on a team that kind of conceived, or came up with, the idea of an
enterprise Slack. So this is prior to Slack existing, this idea of an asynchronous workplace where you can share all different types of content, and did a lot of, you know, two years' worth of discovery and conceptual work there. And then this is kind of in parallel to IBM Design getting funding and starting in Austin, Texas. So
moved to Austin for a little bit and taught incoming product designers, researchers, and product managers about how to create a user-centered process for teams that then they were going to go and be dispatched to. So that's kind of where I shifted from, I would say, this idea of being
the designer or the researcher on a team, to teaching, activating, and educating other designers, researchers, product managers, really anyone, about how to problem-solve in a human-centered way. And that's really what I think of as my expertise: problem solving in a human-centered way, ideally at the intersection of business. So I am really interested and intrigued
in this idea of leveraging what are traditional design theories and design processes for very pivotal business decision-making and kind of upskilling, I think people who have traditionally thought of themselves as creatives. So in the UX or UI fields and upskilling them to be able to apply that exact skillset, but to kind of more traditional business problems. Yeah.
And that's definitely where I'm operating today. So from there, Jakob Nielsen had seen me speak at a big conference and said, hey, you should come over to NNG and kind of start this area of our business oriented around design processes and theory. And so...
Joining NNG, I was actually kind of the first traditionally trained designer. A lot of my colleagues came from library science, computer science, human factors. And I joined the team and started to build out this expertise around design, and kind of the traditional processes of design:
design thinking, service design, blueprinting, facilitation as a core skill set, kind of strategy and stakeholder management. So kind of this, I would say, more qualitative, design-centric area of our field. And then since then, I've kind of, I think, fallen into a little bit more of a business role at NNG, and a leadership role.
And in parallel, I've been starting to explore this new space of AI and what it means for our field and industry. So a very long story long, I guess. But very impressive. I think it's super interesting to understand a little bit of your background, where you're coming from and
how you got to NNG. I mean, a lot of the listeners know NNG. You provide the best resources, always great. Also research backing up all the things that you are sharing there. So amazing articles, amazing resources. So yeah, I'm sure everyone has at least checked out one article or watched one video or so throughout their career. Really, really cool.
And I feel especially in times like these, right, where we are going through this massive shift, AI is everywhere, a lot of things are changing. How do you see the shift coming with AI? What is your approach when it comes to AI?
You know, it's funny. So we've been around for 25 years. This is our 25th year. It's a long time. And I think that, you know, that's a blessing and a curse in a lot of ways.
And at a time like this, it is a curse, because we have to shift the way we've always done things and think about reprioritizing. But it's a blessing because we've been here before. We were conceived at the beginning of computers, right? Jakob was doing usability heuristics. How do people use these things?
And Don was coming up with this term user experience right at Apple. People should be able to take this out of the box and know how to use it. And people back then were saying, you know, computers are going to take over and we're all going to lose our jobs. And look at where we are. The industry has grown twofold.
So in a lot of ways, that actually gives me so much confidence, because this isn't new, right? The industry at large was born
in this exact same kind of moment in time, with a lot of the same characteristics, just 25 years ago. And the same thing's going to happen, right? We're not going to be replaced by AI, but we better buckle up, learn how it's going to change, and ride the wave, because there's going to be a wave and some people are going to miss it.
And other people are going to flourish and really kind of probably find this whole new industry marketplace and thrive. And I think we definitely want to be the latter, right? So it's exciting. And I think that that's definitely framing a lot of what we do and how we approach AI as a whole in staying true and kind of
centered by our DNA, which are the usability heuristics and the things that we know are going to be true no matter what, which is that users are going to have to understand how they can use AI in order to take advantage of the technology.
And that is what we all do best, right? We understand human behavior and help bridge the gap between what technology can do and how users can understand that technology and then make technology do what they need it to do. And that bridge, that gap is UX. And so I think in a lot of ways, you know, we may need to kind of
shift the direction we're looking, but it's still going to be applying a lot of the same things we've always applied. Makes total sense. And you also did some research about that topic, right? So some usability research where you really looked into how people are using ChatGPT, for example. Can you talk a little bit about the results that you got from the research studies?
Sure. Yeah, I think the funny thing about AI right now is a lot of people are making a lot of guesses when...
Really, we have to go watch people use it, right? And so that's exactly what we did. And it's a small study, but very few people do qualitative usability testing because it takes time. So we watched eight different professionals, and we've actually been doing a few different studies in the AI space: one quantitative, one diary study, and then kind of qualitative in this case.
And we watched how people used it. And what was interesting is we did a mix of tasks that they brought into the session. So they had an idea of how they wanted to maybe use ChatGPT
and brought that in, and then, based on their profession and what they do, we actually created tasks for them. So then we got this kind of mix of something they were kind of mentally prepared to do, where their mental model was already framed around the task they wanted to complete, and then some new ones, and we could kind of watch them do both. We also had a range of experience.
All the participants were in some way lightly using ChatGPT, but some were far more experienced than others, which became an interesting thing to look at in general. So,
across all these sessions, we watched users just use ChatGPT. And this research was done with one of our research assistants, Turin, and then Jakob Nielsen as well. So we kind of are pooling a lot of different expertise, all watching and observing these sessions, and then starting to kind of
unpack the different behaviors that were occurring. I share a lot of things on LinkedIn, and a lot of people like to bring up the debate on, okay, well, what is the learning model actually doing and changing? And that's not our expertise, right? We're here to study user behavior. And that's kind of what we focused all of our energy on during the study.
Mm-hmm. And you came up with two insights, right? So two main insights that you also shared on LinkedIn. Yeah. So coming out of these studies, we actually have a lot of insights. I would say kind of eight major behaviors that were occurring. And as anyone who's done research knows, you don't really know what you're going to find in this type of study. You're just observing. And then it's a lot of
discussion and analysis and kind of affinity diagramming and kind of understanding the patterns, going back and rewatching the sessions and the clips and kind of organizing and grouping. And that to me is that critical thinking skill set as a researcher. A good researcher has to obviously conduct a strong and viable study, right?
without bias. But the harder part about being a researcher is being able to take all those findings and start to make sense of them. And that's really where these different patterns and user behaviors with AI emerged. So really interestingly, in kind of the first two insights that we published, and these insights are rather high level, and there are kind of some
macro behaviors occurring within them: accordion editing and apple picking. And even arriving at these names is actually a discussion in and of itself, because these patterns
hopefully are going to be referenced, and we wanted to choose words that were plain enough that it was clear what they were, but not so plain that they're used for a variety of different things. And so accordion editing was a really common behavior we witnessed throughout the study, which was:
Users essentially taking a lot of information and asking AI to just minimize it down to three bullet points. And then maybe taking one of those ideas within that synthesized list and expanding it.
And users were doing this over and over, you know, so stretching, condensing, stretching, condensing, stretching, condensing. And there are all different words for this, all different kinds of reasons or rationale. But the underlying pattern was what we then named it after: just like an accordion instrument, right? This shrinking and expanding. And I think that that is a really interesting thing for a lot of reasons, right?
One, that starts to show us what people are finding AI useful for. Right. But behind that, it's a behavior that we oftentimes do when we're processing information. And now we have to kind of articulate or tell and prompt the AI to do it.
So I think knowing that that's a really common user behavior can mean a lot of things. Obviously, anyone listening to this that's designing a product around AI, knowing this user behavior is valuable. But I think even for all the practitioners, knowing how you may be able to actually leverage AI is also really valuable. And then designing that to be an easier task
is kind of the most tangible result from a finding like that. So accordion editing was one primary insight. Another insight was this idea of apple picking. And we talked a lot about, you know, should it be cherry picking? Should it be broader? But apple picking is this idea of, you know,
putting a prompt into AI and getting a lot back and then picking out the pieces that you want to bring forward into your next prompt. And I think that, you know, cherry picking makes it sound like they have to be these perfect kind of pieces that you're picking, but really users aren't doing that. Users are just kind of highlighting a primary point, something that's interesting, and then asking the AI to do something with that
So I think that that is really interesting in general, that knowing that AI's strength right now, and this could all change, is giving you a lot of different ideas and then choosing those ideas that you want to move forward with.
And it just goes to show you that, you know, whoever's using the AI still has to be critical. It's not all going to be valuable and it's not all going to be perfect. And part of using AI right now is picking out those important pieces. But
it also goes to kind of show you that with these interfaces, we're so early in all of this, right? It's such a transaction right now. We put in something and AI spits something out. We put in something new and AI spits something out.
But you can imagine this tool evolving so that I can go through and highlight certain pieces and say, keep this, keep this, keep a touch of this, and then immediately have AI do something with those pieces. And so I think as we start to kind of evolve our thinking around what this tool can do for us in different industries, in different verticals, in different ways, it's
Knowing that that is a user behavior that is consistent, no matter what someone was doing, can be really valuable. And then the question becomes, will it continue to be a user behavior? And who knows? Fascinating. Really fascinating. Yeah. Yeah.
And I think this is so interesting to look at from the UX perspective when you are designing these kinds of chat interfaces like ChatGPT, for example, right? Which you did the study on. How can you be sure that the user picks the right apples and not the rotten apples that, you know, they actually shouldn't pick? So that's a challenge, right? Yeah. And, you know, who knows? I think you have to design...
For the common case. We have a quote that's: users aren't lazy, they're efficient. And that's not
their problem, it's ours, right? So anything we're designing has to take that into account. And so it becomes a really, you know, whether you're designing something with AI in it, which I think is great, but that's probably the minority. Actually, I think a lot of the people listening are probably trying to use AI for what they do. And knowing that these are really common behaviors, it's one, are you picking the right things?
And is that something that maybe you should be doing more of and giving AI more direction? And then are you or do you have opportunities to use AI to condense or expand different things you're working on in a way that's going to be really cheap and fast and then also extremely valuable? And I think that that is one of those interesting tips that I would give everyone. And I think a lot of people are asking, how do I make sure I'm not left behind?
And my answer is use AI. I don't care what it's for, but dig in, use it, play with it, play with it.
push it. And I think that's one of the best things you could be doing right now. My husband and I were trying to make dinner the other night and I was like, oh, we have these random ingredients. And I said, put it into ChatGPT. Let's see what it says. Or: we really liked this movie, let's find other movies like this. Let's play around with it. So whether it's at work or at home, become fluent in AI. And that doesn't mean that you're the best at it,
But it means that you're jumping in the deep end and you're using it and you're pushing it to its limits. Because I think that that to me is what most of us, the role most of us are playing in and with AI right now. Mm-hmm.
I totally agree. I think that's the best thing that you can do, really play around with it. But what I'm unfortunately seeing a lot at the moment is people talking about it, talking about it, but the way they talk about it shows that they have never really used it. And I think this is really sad, because what we need is the critical thinking of:
here this is going super well, there you should be a little bit cautious; and finding your way of using it, right? Like you mentioned with the movies or with the dinner, I mean, amazing use cases where it makes so much sense, right? But how do you actually get started with using AI? So when you are a newbie, maybe you have used ChatGPT a couple of times, you want to integrate that in your workflows, but maybe your employer says, be a little bit cautious, you know, you're not allowed.
How do you get started? What would you say? So I don't think that there's a wrong answer, to be honest. I think anywhere that feels the most comfortable to dip your toes in, start there. It's kind of like a...
you know, the workout, and this is such an American phrase, I think, but you know, the best workout routine is the one that you do every day. So it doesn't matter how intense it is, just do it every day. And it's kind of the same thing. We have these slogans. Actually, this is a great example. If you're watching the video, I have this slogan behind me that says "it depends", which is one of our UX slogans, right?
And I kind of create all of these visuals as a fun hobby at NNG. On LinkedIn, I put up an example of six different slogans, one being the one I created and five other ones being ones I tried to get AI to create for me.
Right. And that becomes really interesting, right? I'm playing around with trying to figure out how I can get AI to give me the best design for this slogan. And then I put all six up on LinkedIn and asked everyone: which one do you think is mine? Now, this was really interesting, very sobering. You know, a lot of people picked ones that I would have been a little bit offended to hear they thought I made. But it just goes to show you, right, that
one, you just have to use AI and try to make it do something. That was a very humbling experience for me, to even try to get it to produce these things. Two, I think what people see as being AI-generated or not is really interesting. And then three, the differentiating factor being taste. I'm going to write an article about this:
as more and more people are able to create things, our differentiating factors as designers are one, critical thinking, and two, taste. Being able to look at something and say, this is good or this is bad. And as more and more people can create
anything they want. There's going to be more and more noise and content out there. And the only thing that's going to differentiate it is this idea of one, it being validated and, or you've applied critical thinking to that output. And then two, it being good, like actually quality wise good. And so it's been really interesting, but that was a great example of
somewhere to dive in. I hadn't really played with creating images with text in them, and
it was a great moment to kind of see what I would say. So depending on where you are and what you're doing, you know, throw a wireframe in there and ask for feedback. Or say you have an image that you're trying to create: document the different prompts that you use and observe what best practice you would advise. I think
right now we're in an exploratory phase, right? There's nothing that is not going to be useful. And that could even be a total bust of you trying to use AI. Yeah. Being open about it, right? I think it's also really valuable if people share learnings that they had, things that didn't work, right? So, okay, this is not how you should do it.
Being open about it. This is what I really like on LinkedIn as well, when people are sharing things that are working out. I really like that. Yeah, we actually, I have an article coming out, and Jakob also published it, which basically combines different tips from different practitioners that we had heard, in terms of best practices within kind of the UX field
of getting started with AI and what people have learned. So I'm sure maybe we can put that article in the show notes. Sure, in the show notes, for sure. I'm happy to link it there. Sounds super interesting. And I mean, it's not only text that we are creating. It's also images, it's audio, and also interfaces.
And understanding. Like true information and understanding, we're also creating. And I think that shouldn't be overlooked. So even though all these tangible things can be created, I also think that it's conceptual things that AI can also create for you. Yes. And yeah.
Like, what do you think, how will the role of us designers change now? I mean, of course, you can't look into the future, but from everything that you've seen in the last years and the testing and research that you've done recently. Well, I don't want anyone to steal this from me, but I'm working on this idea or concept right now. And this was kind of based on the studies,
and then kind of based on anecdotal evidence that I've heard, which is: I think a lot of designers and researchers are using AI in kind of like two buckets. This divergent bucket and this convergent bucket.
And this divergent bucket is using it early to explore, to generate, this concept of quantity over quality. I'm using it for ideation. I'm using it to create a bank of research questions. I'm using it to expand on this one concept and give me a ton of other concepts. And that to me is, and I guess in a lot of ways this is accordion editing, you know,
problem solving on a really macro level, because you're expanding, you're pulling it all the way out and saying, okay, for really cheap, I'm going to gather all these different concepts, ideas, et cetera, that are context-specific.
That's the difference versus pulling and going through articles from Google. AI can give me something that's context-specific almost instantaneously. And so I think a lot of designers, it's extremely valuable there. And then the process, our process becomes looking for patterns, becomes deciphering what is good and bad.
applying critical thinking and saying, okay, of these 20 different things, these three ones are interesting. There may be something there. And then taking them into, I think, our process that we've always had.
And then I'm also seeing AI occurring in this like convergent state in this like, okay, I've applied my critical thinking. I've applied my craft to the job at hand or the task at hand. And now I want to synthesize and kind of complete it, get it ready to communicate to others, get it ready to maybe publish something.
et cetera. And I think that that convergence stage is a completely different area of the process, and AI is also really good there, where it can take a lot of different things and quickly package them up. So I think
that, to me, depending on where you are and what you're doing, those are kind of two very easy places to start using it in your process: the very beginning when you're diverging and exploring a lot, and then the very end when you're getting ready to package it up, communicate it, send it away to someone else. So those, to me, are two really great areas to start
expanding into AI, when I really think the meat of our process, the actual finalization of our recommendations, or the final wireframe or interface that you're designing, the final research plan and the order of questions,
AI is not going to do that for you better than you and your craft can. And so I think that's where you have to leverage what you know, take all those inputs that you maybe used AI to help you generate, and then apply your rigorous thinking to it. And I'm afraid that's where you're going to be able to see people who can and cannot do that, even if both are using AI. Mm-hmm.
super interesting. I really loved how you broke it down in the two areas. It makes so much sense. And I realized from my own use of AI, I really like to
prepare an article, for example, or write a concept, and then I ask ChatGPT for changes: summarize it, change it, spell check, all these kinds of things. And from my experience, the results are much, much better than when I try to craft a perfect prompt and expect a great result.
Although, yeah, that might change in the future, right? But at the moment... Maybe. And I think we may see that as AI gets integrated into more of these task-specific tools. So we may see this more seamless experience from start to end. Grammarly is doing some really interesting things with AI right now, and that's a good example of AI being applied to a very specific tool or task.
So, you know, maybe eventually if you're working in Microsoft Word or Google Docs, AI can influence it in the place that you're already working and be more of a player end to end. But I think where we are right now, it's not, right? So it is this very task-specific thing. And we didn't quite quantify how closely or how well each participant executed
their task or goal in the study. It wasn't that type of study. But I will say anecdotally, the people who were most satisfied with their end outcome did a lot of taking the output, putting it somewhere else, playing with it, and then putting a new input into AI. And so I think that what I've seen from people who are really using AI and are satisfied with the outputs, it's this, like,
transferring, you know: you pull stuff out, work with it, put new stuff in; pull stuff out, work with it, put new stuff in. This is so interesting, right? Because when you look at the interface of ChatGPT,
it's not optimized for that. You only have the input area where you put something in. Of course you can copy things in there, but you can't really edit stuff, so you need to have another tool where you copy things, and then copy and paste it into ChatGPT again. So it's super interesting that, of course, I mean, it's early stage, but these kinds of workflows, no one cares about them, obviously. So
there's no option to use them in a good way at the moment. It's all back and forth and switching between different software and different tools. So I think there are definitely some opportunity spaces. Definitely. It's also why we've focused on looking at these patterns of behavior and not a usability review of these tools.
These tools are going to change. They're maybe going to change already by the time someone's listening to this podcast episode, right? And that's a known. And so if those things are known, what is most interesting to understand right now? And it's people's mental models. It's people's core behaviors, tool agnostic, output agnostic, whatever.
What are kind of the thought process of using AI? And I think that that's really why we focused our research on that side of things rather than the tools themselves. Because if you focus on only the usability issues within the tools themselves, the tools are so limited so that our conceptual thinking is going to be limited. For sure. Yeah.
You also shared some great articles about different behavior models and everything. I will also link them in the show notes so people can check them out. Super helpful, I think, for everyone to dive a little bit deeper into it. And I just want to switch to another, but also very interesting, topic, because I know that at least in Germany a lot of people are a bit skeptical when it comes to AI.
People here are very aware of the dangers, and people are overall, I think, a little bit more cautious, which is, from my experience, not the best way. I mean, curious but cautious, I think, would be good. It depends.
Love that. 100%, right? And you actually did a second study, which is super interesting, where you focused on how people actually work with AI. So a little bit more the emotional part, I would say. Can you talk a little bit about that study and what were the basic learnings from it?
Sure. Yeah. So this basically came out of a combination of a few different studies that we had run at NNG, and it was this idea of anthropomorphism, which is a big word, but it basically means treating AI as if it's human. And there are a lot of reasons people do this. And we identified four of
what we call degrees. They're not necessarily layers or levels, in that, you know, one or two degrees could actually both be applied or in use at once, and they're not mutually exclusive. And these degrees range on the dimensions that I identified as connection, so emotional connection or relationship to
the AI, and then function. And so each of these degrees achieves a certain emotional connection and a certain function. And so the first degree we called courtesy, which is this, you know, manners, politeness markers: thank you,
hello, I'm sorry, those types of things. And I think that that's interesting in and of itself. We saw things like this early on; you know, anthropomorphism is not a new concept when it comes to technology. The Eliza effect comes from this MIT study back in the 60s, where the computer kind of played a therapist role.
And people anthropomorphized ELIZA. So this isn't really new, but we're just seeing it applied way more broadly. So this courtesy degree, the first one, is relatively low function and also low emotional connection. Since actually publishing this article, I've heard a really big theme in anecdotes about people not wanting to lose the habit of being polite.
And I think that that's fascinating in general. That's not even in my article, but this idea that people already are switching back and forth between AI and humans so frequently that they don't want to then not be polite to the human because they're so used to being demanding of the AI. And so I think that that context switching is just a really interesting idea, especially as AI is in more places.
You know, AI may eventually be checking us out at the grocery store. AI may be checking us into a flight. And so this courtesy is interesting. So that's the first degree here, with pleases, thank yous, et cetera. Good mornings.
And then the second degree is reinforcement. So this is basically positive affirmation. When the AI has done something well, you know, it's like a child or a dog. I'm training my dog right now. Good job. Good boy. Basically the same thing. Good job. This looks good. It's basically reinforcing an output that aligns with your goal. I heard a lot of this; a lot of people do this. There's a strong hypothesis that the AI learns and performs better
with both of these. So we heard that people anecdotally think that the AI is more polite if they are polite to it. Same thing with reinforcement. If you say good job, then it increases the likelihood of your future success. We don't know. It's not proven. I'm not making any strong remarks on that, but that's certainly kind of anecdotally what we hear. So the reinforcement I think is significant
It has the same kind of low emotional connection as courtesy, but a slightly higher function, in that people think it's going to functionally increase their success. And then the third was really interesting, and something that I hadn't personally been using AI for: role play. So it's assigning a specific point of view to the AI
that you want it to think, contribute, and give you responses back from. So we saw things like: I want you to act as a project manager. I want you to be an interior designer. It's assigning these specific points of view to the AI so that anything it gives back is from that point of view. One could view this as
kind of context setting, which makes sense, right? That, to me, is really logical. What's interesting to me, and this is kind of my aha, which happened as we were analyzing the data: when we would dig into that behavior with a participant, when we would ask them, can you speak more about your prompt and the different components you included, they would reference this idea of it being easier for them to think about the AI as an actual person. Right? And so it's this cognitive gap that they're trying to close. Now, where else in the history of our industry have we seen this cognitive need for a bridge? With skeuomorphism.
And this idea of when we first started using digital products, they had to look like everything that we were used to. The classic example being the bookcase for your digital library that had a wood grain on it.
And it was to close that cognitive gap of, like, oh, now these are books and they're digital, but they're the same idea as this thing in person. And so it's this question, and this is more of a question, this isn't even, I would say, a validated statement, but one could think about this era as an evolution of skeuomorphism into, like, prompt skeuomorphism, this next era of trying to bridge the physical with the digital. And now we're seeing it by assigning the AI a very specific role or context. And if I had to make a prediction, and I don't know if this will end up dated, which is great, but I think that there's going to be a whole marketplace related to this, related to the idea of AI playing a specific role, a specific subject matter expert, a specific service industry expert. I think we're going to see an entire industry that evolves from this idea of role play.
Interesting. I think like Meta is doing, I mean, they're trying to do something similar, right? Where you have this AI chat and you can choose between the different people. It's not working perfectly, of course, but it's going in that direction. Totally. And why is that valuable? Well, they probably did research and realized how helpful it is for a user to be able to assign a very specific point of view.
And that's kind of how these things start to manifest, right? Like one could read this article for fun, but really what we're saying is like there is a user behavior that makes AI easier to use when people can assign it a specific point of view.
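The role-play behavior described here maps directly onto how chat-style AI interfaces structure prompts: a system message assigns the persona, and the user's request is then answered from that point of view. As a minimal illustrative sketch (the function name and message format below follow the common system/user chat convention; they are assumptions for illustration, not something discussed in the episode):

```python
def build_role_play_messages(role: str, task: str) -> list[dict]:
    """Compose a chat-style prompt that assigns the AI a specific
    point of view ('I want you to act as a project manager')."""
    return [
        # The system message sets the persona; subsequent replies
        # are framed from this point of view.
        {"role": "system",
         "content": f"You are an experienced {role}. "
                    f"Answer every question from that perspective."},
        # The user's actual request.
        {"role": "user", "content": task},
    ]

messages = build_role_play_messages(
    "project manager",
    "How should we scope the first sprint of a redesign?",
)
```

This list could then be passed to whatever chat model API is in use; the point is that the role assignment lives in its own message, separate from the task itself.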
Fascinating. But I think it also makes so much sense, right? Like, imagine when you're going to a doctor, you know what to ask. You wouldn't ask about, like, what you should wear today, right? You know that this is the time to ask everything about your health. I think a really sticky thing that this starts to get into, and again,
I as a researcher and NNG as a company, we're in the business of identifying user behavior and not necessarily, you know, insinuating the effects of that user behavior. That's a whole different realm of study, actually. But I think what's really interesting is starting to question, like, what does this mean for trust?
And I can imagine a world where this idea of role play can be really positive in a lot of ways and extremely detrimental to humanity in others, based on the idea of trusting the system as if it were human, as if it were this expert, potentially even more than you trust an actual human. It's going to be positive and negative. And I think it's just an interesting kind of layer, and
I do not claim to have any idea of what is going to happen as the repercussion of a lot of these user behaviors.
But I think that's a really good point. Thinking about the ethical considerations, also the problems that are coming with AI tools like ChatGPT, apple picking, like, you know, picking the right apples, making sure that the outcome is not biased. Also, apple picking is easy for a senior or lead designer, but a little bit more difficult if you're a junior: all the apples look the same and you don't know what to pick.
So I think there are a lot of questions that still need to be answered at some point, and we also need to educate people on how to use AI in a certain way, right? But I think Jakob shared something, I think yesterday, on LinkedIn that was really interesting, with some tips around how to use AI. And one tip was: if you're a junior, ask your senior manager before you pick any important results, right? To get some feedback on it, and to also learn through that feedback from the senior manager or senior product designer what the right apples to pick are, for example.
No, you're right. You're right. And I think, you know, it's the same thing. I keep making this metaphor, but I don't want people to be scared, because it's the same thing with computers, right? We all have the grandma or grandparent or parent who uses a computer wrong, right? And here it's the exact opposite.
So, you know, of course there's way more nuance here, and I don't think using this wrong is as tangible, and I think that's where it gets much harder. But the idea in and of itself, that ultimately we have to be the drivers of this machine, is still there. And maybe even more needed and apparent than ever.
And then just briefly, the last degree that we found was this idea of companionship. So this, to me, is taking role play but pulling it all the way to the end of the emotional-connection spectrum, and basically saying: okay, I'm going to have you play a role, but for my emotions' sake, for my emotional companionship. And this is the friends, the companions, the assigning of actual feelings and emotions to the AI. This is going to be an area that becomes really interesting in general. You can imagine it in gaming; it's going to totally rock a lot of these secondary industries. It's going to be really interesting. And then even all the way back to the MIT study with ELIZA, this therapist-oriented point of view. So yeah.
Yeah, and of course, these are just kind of the first two more official published findings from my research. We have several others on NN Group, around information foraging, and AI as a UX assistant, which we are about to publish findings on from a study. So we have a lot of interesting things coming out from researching this space. Amazing. So a lot of things to look forward to, I would say. Sounds super interesting. Yeah.
I mean, you already mentioned a ton of tips throughout the interview, so a lot of helpful things for you UX designers. But to wrap up, are there any main tips that you are giving UX designers at the moment to really get ready for what's ahead of us? You know, there's so much content out there that I think
I don't know about all of you, but I don't think it's very helpful to say, like, go read articles, read the news. You know, there's so much. So I really think it's to use it. It's to make your accounts. It's to get in there and actually start practicing and experimenting. And if it feels safer to do that in your personal life, you know, for the recipes or the movie recommendations, great, do it. I think if you can use it in ideation, that's a very low-impact place, because you still have the rest of your process to validate anything that comes out of it. But it really is to use it.
And I think specifically use it in your process in places that are going to make you look better, that are going to give you a higher-quality output. And that, to me, is the tip, which I know I mentioned, but I'm hyper-fixated on it, because I think this is the moment where you see some people catch the wave and other people kind of get left floating in the ocean. Mm-hmm. Sad.
But it's not too late. So people can still catch the wave. We're at the very beginning. You'll still be catching the wave for another year at least. But I do think the confidence right now is going to help as tools get more and more complex to design for these behaviors because they're going to.
Yeah, makes so much sense. So everyone who's listening: go out there, try things out yourself, and if you're willing to share the learnings on LinkedIn or Instagram, honest learnings, I would love to see it. I'd love to see what you're doing. Yeah, me too. So, Sarah, where can people find you? What is the best place?
I think, you know, obviously NNgroup.com is where I publish all of my findings. So many of my colleagues are also running amazing studies and publishing findings, all for free. So NNgroup.com and LinkedIn are great places. The best place to find me is on LinkedIn, Sarah Gibbons. I am there. You can tag me and I'll see it and comment.
And then, of course, NN Group is also on Instagram. So we've been starting to move into Instagram and actually also YouTube. We have a podcast that talks not specifically about the future of UX, but about kind of everything within the UX field. And we've just started to post YouTube Shorts and such. So yeah, come engage with us. We would love to touch base, hear about you, field questions, et cetera. Nice.
Thank you so much, not only for all the amazing resources that you've been sharing on a daily basis for years across all the different platforms, which are so valuable, but of course also for your time for this podcast. I really appreciate it. I'm so happy that you took the time. Could you tell that I could go on like this forever?
I mean, you are welcome to come back anytime, just so you know. Anytime, if you want to share something or just have a nice chat, hit me up. Sounds great. All right. Thanks, everyone. Thank you so much, Sarah. And bye-bye. Bye.