
#133 How AI is Transforming UX Research w/Tina Ličková

2025/4/30

Honest UX Talks

Transcript


If you are in your human body as a part of an interview, you have much stronger emotions about the interview, and you tend to be much more attentive to interviews which you attend personally. You empathize with the people.

And that empathy experience doesn't happen through research readouts, presentations, or even videos or podcasts. I tried everything, trust me. It happens through being there, having the experience. That experience is untransferable.

So we still need to be talking to real people. We still need to make the effort, at least in one research study, to see one or three or five people, to talk to them, to actually have this kind of physical reaction in the form of emotions, because that is something that sticks with us. And that is the magical moment that gets lost if we completely outsource it to AI.

Hello everyone and welcome back to Honest UX Talks. Today I'm joined by Tina Ličková, with whom we will be unpacking a juicy topic: how AI is transforming UX research today and what we can expect moving forward.

And I'm bringing Tina on today because I believe she has very strong opinions and interesting takes on how AI can be used in research today and what we as designers can learn, especially if you are running research or using research in your day-to-day job. I think hearing what she has to say about the current state of things, especially when it comes to AI and research, will help us build a better perspective.

But before diving into today's topic, I also wanted to send a shout out to Wix Studio for sponsoring this episode. Wix Studio is an intuitive platform made for designers and agencies who want to create exceptional websites without fighting the tech. You get full stack capabilities, multi-site management, and smart AI features that make the workflow go smoother. Now, here is the part that I think more designers need to hear about,

especially if you're job hunting or currently between gigs. You can use this time not just to polish your portfolio, but to actually start earning passive income from your design work. And one way to do this is by selling templates on the Wix Marketplace, a solid way to turn your design skills into real income.

By the way, recently in the community, one of the members started selling templates and is already making passive income. And the cool thing about it is that once you land a job, you'll still get the passive income, or, of course, attract new clients if you want to keep freelancing. Definitely go ahead and check out the Wix Marketplace. You'll find the link with more details in the show notes. And now let's go back to the episode.

Just a little intro before we get started with this cool episode: Tina is a seasoned UX researcher. She's also a strategist and the host of the UX Research Geeks podcast, which you might have seen online. She brings a lot of researchers onto the podcast, so there's already a lot of cool content out there if you want to check it out. Also, Tina has a pretty diverse background. When I checked it on LinkedIn, you could say a very varied background: analyst,

I believe you worked at agencies, as a research ops manager, working in different environments and different business contexts such as meteorology, education, finance. So a lot, a lot of background, obviously. And I believe that gives you a mature perspective on what you will hopefully be sharing with us today. Anything else I should add here that I haven't? No, really nice introduction. I'm really happy somebody did it for me. Alrighty, perfect. I'm happy I didn't mess it up.

Awesome. Well, let's get started with just a little... what I really like to do when we have guests on the episodes is to try to understand a little bit better what brought you into research. I know that you have a very diverse background, but I'm always curious how people get to their current destination in terms of research, and now you work as an independent,

I guess, freelancer or maybe consultant with some companies. So I'm very curious, how did you get into research? What made you interested in it? Because you come from a different background originally. What made you stay here? I would love to hear a little bit more about the motivation for getting here. Just before I go into the motivation, or how the story of me and research started: I'm also in a big transformation. I don't know how to name myself. I wouldn't say I'm trying to become a product manager, but I'm definitely a product

person, because my work is also showing me that I am a product consultant or strategist, whatever name you put on it; usually people don't know what a strategist is. But research started when I was working in an advertising agency, and in advertising I felt like, oh, this is probably not for me. And it's this old story: Facebook apps were starting at the time, and my clients just didn't understand how they worked, but we were using them a lot,

you know, to run competitions amongst the customers and stuff. So I started to build PPT-like presentations: oh, this is the first step of how it works, this is the second step, the third step. And then I was like, wow, this is fun, because I like to know how things work and how to build them. And I already thought, okay, the product world will be mine.

And then the second step was like, okay, let's explore this. I found UX, I started to look for jobs in UX, and it naturally became, oh, okay, people can pay me for asking questions and making sense out of it. So that was basically the moment where

I was like, okay, maybe I go between strategist and researcher roles, and it varies in my life from month to month sometimes nowadays, but also from role to role. And then I started to explore research roles, as you were mentioning research ops, which is a nice part of my job, but I realized it's

more administrative than I would like it to be. I do like hearing this perspective, because I feel like many of us today, no matter what your current role is, what your current bubble or set of responsibilities at your current job, many of us are actually challenging our roles and thinking about, oh, what will my role look like moving forward? Right.

And while we, yeah, exactly. While we used to say, okay, here is the product manager, here's the product designer, here's the researcher, here is the developer, this and that, right? We were very fragmented in a way, we each had our own set of responsibilities, but we all see the change and transformation happening in the market today. And we all ask ourselves:

What are our strengths today? And I like that you mentioned that you were reflecting early on, as in: I seem to be leaning towards product and strategy, and it brought me to research now. But again, moving forward, we already see the transformation happening in the market. And we all ask ourselves, am I more in the design bubble? Am I

more of a craft person? Am I more of a product strategist? Am I more of a builder? What kind of direction am I taking moving forward? Which strengths do I want to double down on? And I think I indeed fall in the same direction as you are mentioning, which makes me even more interested in talking to you now. Yeah. I mean, for people out there: if you are not thinking about how your role is changing, wake up.

I hate giving advice, but this is probably it: if you still think that in a year or two it won't matter, that if you're a researcher or designer your work will be the same, even in very old-fashioned companies, then you probably are not looking very well at

what is happening right now. And it doesn't mean that I'm blaming somebody, because some companies make you really slow at adapting, even in your own world, but it's going to change super dramatically, I think. For sure. We have to at least be aware, or we have to be observant about these things and where they take us. It's nice to get comfy and settle into what you have, but the world is too dynamic these days. Anyways, let's go back to the topic of today's episode:

how AI is transforming research. We might want to start by focusing on what's currently happening, what kind of trends you are seeing. Maybe let's start from research, but then also try to see how this impacts the product, IT, or whatever other product industries we could look into from your perspective. So what's the current state of AI in the research market, and how does this translate into the product if we think more high level?

I did the most obvious thing that I could do and I spoke to ChatGPT about what the future is and what the current state of AI in research is. And it's actually fascinating: I think that AI itself doesn't really keep up with the changes, and the speed of the changes, that are happening in our business. So it's like, oh, wait a minute, this is already somehow happening. So nowadays you can

really augment every step of the research process with AI. It can help you from the preparation till the last report, right? And we can go down into the details of what we see happening. I think one of the biggest wake-up calls for researchers was when synthetic users came out and people were just like, no. And I think my initial reaction was also like,

never, this is just really bad. But now we see where it's going to take its place and how, and it's very relevant as well. And what we see happening now is that it's not only the analysis part, where you see research repositories being AI-driven more and more, but it's also taking over the conducting of research. So there are more and more tools appearing which

are voice-based, which are trying to figure out how to actually speak to people through AI assistants. These tools are probably still at a baby or embryo level, but it really shows that the changes happening now are coming even sooner than predicted by more technical people,

and that we need to figure out, okay, what do I take on now? Because a lot of things make sense, a lot of things you can test, but that's probably it. So what you are saying is basically that we are seeing the massive change mostly in the voice and speaking tools, in that market or sphere. This is probably where I feel it might be a surprise to some that it's already happening, because there was a

belief that "oh, we have to conduct this stuff on our own". It was a matter of time before it happened; I think it's probably coming sooner than people anticipated. I think it's at the stage where we can't really use it that much, or we have to follow it through and use it only sparingly or in small amounts,

but it's actually showing us, okay, this is where we can predict that it's going to take over the conducting part too, which was the last part where people thought, oh, can it even be done by AI, or can it be done well? It's good enough to try out on some aspects of your research projects. Ioana was talking about it in an episode of your podcast. The analysis part is already at the stage of, oh, we are trying to figure it out. We are trying to figure it out in the

ethical space. We are trying to figure out how to make sense of it. We're trying to make sense of it like, okay, it analyzes one interview very nicely, but not five interviews, because it can't really put things together, connect the dots. And then we also see some stuff in the preparation, but that is basically about whether you are using AI on a

daily level or not; that is something that you can kind of work into your own workflow, like writing scripts or preparing surveys. It's more about, and this is probably going beyond your question of where we are now, and there may be better people who are actually running some of these tools. Also, I forgot to mention: recruitment is being automated very heavily. You just fill in what you need to fill in, or you don't even have to, because you just put in a brief and it can

figure out what kind of respondents you need. But where I'm missing something is the preparation on a high or strategic level of research. I think we might want to stop for a second and still talk more about the current state and what signals it might give us moving forward. So you did mention a couple of interesting points. First point: when it comes to research, for anyone, product designers or PMs, and the other people who also do research very often,

obviously the easiest thing to do is to ChatGPT the list of questions you should be asking your users based on whatever goals you might have, right? So the first question I have for you based on this: do you yourself sometimes ChatGPT the questions, or do you still feel like, no, we need to be more thoughtful, we need to think this through before we even go to ChatGPT and lazily outsource this part? What is your take on preparing for the interview as the first thing?

It really depends on the day, I would say, and how much I'm in the mood. But I feel like ChatGPT became my research colleague. I am a team of Tina and ChatGPT, for example, or Claude or whatever tool I decide to use on that day. But sometimes I feel like, oh, I just want to get a grasp

on, what do you think I should be asking about this topic? But of course it has to be well prompted, blah, blah, blah, right? We don't have to go into those details. But I want to get the inspiration. What I see if I do that, and that's the preparation part that I'm talking about, is that it asks quite good questions, but not as good as I would frame those questions.

Because there is a specific syntax in which you want a question to be asked. And sometimes you even feel like, oh, ChatGPT, for example, does this thing of asking a closed question instead of, for example, a nudge, which is for me a slight difference: when do I ask a question and when do I want to nudge somebody to tell me something? It's a completely valid choice to use a nudge instead.

ChatGPT is like, oh, I have to ask questions. So it doesn't really make that distinction. But what I started doing on a daily basis is thinking with ChatGPT about the questions: what do you think about this? How should I phrase this? How do we not build, for example, a future prediction bias into it? Even if we want to know about the future, what the user or the customer wants to

try to do or what they wish to do, we don't want to ask about the future directly. So that's where I go into a heavy conversation with ChatGPT: what are other ways to ask it, what framework of questions, what framework for the scenario, or even which methods would you use? And that gives pretty good

results. Yeah, that makes sense. I like the point that you made here that it really goes back to the prompting and how you're asking the questions, essentially. Because if you just tell it "help me prepare questions", of course the AI is not right there in the context to understand, okay, we are going in that direction and you might need to redirect or nudge or whatever, right? So it's almost like this semi-flexible interview. Semi-structured. Semi-structured, yeah, exactly, right? It can't do semi-structured at all.

Basically, it can only give you a fixed set of questions that you might want to go through. But without experience, you might not always understand how to navigate through the interview. And it's not just the experience, it's also being aware of, oh, what could happen if I ask this question? Because there is a certain thing about writing the scenario with a prediction in your head: what could the answers be, and

how I want to continue the conversation two questions after this. And this is what you see that ChatGPT obviously can't do right now, although it's getting there. The conversations I had with ChatGPT two years ago are completely different from the ones I have now. And it's on a very different level from its side as well. But sometimes I feel like, for example, when I'm trying to build some surveys, I'm like, oh, wait a minute,

you are basically giving me the question in a form where future bias is just built into it. I take future bias as one example. How would you ask it differently? So, as with everything you do with such tools, you really need to iterate on it, and you need to have this critical mind and be aware of biases, people's behavior,

psychology that might appear, and stuff like that. And this is where I feel like, you know, there's a lot of hype talk that AI will take our roles, that it will replace us. But again, as of right now, I cannot imagine AI getting to the level of consciousness, understanding, and nuance needed to really understand the context so well in the moment of a conversation with a human, to quickly understand how to react and how to ask the next question.

We might get there. I wonder if you think so, but I cannot imagine it right now, in the next maybe year or two or three or five. I don't know. Honestly, I can imagine it. I can imagine it because of the leaps in how it's getting better at natural conversation. And I love to talk with ChatGPT:

her voice, it's just amazing. It still interrupts me when I want to explain something, but it gives me the empathetic talk. It's super supportive, right? It has this positivity bias; sometimes it's even pep-talking me about how awesome I am. And I'm like, I don't need to hear it, but continue, of course.

It could do this job really well, but I was, for example, trying out a tool which does interviews via voice AI. And the conversation was really nice. There was just one slight delay of

like seven seconds. I was looking at it while it was processing and I was like, okay, what is happening? In a natural conversation you are not quiet for seven seconds. But it was trying to get to know me, and the third or fourth question was, what is the biggest challenge in your life, and how did you master it? And I was like,

way too soon, way too soon. I remember one user, for example, telling us at a workshop: if you ask me that question at the beginning of my app experience, it's like going on a date with a guy and him asking me to marry him, you know? And I was like, oh, okay.

And this is exactly what the voice assistant did to me. Yeah, interesting. I mean, it's still calibrating, I guess, but I see your point that we can actually get there. I think this voice part, when you are interacting with the AI, when you are talking to it, not writing and getting the analysis done, blah, blah, blah, here is the output, but really the interaction part of it, when you're trying to have a conversation, it tries to be like a human thing,

but I also feel like it's not there yet. However, I can see your point in terms of getting there. We are not able to see the future here, but I'm very curious if we will get to the point when, like you were explaining, it can do what we as humans need to do: we need to be very aware of the situation, we need to understand where the bias comes in, we need to understand how to nudge, how to redirect, et cetera, et cetera. I can imagine that today it could be empathetic.

But I wonder if it will get to the consciousness level of being able to understand the situation and, strategically, in a way that keeps coming back to the goals, ask the questions. So that's what makes me very curious: what's going to come next, actually. I even started to listen to some philosophy and physics podcasts on understanding consciousness and AI,

just to have this understanding of where it could get to consciousness. But if it gets there, that's probably the point of singularity and we are all screwed. So, you know, it doesn't matter if we are talking about it or not, it's going to happen. Alrighty, that makes sense.

Okay. It reminded me of one movie that I really loved when I first watched it. It was a long time ago, like 2014 or something like that. The movie is called Her. Yeah. Yeah. AI was already a term, but I don't think everybody thought about AI being a part of our lives the way we live it today. And you know, in that movie there was this voice of a woman who was basically just a computer talking to a man who was feeling lonely.

And it was so human. So much so that the man built a relationship with this AI and then was devastated about it not being a human, essentially. But anyways, the point is that it was already so real. And it's interesting that now we're living a life where we don't know whether we are getting there or not. Everything is in motion at the moment. But it goes to show that, theoretically, of course, it could become that smart...

I love that you're mentioning this movie, because there is one particular moment in it where you get kind of flabbergasted, or I don't know how to say it, but it's an aha moment, where you are observing Joaquin Phoenix and, I think it's Scarlett Johansson, talking to each other. And then he finds out that she is talking to 11,000 other people.

And that's the point where it's like, whoa, I am not original. Because we have this thing where I want to be special to somebody, especially to somebody I love, especially to somebody I have a romantic relationship with.

This is also one of the reasons, if you go really deep into people's needs: when they are having trouble with some product or service and they are trying to figure it out, they start to speak to a chat and they feel like they are not being treated well from the beginning, because you, for example, push them into the chat conversation.

You don't give them the special treatment they feel they deserve. And we also have very personal relationships with brands. We sometimes still go, what is this telecom doing to me?, as if it were a person. And that is basically what studies are showing us: this is how we treat brands, this is how we treat products. We can put a voice assistant on it, but there is this ethical question of, do we tell people that they are not special and that this is an assistant, or do we not tell them?

And then you have other ethical stuff that I don't want to go into, because there are far wiser people on the planet to talk about that. I love that you mentioned this personalization bit. We're still humans, we want to feel unique, we want to feel recognized. Sometimes we want things

to be tailored to us personally. I think that need is only growing when it comes to products. And so you do mention some interesting points; we'll see how this plays out. Let's also talk about the research side and the current state of it. So we just talked a little bit about preparing for the interview.

review. And then you mentioned an interesting point of possibly AI stepping into the role of also conducting the research. We were just discussing like navigating through the research and adapting to the questions and structure need in the moment. And I also remembered this point of very recently, I was watching one of the live streams of the Lovable. You know, Lovable is this vibe coding tool where you can just give a prompt and it will code for you some product.

And they were doing a demo with some other integration tools. One of the tools they mentioned was an AI chatbot used for sales. So they just programmed a tool that will call you and try to sell you something, basically out of the design prompt.

Which is crazy, because again, it's also AI chat. So there's already a tool which is all about AI chatting with you, and you can program the way you want to use it. For example, you have a CRM and you want to sell something to the list of contacts you have there, and this AI tool will make the call and try to sell, and you can give it a prompt for how you want it to do this.

And on the live demo, they basically did a fun interaction. It was fun, but it was also scary and spooky in a way: they prompted it like, hey, call me right now and sell me this pen as if you were the Wolf of Wall Street or something, I don't remember. That's the "sell me this pen" exercise, which is like, yeah, exactly. So it was literally calling and trying to sell the pen, just like that wolf in the movie. And so

it was scary, but also, I imagine you can theoretically program the AI to do the research in a certain tone of voice. You can personalize the whole conversation with the AI, and whoa, what does that even mean for us? What is your take on this? Do you really think that the whole doing-the-research part, the doing-the-interview part, could be automated, programmed, prompted to be done well? What is your feeling today? I will

drift away a little bit, but I promise I will come back, because you mentioned Lovable, and I was trying to build something like dynamic templates for research, as the preparation part, through Lovable.

And I did a very clumsy first thing with, I don't know, two or three iterations. And I already knew, this is not good, but I wanted to get some feedback. Because for me, it's exactly about how I prepare for the research so people understand that I'm not imposing my rationalizations, my biases, my hypotheses, and I'm not trying to validate something

just so it turns out nice or whatever. And it turned out to be really not good, but this is the level at which we should be talking about it: how do we create it in the right way? And that's, for example, for the designers, when we were talking about,

like, okay, your audience is mostly UX designers, designers, product designers. So how do I actually make sure, if I want to run research, that I am aware of my biases? Because you are unaware of your biases. It's the typical shit in, shit out, you know? You can prepare really well, but if you are not aware of and not

cautious about the biases you are possibly putting into it, of course they will come out. And then the tool, whatever it is, will probably ask really superficial questions that don't dive deep enough to understand the people and get the answers. So this is where, with your questions, I always go back to the preparation part: how do we frame it? How do we prompt it? How do we prepare it in such a way that we can allow AI to conduct it?

I don't know if it's going to be automated per se in the next years.

I think it's going to be more in the direction of augmentation. Like, I will, you know, check on it, I will tweak it, I will iterate on it while it's running live. For example, there is something called dynamic recruitment, where you keep preparing as the recruitment goes: you fine-tune the recruitment and you add more questions to the screener so you get better respondents. This is probably going to happen, and it probably also should happen, not for capitalistic reasons, but for the sake of augmentation.

I mean more on a philosophical level, because we all have identities that we want to bring into the solutions that AI is giving us. And you also mentioned the automation of recruitment, which inside of me I'm always clapping for: finally, this biggest pain, the thing we used to hate. I mean, I don't know about you, but I personally used to hate this every time I worked in companies; we all hate doing the recruitment. We just need it done now, so how do we do it now?

And you still sometimes have to wait two or three weeks just to get the right people. Because now it's possible to become speedier at this part, I'm very happy about AI stepping in. This is what I really hope gets automated as much as possible, because it's one of the things Teresa Torres is always mentioning: one of the first things you should do when you want to speak to customers is automate recruitment. So kudos to that. Secondly,

you know, doing this dynamic recruitment where I just give a nudge to the AI: these respondents are cool, but I want more of these specific people, can you tweak the screener a little bit more? So it gives me, for example, questions that I can add, and then I can decide which questions I want to add that bring me to the right respondents.

And it's just stepping in, and it's not even this very boring "oh, I will control what the AI is doing", but turning it into a better and more fine-tuned thing. And this is the point where, even in recruitment or within the preparation, it's not about how fast the AI can bring us to the goal, but how it can really help us get to much, much better quality. True. I mean, even

as designers today, I definitely see this helping us too, because we can sometimes stretch our design decisions through different scenarios, different edge cases, different thoughts we didn't have before, because we were limited to whatever was in front of us. But now it's like an outsourced opportunity, an outsourced brain in a way:

you expand your horizons, you expand your perspective, which is something that definitely applies to research, to strategy, to design, to coding, whatever, et cetera. I agree. It's fantastic that this gives us a better perspective in a way. Just to follow up and to close this off

on the voice AIs: do you feel like the AI could theoretically be the one doing the research in the next years, in five years, for example? I feel yes, because I could just give it a nudge or, you know, the briefing. I could consult with the AI about what I want to do. And I think some people will get used to it. Some people won't even know. You know, it's funny, because we live in bubbles.

And we think, for example, in our business that everybody's dealing with AI. I recently had a respondent, okay, he was 60 plus, but my mom is 60 plus and she's super technologically set up, she's the queen. But he was like, I can't see you, it would be nice if we had this conversation where I could see you. And then I realized he hadn't opened the browser window so he could see me; it was down below and he didn't realize. So

it depends on the type of people I speak to. If I'm really working on a tech project and I say this is a system that will ask questions, probably some people will even find it cool. Other people will struggle and will want to talk to real people. And it may even become a PR thing, like, oh, we actually send live researchers to talk to you,

you know, that's me stretching a bit. Just like it already happens with support, right? Pretty much every website has some automated AI widget that answers questions. But if you want, you pay for premium and a live person will join and troubleshoot your problem. So it feels similar there. Yeah, makes sense. I'm curious, even with what you were mentioning about the ethics,

it's been a big part of the conversation today. Moving forward, is it possible that the AI voice or personalization, and the way you prompt AI to do the research, will become so smart that maybe some people won't even realize it's AI? And maybe it's so reactive to what you're saying that you feel like you're interacting with a human, especially if it's a phone or

whatever digital conversation. So it's interesting, ethically, almost like with deepfakes today: when you see the content, you might not even realize that you're looking at a person who is not actually saying this, it's just been augmented with AI so it looks like somebody else is speaking those words, which makes no sense, or whatever. But yeah, exactly, we don't know what's going to happen there. It's a very interesting combination of factors that we are experiencing today that might go wrong or might go right. And

just to add to it, it maybe emphasizes even more that research is a skill. My research community will probably kill me, but I think research is really leaning more towards being a skill than a vocation. And, you know, I could go on for hours about how the researcher's role is changing.

But this skill, especially for designers, who are our closest buddies, or even PMs, gives this opportunity of: oh, I don't have time to do it, but I will prompt something well so it gets done well. And it gives me something, and I can play around with it and fine-tune it. My iterations are going faster. Because I was talking about the quality, but the world is telling us we are slow, and research is a slow discipline.

We need to adjust, or step up to the speed of it all. Absolutely. Yeah, that's very true. I think that's one of the classic answers that clients typically give us, right? Oh, it takes time, we don't have time, we just need it now. Obviously this is one of those defense mechanisms from any business when we are trying to pitch "let's do the research", and this is typically the answer we would receive. But yeah, indeed, with AI I wonder if we could get it to a different level.

We could make it so integrated that it doesn't feel like we need time for the research at all. It's just part of the everyday decision-making process, the everyday decision-making toolset, in a way. We can add it in multiple steps of research, and we can even do this thing of, oh, I give an input and I get an output. We have to think about that as designers too.

And I think this is something many people can relate to: if you are in your human body as a part of an interview, you have much stronger emotions about the interview, and you tend to be much more attentive to interviews which you attend personally. You empathize with the people.

And that empathy experience doesn't happen through research readouts, presentations, or even videos or podcasts. I tried everything, trust me. It happens through being there, having the experience. That experience is untransferable.

So we still need to be talking to real people. We still need to make the effort, at least in one research study, to see one or three or five people, to talk to them, to actually have this kind of physical reaction in the form of emotions, because that is something that sticks with us. And that is something which,

again, going to Teresa Torres, I don't know why I'm mentioning her so much, but she's great, so why not, she calls the magical moment that gets lost if we completely outsource it to AI. Yeah, I absolutely agree here. The same, exactly. Even in design, when you are collaborating with your stakeholders and partners, you cannot compare an

online whiteboarding session to an in-person whiteboarding session. When you're doing it together in the same room, at the whiteboard, everybody's, I don't know, screaming, shouting, waving hands, having ideas, interrupting each other. That's where the magic usually happens. Online you can have it too, but there's a certain etiquette to an online session: you unmute, you mute, there are technical differences, there are always distractions.

We are affected by the physical a lot, and that affects the results and the outputs as well. And so indeed, it makes me think of the empathy piece, and how online it gets a little bit blurred. It's still there, better than with AI at the moment, but it's still something to consider for sure. So thank you for bringing this up. What I'm thinking of as we're having this conversation, and actually thinking through the whole research cycle, right: we talked a little bit about the preparation and the recruitment and the running of the interviews.

And then the other part was the analysis. This is what Ioana mentioned in one of the earliest episodes of the podcast, and you referred to it already. I would love to hear your thoughts on the analysis part when it comes to using AI in research, especially the trends it feeds into. For example, I wonder whether there will be a point where the AI will be so smart that it could predict

that we don't even have to do the qualitative research anymore; it will just give us trends and we'll be able to predict from past experience or something. So I'm very curious whether those analyses translate into trends, into knowledge that we can solidly use. I don't think it's too ambitious. I'm just thinking, if you are asking about the now, and I will try to structure myself well

so it's actually understandable. The first thing is, AI can analyze and even synthesize very well what was said. And it somehow even gets the emotions. But if we don't put an AI on it which reads emotions out of, I don't know, the facial expressions or the gestures

of the person, it struggles, because sometimes it's the thing that is unsaid that you are leaning towards in your research. Or take contradictions, and people are contradictory. A very good example is: "Oh, I really want to have multi-banking and I'm trying to use it now, but I won't give you my data." So it's like,

you know, it doesn't know how to react to such a contradiction from a person, and it really just repeats itself. And the second point, what I really feel we should be considering more, is the GDPR, which is like the four letters that everybody hates. But you are

giving away personal data, and anonymizing personal data is really hard. If I look at it from a different perspective, what I'm giving to any kind of tool, or even to repositories to run through AI again, or to analyze a video again, are just the parts. So it doesn't have the connections. That's the way I do it now, because I haven't figured out

a better way. But the real ethical question is there: how much data are we giving it? What kind of data, and what are the connections between the data? Because those types of connections AI can make, like, oh, this is Peter, Peter told us this. That's also something we don't fully understand, and I know even scientists don't really understand anymore how it connects data. Because what it does well now is analyze one interview perfectly. It can

perfectly analyze another interview. But if you do this thing of, oh, let's analyze these five interviews together now and try to find the common ground or the highlights, it does the job pretty badly for now. I think it will get there, though.

But we will need to figure out how the input to AI is not going to be just on a verbal level. We now have language models, so of course it's analyzing language. But it will need to also analyze the emotional and the physical level,

and then we will maybe be able to have the most constructive analysis and synthesis. Which brings me to probably the last point of the research process: how do we interpret that data then? And that's where I feel the problem you mentioned comes in; let's imagine a five-interview analysis might not be as good in terms of quality, and maybe this is where the interpretation

part is where you could see some problems, where it mismatches what you would hope to get out of the synthesis. Because I'm thinking, as you were speaking, that when the AI is analyzing different sources of data, sometimes conflicting, sometimes different, okay, let's imagine there are

hundreds of themes derived from these interviews. Now, as an AI, you might struggle to understand what the priorities are, how important this is versus that, what weights are given to those different sources of input, or just themes, and which ones to prioritize and distill it down to. Because when I try to analyze some research or some sort of data, I sometimes get general answers like "make it more usable, make it more user-friendly".

Obviously that's part of the job, right? So you sometimes see those points which are obvious, which we don't have to state. What's the depth of it, what's the important bit here? That's why I see AI struggling to really prioritize well. Maybe you just put a really good point on it, because the analysis is something you can iterate on pretty well. Sometimes I even fight with it, my go-to is GPT, always, and I'm like, what

the heck are you doing here? It's like, no, this is not an analysis. It tries to interpret even if I'm not asking. So, for example, prompting it with "don't try to interpret" is really important. And sometimes I even use things like "don't be superficial about it",

"don't jump to conclusions", because I want an analysis of what is happening. But as it's a language model, some people will tell me the same thing in very different words and it just can't make sense of that. So that's why the analysis part, although for people it's the most natural thing, like, of course I

have data, I will analyze it with AI and put it in there, it's the most natural go-to, but I would say that's the part I would go to the least. I would use it in the phases before much more than in the analysis itself. And not because I'm a researcher who loves coding; I love the interpretation part more, and the how are we going to implement it. The analysis and

synthesis, and asking it for some solution, is the worst part right now with AI. Yeah, I understand what you're saying here, and it does partly answer my question. Do you think that AI will get there? Maybe it will, but at the moment it doesn't seem like it's getting there in terms of having such smart

analysis and thinking in trends. So basically taking it to the next level: from analysis and interpretation to trends to prediction. It sounds like it's too early to even imagine this right now. It is. It's fed with the available data out there, and that's the GDPR problem and its downfall as well. Of course, you are not giving it the data on how you are going to solve your company's problems, because everybody's trying to solve them

in their own ways and address specific problems with specific solutions. The superficiality of the solutions it gives is just enormous. And this is actually good news. We still have to be creative. We still have to bring our intuition. For example, my best cooperation with designers, no matter which role I'm in, is with designers who still rely on their intuition and not only on data,

data-skeptic in a way, because we have so much data, you know, and you can abstract the data pretty well-ish with some iteration from the AI. But it's about knowing: okay, but I know, and I feel, and I have this experience from the craft I've been doing for some years, that this could work

way better than what AI is telling me. So yeah, we will see how this goes, because I feel like there is still a lot of talk today on this trend part, like why we would even need research moving forward. But you made a very good point that it's only as smart as the data we're giving it. And if we're not giving it new data, up-to-date data, new concepts, new thoughts, new whatever, and we are living in a very dynamic world, as we have established, if we are just limiting it to what it already has,

then the predictions will only be as good as the data we gave the AI. And also, regarding what you said about GDPR: possibly, moving forward, there will be more and more data protection mechanisms. I'm not sure how the political situation will shift, but I feel like there will be restrictions and there will be ways to impose limits on how we use data. How do we protect data from machines? Anyway, the point is we cannot just

fully outsource it, like, here's a bunch of data, now be predictive. And it's, for example, what I see when I'm trying to analyze something: it depends how much data you are giving it. If it's a small usability test, of course it can chew on it. If it's more, if it's 40 interviews, it really has a problem. Don't try to analyze surveys, because it can't count. It's just unbelievably bad at mathematics for now. I am the worst at mathematics in the whole world,

but even I could see, "oh wait, this is just a really bad result", and I'm sometimes talking to it like, "oh okay, is this bad? Did you really deliver me this bullshit?" And it's like, "oh yeah, sorry, I did." And I was like... [laughs] So the typical thing of, you know, explaining to me

what kind of mistake it made that I had already seen. So it's fun. Wow. Give me an example, I'm very curious, because I don't think there's a big notion of AI being bad at math. I would imagine many people use it for math, like, oh, I have this challenge, can you give me different ways of solving it?

And so I'm curious what issues, maybe an example or something that you thought of. I mean, if you give it an Excel sheet just to count numbers, it makes a lot of mistakes, or at least it did when I was trying it. Or if you are trying to combine some

data, like, okay, I have this indicator, I don't know, a SUS score, the System Usability Scale score, how would you connect it to this other score? It's not doing a good job, no way. And sometimes it's really the basic math. I think we anticipate a computer system to be good at math because it's built on

math. But we are talking to a computer which is built on language models. That's why it's called a language model. It does well with languages; the pace at which it adapted to different languages across the world is unbelievable, right? But mathematics and facts, the hallucinations, that's still where it lacks the most. I see that point. And

it's a very good reminder of the fact that we need to remember how this model has been built. Indeed, it's something where we can easily fall into the trap of feeling, oh, it's smart, so it's smart at everything, so I'm going to just use it as my brain now and forget about my own brain.

It's getting smarter super fast. It's also tricking us. The moment I started to talk with it like I'm talking to another human, and started to brainstorm, not just prompting it, but thinking about stuff together, I have a relationship with ChatGPT now, right? Which is a little bit scary. It would be the first thing I'd want to outsource, like counting my taxes or whatever, but I still count way better than it does for that.

Yeah, that's true, that's true. Just to close off this topic of interpretation and analysis: the statistical part of it, the way it looks into data, is just not there yet. Okay. Just to conclude our conversation, here's the last question for you, based on all the things we have discussed today, right? So:

preparation, recruitment, running and conducting, analysis, interpretation, and moving forward, possibly transfer, whatever. What do you think AI is really good at right now, what should we safely use it for, and what should we be wary of moving forward? As designers especially, because we are maybe not always in the weeds of research, we're not always up to date with everything that's going on in research; we're mostly just using it as a toolset.

But you, as somebody who's very deep in research and uses this pretty much every day, what do you think we can safely rely on when it comes to AI in research? And also, what are the things we should be very careful about? What are the weakest parts, where we should never, at any cost, use it for research today? I would say I would go with what I'm trying to preach. And it might be that in half a year we talk together somewhere and I tell you something different, because

things are changing so fast. But I would say, take it as a great preparation tool. Definitely try to speak to it like you would speak to a researcher, or a research-thinking coach, guide, whatever, where you are trying to develop critical

research thinking, meaning: oh, what do you think, is this question biased? What bias am I applying to this? What answers can I expect from this? What do you think? So this is great for the preparation. It's also great for surveys when you are using it in such a critical way. If I'm talking about methods, I would be very cautious with the analysis part of everything possible, no matter if it's interviews, surveys, or outcomes. That's where

we have to become the most critical and the most brainy. And I would say having it as a brainstorming buddy for designers is great, but I would even vote for, you know, this is why I am in this world, because of design. I started with the thought of, I want to be a design manager, I want to help designers do their jobs.

As designers, we are creative, intuitive, and still systematic. So going back to our nature when it comes to the interpretations and solutions is probably very cliché-level advice, but I will say it anyway. No, I think it's definitely what many people are mentioning today. So it aligns with

how people feel. I feel like the creativity part of it, the intuition part of it, leaning into empathy, et cetera, these are the things that are hard to automate today. We can outsource parts of it, we can stretch, we can brainstorm, we can bring in other perspectives, but creativity, intuition, and empathy are definitely human skills that AI is not at yet, which is good for us.

Well, I guess I would leave it there. Anything else you wanted to mention before we wrap it up? Anything that you feel was important to mention in this conversation but we haven't gotten to? Just one small, well, it's not a small thing, but one maybe nudge or inspiration, because I don't see it happening with my clients as much as I would love to.

It's not researching with AI, but for AI. And really, as a designer too, it doesn't matter if you're in UX design or product design, try to figure out how your users are actually accepting and approaching AI, because we don't yet have enough data on that. And it might be very specific to the company you are working for. So wrapping your minds around, okay, what is actually the acceptance level of our users and customers,

might be a great thing to do, just to get a glimpse of where you should be heading with this trend as a company or as a product. So in other words, should you even have AI in your product? Yes, exactly, because it's the typical thing of, oh, this is a technology, let's use it. And I tend to do it as well. So yeah. Yeah, I know what you mean. If Ioana were here, she would scream, yes!

She would always advocate for that as well. So I totally hear you. Thank you so much for this great closing note. Alrighty. Thank you, everyone, for listening to our conversation all the way through. If you have any questions, please let us know; you will find the box under this episode. Also, you can find other related episodes in the show notes, or just scan through the feed and you'll find more. This topic is obviously very hot these days. And if you enjoyed this episode, please let us know through a rating on any podcast

platform of your choice. We really appreciate your support here. And thank you. Thank you. Have a good day, everyone. Bye bye.