
#138 Live Maven Session: "Transition to an AI Design Role: Skills + Strategy"

2025/6/12

Honest UX Talks

Chapters
AI is changing design by introducing probabilistic systems, adaptive interfaces, and a focus on human-machine collaboration. This shift requires designers to embrace ambiguity and design for uncertainty, while still prioritizing human needs and ethical considerations.
  • AI is the third UI paradigm, reversing the locus of control.
  • AI design requires working with emerging tech and embracing ambiguity.
  • AI designers are essentially regular designers using AI as a material.
  • The core skills of a designer remain crucial in the age of AI.

Shownotes Transcript

And then another thing is accommodating agentic behavior. We've seen it all over the internet. Everybody's talking about AI agents versus agentic behavior, agentic AI. So the thing is that, yes, AI will perform narrow tasks and there will be smarter AIs governing these smaller AIs with narrow tasks.

But this is a very, let's say, unpredictable and interesting behavior, because agents are autonomous in theory. And so they will make decisions on their own. How do you design to accommodate this, let's say, autonomy, right? This agency that agents have. Hello, designers, and welcome to a new episode of Honest UX Talks.

This is a one-of-a-kind edition where we're trying something different. Last week, I delivered a live talk on Maven, a learning platform that I might join as a teacher this autumn, we'll see. I gave a talk called Transition to an AI Design Role: Skills and Strategy, about how to find a job or move your design role into the AI space. It was a very well-received talk, and

it was a pleasure to build it and deliver it. I'm trying to update it based on all the things that are happening in the market: vibe coding emerging, new tools exploding. Everything is changing very quickly, so I'm always evolving this, let's say, career advice for how to find a

job in the AI space, or how to evolve as a designer moving closer to the AI technological revolution that we're all a part of now. So I realized: hey, I'm giving this talk to the community of Maven, but what about my listeners from Honest UX Talks, who are always there and want to learn? Why am I not making this available to them as well? So here it is. This is our attempt to bring you closer to all the interesting things we're doing and give you a, let's say,

short bootcamp into how to move closer to the AI space as a designer. So if it's going to feel slightly off at some point, it's because this was a live session. We recorded it live last week, and it might feel a bit unnatural for the podcast format at different moments, or maybe not. I will also share the slides for the talk in the show notes, so you will find them there if you want to move along with the

conversation as you listen to me move through the slides, to help the messages make more sense, or to have a visual for them. But yeah, I hope you all enjoy this. I hope it's as useful as it was in the live version. And let's see how this experiment goes. Let us know in the comments if it was coherent and valuable

for you. And yes, submit any new topics for the future. Before we share the live recording from last week, I just want to take a moment to thank our sponsor, Wix Studio, for being such an amazing sponsor. As some of you active listeners might know, I'm currently

building my agency website in Wix Studio, and sometimes I want to do very special things, very sophisticated things, because we are the agency of the future, we're building for 2050, and so the website needs to be very, very professional and edgy and

just state-of-the-art. When I have ideas that I don't really know how to bring to life, I discovered that Wix has this academy, Wix Studio Academy, and it's just filled with tutorials that are very easy to follow, very easy to unpack.

They have tutorials on how to create animations and interactions. And for me, that's extremely helpful right now, because I want to make my website shine as much as possible. They have tutorials for creating a half-sticky, half-scrolling design. And they're short and very to the point. There is not a lot of going around in circles, so they're very easy to implement quickly. You don't have to spend a

lot of time to get it. And yeah, I'm just happy that they have this resource available to help you make your design state-of-the-art, very proficient, agency-level. So thank you, Wix Studio. And now let's go to our live session from Maven, and enjoy it. Hi, everyone. I think we're close to the announced hour to begin this. I'm still going to wait a couple more minutes. As you've seen, I'm testing my

screen share and everything. I don't know. I was on a 10-day holiday in Canada recently. I also gave a talk there, at Interface Quebec. And coming back with the jet lag, I just landed yesterday, I feel so much confusion. I hope this session goes well.

Because I feel like I'm just in survival mode right now. So bear with me if I have trouble sharing my screen and getting the technicalities right. But the content will be there because I'm very excited to talk about all the things that we have on our agenda today.

So the question, the context that we're here for, is this question of: is AI design a thing? And are we all becoming AI designers, or just some of us? What's going on with this role? Is it really a role? We are being pulled into AI conversations, be it on social media or in our jobs and the roles we have, the things we're doing in our own personal workflows, where we're experimenting with

smarter workflows, trying to support our thinking and even our creativity with the help of AI. So we're all running some sort of experiment in one part of our design role or another. But design is currently also expanding its paradigm. Jakob Nielsen talked about AI being the third UI paradigm, which completely reverses the locus of control. We would instruct interfaces what to do, but

But now we just give them goals and they transform those goals into an output independently, autonomously. So it's a different way of interacting with systems and computers. So it's expanding. The design field is expanding to include new things, behaviors, prompting, adaptable systems and unpredictable systems and so on.

So in this context, what is an AI designer? Is it something that we all are? Or is it just that we have the mindset of an AI designer? This is what we will be exploring moving on. When it comes to outcomes and expectations from this session, I just want to make sure that everyone is aware that we have a limited time. I think we could stay a little bit after to respond to Q&A and have conversations. I care about that a lot, because I'm wondering what's on everyone's chest and mind, and what

is the, let's say, biggest pain point when it comes to understanding what's going on in the AI space right now. But the outcomes, or the goals I have for this session, are to help everyone understand what AI design really means and how it's different from conventional design, if it's different. What are the new challenges when we're designing AI interactions? And I'll share my story at Miro, where I was hired as the first AI designer. And then

we're going to briefly go through some principles for when you're designing for AI, then the process of designing with AI, and how to stay relevant in this new age of AI. And when it comes to hiring signals, we will be looking at some of the job posts and new roles that are emerging, especially within large corporate tech. Many companies are hiring product designers for AI, or AI designers, or sometimes it's just implicit that you'll be working on AI.

But we will go through that later. So, are we all becoming AI designers? And should we all become AI designers? Is AI designer even a real role? At this moment, it feels like it is. Some companies hire for this exact title. Some people brand themselves like that on LinkedIn. I also applied to a Mural role that was called AI Product Designer. So right now, yes, this term exists. It's being talked about.

I don't think it's completely accurate, in the sense that an AI designer would design AI, meaning that they would design the technology, the models, the, let's say, technical parts, the system of how that AI model or language model works. So we're not technically designing AI, but we are designing for AI interactions, or interactions that are AI-enabled. And we are also designing with AI. We'll go deeper into that.

But something to bear in mind is that my bet, and this is more of a personal take, but I feel it's quite representative of what I'm noticing in the market, is that this is a transitional term. It's a term that came from a need for companies to distinguish between people who have some AI experience:

if they want to hire someone who has prior AI experience, they're signaling the job as: we're looking for an AI designer, someone with some AI background, some AI knowledge. So, we're looking for product designers in AI, right? It's a label born from this need to differentiate between people who have already worked in this problem space and people who haven't yet.

But in a few years, every designer will have touched AI in some way. We have already touched it, all of us, right? Even if it's just in our own workflow. But in the future, we will be working on products that use AI technologies in one way or another. So essentially, we will all become AI designers. So the need for this term

might not be there in the future, because AI designers are just regular designers that are using AI as a material. So it's very similar to saying, I'm a product designer working in, I don't know what industry, right, in the fintech industry. It's very similar to this kind of mindset, when you think about product designers working in an industry. This is another type of problem space.

Of course, it's also a new paradigm, so it's a bit different, but you can compare it, right? So it's having knowledge in a certain domain, and in this case the domain is AI. But what all people who are working in the AI space right now have in common is this capacity to work with emerging tech by embracing ambiguity. So this is one of the things that's

the highest, let's say, descriptor of the AI role: you're working in ambiguous fields, and you operate with new concepts, earning user trust, designing for uncertainty, this invisible logic that's different from the conventional things that we decide, design, and then implement on a very certain, deterministic path, right? But

these are new materials, and the craft is old. So we will move forward in this conversation looking at some job posts, and you'll notice that big companies still hire design talent with the conventional description. So what they require is still a person who has very good design expertise. But sometimes they will add notes such as: has experience prompting, or designing for, I don't know; there are some fine lines

that add to that description. But essentially, it's just the old craft mapped to a new set of problems. And in that way, we're all prepared for this, right? We're inherently prepared as designers to operate in any space with the frameworks and thinking and mindset that we had in the past.

Okay, so if the craft is old, how is AI different? If we're doing the same thing that we used to do as designers, why is this called differently? Why is this new? How is this new? And if you compare traditional design to AI design, there are a couple of big fundamental differences that kind of impact the way we need to adapt our thinking to solve for these problems.

First one is determinism versus probabilism. So deterministic versus probabilistic. In deterministic systems, the behavior of the system is very predictable because we design it, so we decide it. With non-deterministic systems, AI systems,

we don't know what's going to happen, right? It's just unpredictable. It's non-deterministic. It's a probability. We can design for that probability, but other things can happen. So the way we design should accommodate any type of response, outcome, the, let's say, infinite possibilities that AI can come up with when resulting in an action or an answer and so on. So traditional design is clear. It's predictable. The flows are as designed.
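The deterministic-versus-probabilistic contrast above can be sketched in code. This is a toy Python illustration, not any real product's logic; all function names, outcomes, and weights are invented. A traditional flow maps each input to exactly one designed outcome, while an AI-style flow samples from a distribution of possible outcomes, so the design has to accommodate every branch rather than one scripted path.

```python
import random

def deterministic_flow(button: str) -> str:
    # Traditional UI: the same input always yields the same designed outcome.
    routes = {"save": "file_saved_screen", "delete": "confirm_dialog"}
    return routes[button]

def probabilistic_flow(prompt: str, rng: random.Random) -> str:
    # AI-style behavior: the same input samples from a distribution of
    # outcomes, so the design must handle every branch, not one scripted path.
    outcomes = ["summary", "bullet_list", "clarifying_question", "refusal"]
    weights = [0.6, 0.25, 0.1, 0.05]
    return rng.choices(outcomes, weights=weights, k=1)[0]

rng = random.Random(42)
# The deterministic flow is repeatable; the probabilistic one is not.
results = {probabilistic_flow("summarize this board", rng) for _ in range(20)}
print(deterministic_flow("save"))
print(results)
```

Twenty identical prompts can land in several different outcome branches, which is exactly why "the flows are as designed" stops being a usable assumption.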

But in AI design, the behavior of the interface and the way we serve it to users should be adaptive: adaptive to what the system outputs, but also to how the user responds to that output. So many AI systems are currently conversational; you can think about Claude, ChatGPT, LLMs in general. They have a conversational interface

in which we take turns with the AI to have a conversation, and that conversation could go anywhere. So the design should be flexible enough. And in the future, and even right now, this adaptive behavior includes multi-modality, right? It adapts to moving from creating an image to then talking about that image, turning it into a video, maybe transforming it into a document, and so on. So you move from one format to another, and that has to be a very fluid experience.

In traditional design, the logic is known. AI design challenges us with black-box behavior, which is something all designers kind of struggle with: we can't predict what's going on, because the behavior is not explicit even to us as designers, not to mention to the user.

When you're testing traditional design, you're testing for correctness and for usability, let's say. But in AI design, you're testing for the competence of the system, the reliability of the system, the repeatability of the answer. So now testing is not only more difficult, but much more complex. It's not enough to have a proper design experience.

But the technical performance needs to be there as well in order to have a proper satisfactory result as a user when you're performing a task, right? So if the design is very well designed and thought through, but the technical output is crappy, then you're going to have a disappointing experience. And the only way to test this

for the quality of an experience in the age of AI is to build it, unfortunately. That's why we're seeing a lot of experiments in the space happening live: there's no other way of knowing how this will end up in the real world, because you have to see technical and design coming together in the wild.
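One way to make the "repeatability of the answer" idea concrete is a tiny evaluation loop: ask the same question many times and measure how often the system agrees with its own most common answer. This is a hedged sketch, not a real eval framework; the stub model, function names, and answers are all invented to stand in for an actual model call.

```python
import random
from collections import Counter

def stub_model(prompt: str, rng: random.Random) -> str:
    """Stand-in for a real model call: answers vary from run to run."""
    answers = ["Paris", "Paris", "Paris", "Lyon"]  # mostly right, sometimes not
    return rng.choice(answers)

def repeatability(prompt: str, n: int, rng: random.Random) -> float:
    """Fraction of n runs that agree with the most common answer."""
    counts = Counter(stub_model(prompt, rng) for _ in range(n))
    return counts.most_common(1)[0][1] / n

rng = random.Random(0)
score = repeatability("What is the capital of France?", n=50, rng=rng)
print(f"repeatability over 50 runs: {score:.2f}")
```

A deterministic flow would score 1.0 every time; anything lower is the kind of behavior you can only discover by actually running the built system.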

In traditional design, we're talking about user-first, but now it's more of a hybrid collaboration between user and system that are co-shaping the experience, right? So if you want to sum this up: as designers, we always design for humans, but now we're designing for this combination of humans, machines, and uncertainty, ambiguity. So these are the main differences: moving from human-only to human-with-machine, with all this element of ambiguity.

I'm talking a lot and too fast, and I don't know how to breathe. Okay, so what qualifies as an AI designer in this context? You can think about it from three main perspectives, and then judge if you can call yourself an AI designer, or what's missing from being able to call yourself an AI designer.

The first one is that you're designing AI features, or you're working on a product that has some sort of AI surface, AI component, AI-enabled solution, and so on. So you're, in a way, touching products that have AI in them, right? Copilot flows, intelligent assistants, chatbots, voice assistants, for example.

Maybe you're working on some other AI powered aspects very specific in your product. This is one component of the AI designer role as it stands today. The second one is you use AI in your own workflow. And I think this is the more popular, let's say pillar. Many people, yes, they use AI in their workflow already.

They use it for research, maybe ideation, brainstorming and so on. And then the third component is you are now playing with new materials and new kind of fields, let's say, new problem spaces.

You think about data, you now think about ethics in the context of machines and the way we build these machines. You now deal with unpredictability. So you have some sort of new space, new playing field that wasn't there in your role before. Like data is a very good example and we will go through what it means to work with data science or ML engineers. And the reality is, and this is a note that I care about, is that most AI designers today are winging it.

And that's okay. There are no clear recipes for the role. Nobody really knows what's going on. I've been talking to large companies, from Miro to Atlassian and so on, and everybody feels like they're going in the dark. So this role doesn't yet have clarity. It doesn't yet say: this is what you're supposed to do in this role. And when we look at job posts, you will see that there's a lot of variety.

Yeah, there is no single definition. And we weren't able to find a single definition for what a UI designer or UX designer means in our history as designers either. So this new ambiguous space is definitely going to take a while to disambiguate and clarify. But if we were to break it down, being an AI designer, or whatever we want to call this, is the combination of two main things:

you're designing for AI and you're designing with AI. And now let's see what these two parts can mean together. And I'm going to start with a real world challenge, which was my experience at Miro. As some of you might know, I was the first AI design hire at Miro. I worked there for a year before deciding to open my own consultancy studio. And

It was a very open-ended space, especially due to the nature of Miro as well, which is canvas. It's infinite real estate. It's the world of possibilities. You can do anything with AI, right? And so much open space.

And so the open-endedness made it very hard to articulate a clear vision and a clear path of experiments. What we resorted to doing is labeling possibilities by verbiage, so sharing the vision in verbs.

which meant either AI can transform, AI can evolve, AI can edit, AI can enhance, AI can translate, and so on. So we were relying a lot on verbs to understand what are the possibilities of what AI can do, and then how do we surface those possibilities in the product without making it a disruptive experience. Because the problem with Miro was that

It was a very loved product. Everybody loved it. So whenever I went to talk at conferences or whatnot, people came up to me to tell me, I love Miro. It's such a great product. Let me tell you how I decorated my baby room using Miro and so on and so forth. So it was a product that worked.

And people loved it. So this is, I think, one of the hardest challenges as an AI designer: to come into a product that's loved and try not to break it with AI, or make it flashy, sparkles everywhere, just to promote that they have AI without bringing real value. So we resorted to verbiage: transform, adapt, generate, create, and so on,

with the help of AI. And this is how we built, let's say, the pillar of our strategy. And within these verbs, we came up with a set of experiments that had corresponding questions. So we couldn't know which was the North Star. We couldn't tell where we were going to get with AI. What will AI be like at Miro in five years? We just...

had to articulate a set of questions one step after the other in order to try to answer the big bets, understand, disambiguate some of the biggest possibilities. And so we essentially had a roadmap of questions that translated into experiments in the product.

Because, as I was saying, we had to build them in order to test. We had to build a chat interface, a side panel in Miro, in order to understand that it's not the ideal way to surface AI; that a more elegant way would be contextual, invisible, not even stated as AI, but more of a subtle experience of AI. And so we sunsetted the chat, because it didn't feel like the right paradigm in Miro,

where you come to be creative and just get all over the place. You don't want to be disrupted, moved into a new panel surface to do your thing there and come back. It didn't feel like a fluid experience. And then another very important thing, or let's say challenge, that you have when you're designing for AI is this concept of metrics. What are we incentivizing as a team? I ended up spending a lot of time just asking:

would it be really good if people used more AI in Miro? Is that what we want? I mean, if people ended up using more and more AI, is that the right sign for Miro? Or should Miro still be a product where you get creative, work in a fuzzy, messy space with all sorts of formats and ideas, and just plunk them on a board and play around with them, transform them, and so on? If AI usage went up,

let's say that's the metric: more people using AI. Is that a good thing to incentivize? Is that a good measure of product success? So in the age of AI, what we want from AI is a question that product designers working in this space struggle with. You don't necessarily want more usage.

You don't necessarily want more people prompting or a higher number of prompts. So how do you figure out which is the right way to just measure this experience, evaluate the quality of an AI experience once you've built it?

And then that's where new elements came in. Ethics was always part of our job as designers, but now we also have this concept of meaningfulness, which was also there before, but in the age of AI, when we're seeing so many meaningless experiences thrown all over the place, we have to get back to making sure that we're putting things there that have value. So an example question: should people summarize sticky notes?

Isn't the point of a sticky note to read it? Because it's already a summary. It's already a small, short insight. Do we want AI to come in and transform a number of stickies into some statistical insights? Or do we want people to read through the stickies one by one and reflect on them and then have ideas?

based on each of the insights in there. And so another challenge that I had at Miro was aligning AI with the rest of the product as it grew. I wasn't working in a vacuum; AI should integrate with the rest of the product, with the rest of the ecosystem. And that ecosystem was evolving very quickly. We were launching docs, we were launching a lot of ways to rearrange and restructure your boards, and so on. So how do you, as an AI team working in a large company, keep the AI experience

updated with the rest of the product evolution? Because AI is a horizontal capability, it should serve many use cases that happen in the product. And so when you have distributed teams, right, working on different problem spaces, how do you make sure that AI permeates those spaces? And how do you make sure that what you're designing supports the use cases of those problem spaces and product teams?

So in the course that I'm planning on launching with Maven this autumn, I want to invite guests from large tech companies to share their specific challenges, people from Atlassian, from Canva, from Figma. Because I feel that my own experience may speak to many people working in the industry, but I also feel that these challenges are very specific to your particular company and problem space. They're not unique, but they're also, in a way, specific. So what

the different struggles are that people working in this space have would be an important part of the course that I plan on launching. Other challenges in AI design: this one I already mentioned, designing for probability, which essentially means you're designing for non-deterministic behavior... We're doing awful on time, by the way. I hope you can all stay beyond the 30 minutes, because

I miscalculated how deep I can go on each of these slides. Designing for probability, non-deterministic behavior. And then another thing is accommodating agentic behavior. We've seen it all over the internet. Everybody's talking about AI agents versus agentic behavior, agentic AI. So the thing is that, yes, AI will perform narrow tasks and there will be smarter AIs governing these smaller AIs with narrow tasks.

But this is a very, let's say, unpredictable and interesting behavior, because agents are autonomous in theory. And so they will make decisions on their own. How do you design to accommodate this autonomy, right? This agency that

agents have. Human-AI collaboration is this role fluidity: now the human is giving input, the AI uses that to inform its thinking and the next steps, and so on. So it should be a very, let's say, fluid experience. And how do you facilitate that, while keeping in mind that most AI experiences will move into multi-modality, right? So

ChatGPT is the best example: you text, you can transcribe by using voice, you can talk to it, you can upload images, different kinds of PDFs, and so on. So the way we converse with these systems will involve different types of formats and sources moving forward, and it already does.

And then how do you build systems that offer trust and transparency and expose their rationale? They appeal to demonstrated thinking, so they use these mechanics of showing how they achieved a certain result. And then also, how do we deal with bias that might come up?

in these systems, and the data they were trained on, and so on. So these are some of the most common challenges that we can face in AI design. Now, some principles that we can keep in mind when we are designing for AI interactions.

So start with the people. Of course, it's what we're supposed to do as designers, but in the age of AI, many people are technology enthusiasts. They just go along with whatever technology can now do and build it because it's possible. But we still have to put the people at the center of this human-AI collaboration, in the driver's seat. We have to make sure we're enhancing humans, not replacing the parts that are meaningfully attached to human nature. Reinforcing control. And

building for everyone, so making sure we're not deepening gaps in society by making the powerful more powerful with access to prime technology. So ChatGPT needs to be accessible, affordable. Everyone should be able to use an LLM in order to remain competitive and not deepen that gap between those who have access to resources and the underprivileged.

Trust and transparency: exposing what goes on under the hood, showing accountability. So building systems in a way where they degrade gracefully and apologize when they're wrong, so the user feels respected when interacting with these AI systems. Mutual growth means: if the system is built in such a way that it educates me on how to make the most out of it, then it will learn more from me. We're teaching each other how to make the most out of this collaboration.

Fighting to clear bias essentially means feeding representative data into the system, making sure that we clean up the data, that we feed it a mirror of what we want this technology to reflect. And promoting good: just asking all the time, should we build this? Could this be solved in a conventional way? Does it really have to be AI? Can it harm someone in any way? How can we make sure that doesn't happen? When it

comes to the second part, which is designing with AI, this is an example process that is augmented with AI. But the way you should look at this slide is that it could never happen on its own. This is a process that's governed by a person. There is a person that says, I'm going to start with this, and then I think we should do this, and then this. And I feel that we've managed to reach

a sufficient level of confidence before we move into the next step. And if we haven't reached a sufficient level of confidence, we're going to go back and learn more. So this is orchestrated by people. AI doesn't have the complexity and capabilities to take in so much context.

So the system of a problem, the system around a problem, is a very complex organism that AI can't, and probably will not be optimized to, grasp in the next couple of years. We're mostly seeing narrow AI, AI that solves small tasks, and then more complex tasks with multiple AI agents. But I think we won't see the level of complexity that

comes with solving a complicated problem. And so this is something that you can start experimenting with. This is an interesting exercise I recommend everyone do: map out your own process, and then try to see which are the parts where you should augment your work with AI,

where it's feasible, where AI can do well, where it can't and shouldn't. So just reflect on it, which I think many of us have been doing intuitively anyway. But one of the more important, let's say, transitions or transformations of our role that we will be witnessing, or are already witnessing, is this

moving from producing design to curating design. So we will not be generating UI, because now there are a lot of builders; we're going to be producing live solutions instead of static prototypes and screens. And so producing design will not be what differentiates us as designers, which never was the case anyway. There were just maybe misunderstood conversations in the industry

about the role of a designer being to produce pixels or screens. It was never about that. But now it's going to be even less about that. So we will not be producing much of it ourselves:

many companies are building their internal text-to-design widget, which is trained on the design system of the company. So anyone can go there and prompt, and it produces design in the design language of the company, in the design system. At that point, we're talking about solving for edge cases, because the basic interactions we will be generating with our internal,

design-system-trained models. We will generate design just by prompting, or in an even smarter way in the future, I don't know. But we will be curating these solutions, and we will still be the ones who say: this is the vision, this is what needs to happen, this is what we're solving for, this is what we choose from these options, or versions, that AI has produced. Does this mean that AI will replace me? No.

As I was saying, AI can't figure out what problems to solve yet. It doesn't have that intentionality, that proactiveness. There's no system that says: how can I make the world better? Let me fix things for you. It's not there yet; nobody's optimizing for that. And plus,

designing a complex solution requires a multidisciplinary effort. We're talking about many roles coming together: psychology, research, motion design, UI design, information architecture. We're bringing a lot of angles to these problems, and AI is not optimized to have this kind of informed conversation between different perspectives that challenge each other. It's stereotypical; it provides, let's say, an average of these perspectives, but it doesn't bring in nuances as

a team would. And it still needs a lot of guidance; it has trouble understanding context; it also lacks empathy. I used to hate the word empathy, because I felt it was overused; everybody was talking about it to the point where it wasn't clear what it meant. But now I feel it's very important in the age of AI. It means that I can understand what this person feels. I have an emotional intuition that AI can't even simulate.

And obviously, AI can't feel. And so we are well positioned, best positioned, or, I would go as far as saying, uniquely positioned: we are the only ones who can understand how a person feels, or how a group of people might feel, and how to design an emotional experience for those people. And then, AI also doesn't understand consequences, doesn't know what it doesn't know, and it's trained on existing data, right? So the non-existing data, the non-existing perspectives,

are missing from its thinking. So there's a lot of unknown around it, and it can only design in the ways it was trained to know things and do things. If you want to prepare for the revolution, though, I have a couple of pieces of advice for how to start.

Start prototyping with AI-first tools: v0, Lovable, Cursor. If you haven't built a solution with them yet, it's going to be fun; I recommend everyone start there. Integrate prompt engineering as part of your UX craft. And I think we're all doing that: we're working with GPT, or an equivalent LLM, in some capacity; we're prompting it; we're trying to get the best out of it. There was an interesting study that showed that

The more you prompt, the deeper you go in a conversation with an LLM, the weaker its responses get. So there was this misunderstanding that you have to refine your prompt and then have a bunch of conversations to go deeper into it until you get what you want. But after a point, and I've noticed it,

empirically, firsthand: after a point, the answers get worse, not better. But I think it's an interesting experiment to try prompt engineering and to get results from AI that are as good as possible by giving it the best possible context. I think that's the key. There's a saying in the AI space: garbage in, garbage out. The better the information, the context, the ask you give the AI, the better the answer, obviously. Another

piece of advice is to become best friends with data scientists, ML engineers, and product teams. If you have them in your company, start spending time with them, because at Miro, for example, I was working very closely with the data science team to understand what's possible when I'm building a solution. Do we have a model that can do this? Do we need a new model? Do we need an internal model? Can we use Midjourney, or GPT, or Gemini, and so on? So you work with data scientists, you work with these new fields, to understand the art of the possible. What are the possibilities here?

Let's say I've imagined this solution. Is it feasible? And this is where these people have to be part of the conversation. And so they also can suggest possibilities that can inform the way you think about solutions. So this is an important collaboration that wasn't there before.
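To make the earlier "garbage in, garbage out" point concrete: structuring the context you give a model usually beats ad-hoc prompting. Below is a minimal, illustrative sketch of assembling a structured prompt; the section names and helper function are hypothetical, not a standard.

```python
def build_prompt(role, task, context, constraints=None, examples=None):
    """Assemble a structured prompt. The premise: richer, clearer
    context in tends to produce better answers out ("garbage in,
    garbage out")."""
    sections = [
        f"# Role\n{role}",
        f"# Task\n{task}",
        f"# Context\n{context}",
    ]
    if constraints:
        sections.append("# Constraints\n" + "\n".join(f"- {c}" for c in constraints))
    if examples:
        sections.append("# Examples\n" + "\n".join(f"- {e}" for e in examples))
    return "\n\n".join(sections)

# Hypothetical usage: a design-review ask with explicit context.
prompt = build_prompt(
    role="You are a senior product designer reviewing onboarding flows.",
    task="Suggest three improvements to the sign-up screen.",
    context="B2B SaaS tool, mobile-first, users are busy team leads.",
    constraints=["Stick to existing design system components",
                 "No dark patterns"],
)
print(prompt)
```

The same structured text can then be sent to whichever model you use; the point is the habit of separating role, task, context, and constraints rather than any particular API.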

We were stuck in our triad with the product manager and the engineer. And now we are also including a strong, close collaboration with data scientists, machine learning people, and so on. Another thing that you can do is spend some time exploring the emerging patterns. I recommend this site, Shape of AI. But there are other places I might recommend.

So, understanding the new design patterns that we can integrate into our work. It's part of the theoretical knowledge, but you can also observe it in products. When we were user testing at Miro, ChatGPT had been out for about six months, and people already had the mental models from it, so they were expecting the same kind of behavior they were used to from this product that had just appeared. So there are new mental models

emerging in the space of how we interact with computers, and there are also new patterns for interactions, and it's very interesting to learn about them. What will also be helpful is to build a basic technical understanding of models. What is machine learning? What is deep learning?

What are the main models for generating images? How do they work? What is diffusion? And so on. Just a basic technical understanding that helps you imagine solutions in a better, broader way. And then, a very interesting exercise is to learn to evaluate AI interactions. As I was saying, AI interactions are harder to measure and to judge, and you can build your own internal set of AI-adapted heuristics and judge

the speed of an AI task, or the value of the answer or output it gave. Even Jakob Nielsen adapted his list of 10 usability heuristics to respond to the AI-driven changes in product design, and I think that's also somewhere on the internet,

but you can start judging AI interactions on your own, and this will build a mental muscle for understanding what quality means in the age of AI and how it changes. This is what we'll be doing in the full course that I plan on launching this autumn: we will learn about prototyping, about prompt engineering, about how to work

with data science and machine learning teams, about the patterns, the technical foundations, and how to evaluate interactions. But of course, we don't have time for all that now, because we're already short on time. So, hiring signals. This is a very interesting part of our talk today. Many companies are

even experimenting with titles. I've been looking closely at the job descriptions and roles being posted for the past year and a half or more, and you'll see there's a variety of titles emerging in the market. It's also really interesting to notice that most of them require general design knowledge. Not this one in particular.

But many of them do. Let's say the lead product designer on the AI Experience team at Canva: they want end-to-end product design experience, a collaborative approach, a problem-first mindset, experience driving outcomes, right? They're not mentioning anything particular about AI, which means they're open to hiring people who haven't worked in the AI space. They're looking for talented people, because talented, senior people with deep design expertise can be fluid enough to adapt and adjust to this new set of problems, as I was mentioning earlier. Similarly, Atlassian seems to want demonstrated experience designing for AI or related features. But

apart from that, it's just the basic description of what a designer does. It's very interesting that OpenAI and Anthropic don't call it AI, because it's implicit. So OpenAI is just hiring product designers.

And Anthropic as well: Product Designer, Developer Experience. They don't have to say "AI product designer" or "product designer for AI," because they're implicitly AI-first companies. But it's very interesting to notice the kinds of titles and how the job market is evolving. Many companies are opening AI-specific teams; Uber, for example, has Consumer Search and AI. It's a team.

PayPal has AI Personalization, and they're hiring within those teams, right? Google DeepMind hires an AI Product Designer, so I'm not lying when I say the title is real. This was another very interesting one: AI-Enabled Product Designer. But most of the time it's just

a designer working in AI, a staff designer working in AI, and so on. So most of these posts require general design experience; the focus is still on conventional design expertise. If AI is mentioned, it might show up as: proven experience designing AI-powered features; a proven understanding of how AI changes user behavior and interface expectations; familiarity with prompt design, how models work, and trust dynamics; using AI in your workflows, for example with Figma plugins, generative tools, automation scripts, and so on; and awareness of AI's limitations and of when not to use it. Many of these posts also mention the capacity to tolerate ambiguity, which I would say is the number one condition to thrive in this space.
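Picking up the earlier suggestion to build your own internal set of AI-adapted heuristics: here is one hedged sketch of what such a scorecard could look like. The five heuristics and the scoring scheme are illustrative examples I made up, not an established checklist.

```python
# Illustrative, home-grown heuristics for judging an AI interaction.
# These five are examples only, not an established list.
AI_HEURISTICS = {
    "task_speed": "Was the AI fast enough for the task to feel useful?",
    "output_value": "Was the answer or output actually valuable?",
    "transparency": "Is it clear why the AI produced this result?",
    "user_control": "Can the user steer, correct, or undo the AI?",
    "graceful_failure": "When wrong, does it fail safely and visibly?",
}

def score_interaction(ratings):
    """ratings maps each heuristic name to a 1 (poor) to 5 (excellent)
    score; returns the average plus the two weakest areas to fix first."""
    missing = set(AI_HEURISTICS) - set(ratings)
    if missing:
        raise ValueError(f"unrated heuristics: {sorted(missing)}")
    average = sum(ratings.values()) / len(ratings)
    weakest = sorted(ratings, key=ratings.get)[:2]
    return average, weakest

# Example: judging one AI-assisted design-generation session.
average, fix_first = score_interaction({
    "task_speed": 4,
    "output_value": 3,
    "transparency": 2,
    "user_control": 5,
    "graceful_failure": 2,
})
print(average, fix_first)
```

The exact criteria matter less than the habit: scoring the same dimensions across sessions is what builds the mental muscle for what quality means in AI products.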

You really have to deal with more complexity, more ambiguity, more unknowns. AI in 2025 is very messy. Everybody's throwing around buzzwords, everybody's talking about all the things you should learn to prepare for the revolution: you should vibe code, you should learn about computer use, AGI, agentic AI, multimodal models, and so on. It's a lot of noise, but it's still very hard to make predictions. I

appreciated a post by one person saying that the only prediction about AI and design that we can trust is that AI will make good designers better and bad designers faster. And that means, and you've heard it a million times, it's just another tool in our toolkit. What matters is the way we manage to think critically and use it properly to design better, not necessarily faster, not minimizing effort, but really designing stronger, improved experiences. So "AI designer" may fade as a title, but the competence will remain. In two or three years, most design roles will include some form of AI literacy, so I don't think you should do anything wild to prepare, but you do have to build these foundational pieces, right? Experimenting with AI, learning about the technical possibilities, much as it was valuable to learn about front-end a while back.

We will all be working in AI-shaped environments, and the most important question is: how intentional do we want to be about it? Do we want to just stay here, do nothing, and wait for the revolution to shape us? Or do we want to take some sort of agency and act on the parts that really have value? Not everything, because we can't do everything; we can't be up to date with everything. A lot of us are already dealing with AI fatigue, some sort of burnout. There's too much noise out there; it's overwhelming for me as well. But I do want to advocate for this intentional approach: learning the things that matter and then feeling confident in your capacity to solve problems in this new paradigm. Because this is what it's about. It's not about landing a job in AI; it's about your internal metric, your internal feeling that I am capable and prepared to operate in this ambiguous space, and that I know I can experiment with different tools from my toolbox to solve a problem in a new way. And in my podcast, Honest UX Talks, I plan on inviting guests under a new series,

about designing with AI. I want to invite guests from Figma, Atlassian, Canva, Anthropic, and Google, and have conversations, also in preparation for the course I will be launching sometime this autumn

with Maven, but I want to invite everyone to tune in occasionally to see if there is an episode with one of the AI designers on these teams, because I think that they're going to give you a fuller perspective of what they do day to day. And that's it. Make sure to connect with me, and I hope that...

a lot of people are still here; I really appreciate it. I didn't see anyone for a long time. Okay, so there's a very long list of things in the chat, and I don't know how to choose one question. Maybe somebody is brave enough to ask me directly, because otherwise going through the chat would take a lot of time. So if anybody has a pressing question, unmute yourself and go ahead and ask it. Hi, sorry, thanks for your presentation.

And I wanted to ask: OpenAI just announced that they're building a gadget powered by AI, and the interaction with it is entirely voice-based; it has no interface, actually. How do you think design can survive in a world like that? This is a very interesting question for multiple reasons. One personal reason is that I wanted to build a similar device.

I started my own startup one year ago, and it was this device meant to serve teams at product companies in tech. My idea was a physical device that you put on a desk, and it could assist the team with information from across the company's ecosystem, from Jira, from Notion, and so on. I thought it was the best idea in the world. And then, talking to investors, they told me: nobody wants to build physical devices anymore, forget about it, we're not going to fund this kind of physical AI object anymore,

because we've seen Rabbit fail. Well, fail; it's not doing very well, right? I have one at home somewhere around here, and I never use it, because I couldn't find a use case for it, and I really tried. We've obviously seen the Humane Pin fail horribly and eventually get acquired by HP to do some internal AI work for them.

There was also this Friend device; I'm not sure how that's doing. So it seems that many companies are betting on physicality, but that's not going very well yet.

So I would say: let's wait and see how OpenAI handles this move into the physical realm. I'm not sure people are so open to having a new device; I don't see us giving up phones very soon. It means you'll have another device to carry along with you, and so on, so it's a big ask of everyone.

And then, what is the value? If you can do something with that device that you can't do with your phone, sure. But what real use cases can you accommodate when you're building such a physical device? I'm sure it's going to look great, because Jony Ive is there. Although Jony Ive is also probably overrated, like any star designer at that level; it really depends on what he can do with the team, with Sam Altman, within that entire ecosystem. So I'm not very worried about that. But yes, voice interfaces have existed for a while. We've seen Alexa; we've seen many companies experiment with these kinds of devices that we talk to. And I think that's going to be an important part of the,

let's say, AI landscape in the future. But it's not going to replace computers or apps anytime soon. Although, yes, that's another conversation: what will happen to apps 20 years from now? Will we have a single interface, one interface to rule them all, where we just prompt a need and it builds an ephemeral, just-in-time solution to our specific need that dissolves once we're done?

There are a lot of scenarios around how operating systems will change and evolve. But for now, yeah, I'm not very worried about the GPT announcement. Thanks a lot. Yeah, you're welcome.

Hey, do you have time for another question? Yeah, sure, let's do it. I hope you don't mind me asking; I was just curious, what led you to leave your role at Miro? Yeah, well, it's because I wanted to go do my own thing for a long time. I had already started building my own product, and I wanted to dedicate more time to that. I was also sort of burned out. So,

yeah, it's just burnout in tech; that's very common. I should do an episode where I talk more about it; maybe it would help more people. But essentially, I was doing too many things.

And I realized that I couldn't sustain that level of effort anymore, and that I had never truly given myself a chance to pursue this entrepreneurial path, where I'm doing consulting and I'm still building products, but for my own company. I had never experimented with that. And I felt like, you know what, I've done enough; this is the moment when I need to just rest and then see if I can do something for myself. And it's going great so far.

But I had to deal with a lot of anxiety, the fear of unpredictability, the lack of stability, and so on. It was a journey, a roller coaster. But I don't want to dwell on that too much. Right, yeah. Best of luck with everything ahead; I'm sure you'll do amazing things. Thank you so much. Bye. Thanks.

Okay, so I can take one more question if you're willing to stick around. Somebody said it's the pressure of performance reviews. Yeah, my performance reviews went great in the past couple of years; it was the pressure of just doing too many things. It's really hard when you really want to be an active contributor, a content creator, go to talks, do this, do that. It's just unsustainable. Okay, one more question. "Corporate mindfuck," yeah, maybe more of that. Hey, Ioana, I was just wondering, how do you think about accessibility in this new world of AI?

Obviously, we spend a lot of time in our roles testing with users and understanding how things work with assistive technologies. How do we think about that piece? Yeah, it's a great question. I think it's still an afterthought. Even though we've learned over so many years that the

systems we build are not accessible enough, we still only think about it at the end: oh, is this accessible? When it should be at the forefront. Part of that human-centered design philosophy should be embedding accessibility thinking as early as possible. However, I am very excited about the potential of AI to help accessibility in products.

We have seen applications where AI can really help people, even with things such as real-time translation, right? You're helping someone who can't understand a system figure out what it's saying to them, and so on. There are also more applications in this space, like screen readers for people who have vision problems, or tools that describe the environment for people with visual impairments. So AI can be used as an accessibility tool,

and that's a very interesting path that, unfortunately, companies are not investing heavily in, because it isn't seen as commercially valuable. But I do feel there are a lot of possibilities, and as an industry we should definitely look more at innovation:

not just at how we're going to make these systems accessible, but at what ways these technologies can help us make everything more accessible. I think there's a very interesting conversation to be had there, so thanks for the question. Okay, I hope this was useful. And "the seven habits of highly depressive designers," I will reach out.

Okay, so thank you all for joining; I hope this was useful. I know we didn't have a lot of time, we're close to one hour, and I feel like I had to cut through the things I wanted to share, like talking about specific patterns and so on. I really appreciate everyone who stayed until the end, and I appreciate your questions and the conversation that went on in the chat; I will read it after the session.

And I'm very open to your LinkedIn messages and follow-up questions. If you feel like there's something on your mind, send me that question. I really want to make sure that the course I launch, and of course everything I do on UX Goodies or anywhere else, is valuable and relevant to what's going on in the market and in our minds as designers. Yes, as for whether I'm going to share the presentation, I have to check with Maven what the path is for that.

But I think it was recorded, as far as I could tell, and I think I might be able to share it. Yeah. Thanks, everyone. I appreciate your thank-you messages in the chat, and have a great rest of your day. I'll see you around. Thank you so much. Thank you.