Research is essential because the AI industry often replicates models without questioning their effectiveness. Without research, we can't challenge existing standards or determine if they're suitable for our specific problem spaces. It helps avoid building the wrong models or blindly following trends like ChatGPT.
Ioana faced multiple layers of ambiguity, including understanding the company's processes, team dynamics, and the vast amount of existing research and concepts. She also had to navigate the uncertainty of designing AI features for a collaborative tool like Miro, where the interaction models were still being invented.
Ioana emphasizes avoiding AI for the sake of AI and instead focuses on solving specific user problems. She ensures that AI features are intentional and address real friction points, such as the cold start problem in Miro's canvas, where users feel overwhelmed by the freedom of a blank space.
The cold start problem refers to users feeling overwhelmed by the vast possibilities of a blank canvas in Miro. AI helps by providing an entry point, like the 'Create with AI' feature, which guides users on where to start and reduces the initial intimidation of the canvas.
Ioana explains that while many concepts were explored, only those that were technically feasible and provided a good user experience made it to production. Some forward-thinking concepts remain in the abstract stage due to technical limitations or the need for more research before they can be implemented.
The process involves balancing immediate technical feasibility with forward-thinking innovation. Designers must consider user problems, technical capabilities, and the future potential of AI. Prototyping and building real features are necessary to learn how they perform, as traditional prototyping methods are insufficient for AI.
User research is crucial for understanding how AI features are perceived and used. It helps validate or invalidate assumptions about the value and usability of AI features. However, research can also be ambiguous, with contradictory insights from users, requiring continuous questioning and refinement.
Sidekicks are AI entities that provide expertise in specific roles, like an agile coach or product leader. They help users overcome knowledge gaps in their teams and guide them through workflows. Sidekicks are designed to be easy to understand and use, with a clear value proposition.
The biggest challenge is balancing immediate technical feasibility with long-term innovation. Designers must think about what can be built today while also envisioning future possibilities that may not be technically feasible yet. This requires constant abstraction and tactical thinking.
Ioana advises designers to focus on asking the right questions rather than jumping to solutions. The AI space is highly uncertain, and confidence in a solution often means you're missing something. Continuous research and iteration are essential to building meaningful AI experiences.
I urge everyone in the industry to invest more in research because we might just be building the wrong models and then everybody's just borrowing them, taking them. OK, this is what ChatGPT does. OK, we are going to do it as well, even if it doesn't work. It's like the Amazon conversation from 10 years ago, right? I want to do what Amazon does or whatever Apple does.
So it's the same now with these new types of interactions. Somebody's making a decision and then we're, as an industry, replicating whatever ChatGPT did. And so what if they're wrong? I mean, without research, we can't really poke holes in their standards and the direction that they're going into. We really have to do research individually per problem space.
Hello, everyone, and welcome back to Honest UX Talks. My name is Anfisa, and I'm joined today by Ioana. And we are going to talk about a very interesting topic because it's very practical and very real, something we don't do very often, but I hope we can do it more frequently in the future. And the topic of today's episode is designing for AI or designing for uncertainty. And my personal intention today is to actually interview Ioana and talk a little bit about how she was designing for AI specifically at Miro.
As you know, Ioana has been specializing in designing for AI for the last two years. And recently, I personally was watching the Miro conference, Canvas, in New York. It was public, everybody was able to watch it. And since we all use Miro, it was really exciting to see all the new updates. And obviously, I was really excited because I could also witness the work you were working on, Ioana.
And most of us designers use Miro pretty much on a daily basis. So it touches us all. And I feel like we will all be experiencing what you have been creating. Basically, my goal today is to try to learn a bit more about how you designed for the uncertainty you might have been experiencing.
But before diving into today's episode, I also want to mention that this episode is proudly supported by Wix Studio. Wix Studio is the intuitive way for designers and agencies to build exceptional websites with full-stack solutions, multi-site management, and built-in AI. And of course, AI is something I want to focus on for a second. One of the AI-powered features is called Object Eraser. We all know that sometimes even a minor design change is a huge hassle, especially when you're trying to be efficient.
Let's imagine you want to edit a photo. You now need to open a new software, edit the image, export it, shrink it, upload it, etc. It takes so much time. But imagine you can actually do this within the same website editor. Same capabilities, same experiences, just much more efficiently. With Object Eraser, you can actually now erase certain elements from an image or erase an area of the image to make it transparent within Wix Studio without leaving the site.
And the next feature I'm also pretty excited about is called AI Code Assistant. You can get tailored scripts, troubleshoot, and retrieve product answers faster. For example, you can give the AI a simple request like bring data from this Google sheet to the site, and it will do it for you. Or imagine you can now describe functionality, for example, create a counter from X date to Z date, and get the code needed to implement this feature directly. The sky is the limit. Now you're becoming so much more
powerful and you can focus on creativity rather than figuring things out. I encourage everyone to go ahead and check out Wix Studio. You can find the links as usual in the show notes. And now let's continue to the episode itself. Let's catch up real quick. How are you doing? How are the things on your plate? Yeah, well, right. I'm very, very excited to talk about this topic today. It's probably going to be my favorite episode this year. I mean, what's more magical and
exciting and intense than unpacking the work you've been doing for the past year. It's going to be a very interesting reflection exercise. Although I do these reflection exercises every week, this time it's in public. So it's going to be even more interesting. It's adding a new dimension. I think I just had my first year Miro anniversary. We call it a Miroversary. And so it's been a very intense and productive
year, and I'm happy to see some of the work I've been doing surfaced in the light, and now I'm excited to talk about it. Apart from that, it hasn't been a spectacular week necessarily. I'm picking up more sport in my life. I went to my first CrossFit session, which is very exciting. I'm in pain now, horrible physical pain, but it's worth it. And yeah, just trying to balance everything one week at a time, and that's it. How about you? How was your week?
Oh, this is so lovely. I love to hear those outside-of-work kinds of stories, because I'm not in a stage where I can still be happy about this.
My life is a mess, honestly. I think I was happy in the last few episodes during September. I was like, yeah, I managed so many different things. And now I'm in a mode where I've basically run out of energy. And of course, we all got sick after the trip we had. First, we had two weeks of sickness. Now we have a different sickness. Everybody's sick. I'm the only one who's obviously... As a mother, you have to handle everything.
While the father is trying to recover, you have to take care of the father and the kid. And you're also sick while also doing the work. So everything is on my plate right now. And it's really challenging. And so my brain is a mess. My life is a mess. Every day I'm just crawling from bed to bed, just dragging myself through the day. So I feel like it's the consequence of an extremely busy previous month. And this month I'm just in survival mode.
However, all good. Happy I can make it today to talk to you and learn a bit more about Miro. And also super cool to hear that you are actually right now at a stage of motherhood where you can take classes, celebrate anniversaries like Miroversaries. Enjoy doing new different things.
Something I'm looking forward to in my life as well. Maybe in a year or two, we'll see. But let's talk about designing for AI at Miro. I will have a bunch of questions for you. See how far we can get in the episode today. Obviously, you're saying you're reflecting a lot about it. So I'm hoping that we will get all those beautiful answers you've already thought through.
But we'll see. So my very first question to you, Ioana, would be really looking back. It's a nice, I guess, segue into the topic. You're now celebrating one year at Miro. And I think it's a good moment to actually look back at how this year went.
Let's start from the moment you joined Miro. Obviously, a new company, even though the product is probably familiar to you. But you have to learn the internal kitchen now. And on top of that, you're joining as the very first AI designer. So everything is new. Everything is absolutely unclear, uncertainty all over the place. And you're supposed to start figuring it out.
Can you tell us a little bit more about your journey starting out in that new team as an AI designer? Yeah, let's dive into it. I think it's going to be a pretty long answer. And my mind is already diverting into different streams. One is what I did and the other is what I should have done. Some things were, let's say, good decisions or the right approach, and others could have been done smarter, more intelligently.
And maybe if I had used AI to plan my process, it would have been closer to what I consider today to be the ideal path I should have taken. So I will unpack some of those topics. You're very right. I think you gave me a great intro into the problem space.
There was a lot of ambiguity to unpack on different levels. So as you were saying, just figuring out how the company works: what is the company process? What is the team process? What's the ecosystem? What is the architecture of the team, the dynamics, the connections or relationships?
Just understanding the system I'm operating in was consuming and intense. And then also, the thing with Miro is that because it's, let's say, a place where you go to do thinking and a place where you go to collaborate, there was so much information captured across boards. You can imagine that at Miro, we use Miro a lot. There was just so much content, so much research that had already been done, so many ideas, so many AI concepts done by designers who were trying to give a helping hand to the AI team. And so there was just so much knowledge already existing that I was trying to soak in.
And it was an intense intellectual effort because I felt like some of the answers are already there, but am I getting to them, right? Am I going in the right direction to understand the valuable insights, the knowledge that has already been produced, and the concepts that were explored? And so just...
That feeling that I shouldn't miss out on the experience and talent that was already kind of connected in a way to this experience, right? And so that was another layer, just processing the knowledge, processing the existing information. And then the third layer was obviously, what's a good AI experience in general? So what's a good AI experience at Miro? But what are the interaction models we should go for? Like we're essentially inventing how AI is surfaced on a blank canvas.
We're inventing patterns, right? It felt like pioneering work many times. Miro is now much more than a whiteboard, right? So we can't just call it a whiteboard. You can do so many things with it. It's essentially a place where you go to innovate. So it's more complex. It's becoming more and more connected with your system. It's a complex problem space, right? So
How do you surface AI in meaningful ways when we don't really understand what the relevant ways of using AI are in general? Figuring out the ambiguity of the AI space in parallel with the ambiguity of AI within this very specific collaborative problem space was, in a way, a two-in-one challenge on top of the other challenges.
Bottom line is that I spent my first year, in a way, adjusting or preparing, and in the past couple of months I started producing design and ideas and features and experiments. Very early on there was no time to waste in the AI space. Everything is moving very quickly, and we want to learn and we want to try out things, and we have a lot of ideas, and we're very excited about these technologies and
the world that we can shape by the decisions we're making today. So there's a lot of excitement and we just want to do things and build things, right? But there's also like so many things that you don't know when it comes to AI. And we don't understand, like not us at Miro or my team at Miro, but generally the industry has a lot of open questions. Like the AI space is a long line of open questions.
What's the best way to surface this? Is it the copilot approach? Is it in a modal? How should AI look? Is it invisible? Do we even say that something is AI at this point, or are we already past that? So there's just so much that we don't know, that we haven't figured out as an industry. And I'm talking about the intersection of design and AI in general, across products, not just at Miro. So essentially what I've been doing for the past year is try to answer questions,
try to disambiguate. But I guess a point I'm trying to make is that in the AI space, we have more questions than any other space at this moment. And it's just a long line of answering questions. So this leads me into the second part of my answer, which is what is the approach I took? In the beginning, I think I'm a very keen observer of the model I'm operating in. So I join a team or I join a problem space or a project or whatever. And then I'm not very proactive in doing things
I observe a lot. I think there's a lot of value in acting as a silent observer, just asking questions and trying to figure out how things work, and then starting to slowly get involved and participate. And of course, I didn't have this luxury in this role because we really needed to move fast. So there wasn't that kind of long, and when I say long, I mean a couple of weeks, observation period where I'm just learning, right?
But I tried to do that as much as possible. So before I would jump and say, you know what, we need a chat for AI, or you know what, we need to bring AI to the canvas everywhere. Like you have to have these AI things pop up everywhere. Before I could suggest anything, I really took the time to learn as much as I can about the pain points that people have in Miro. So we could actually bring value with the help of AI. So I know that something...
I'm very passionate about as a mantra is: avoid, by all means, AI for the sake of AI. I think that everybody's driven by this technological enthusiasm. We can do this. We can do this now with technology. Let's just do it. Yes, but does it solve any problem? Like, is it useful, or is it just plunking AI in people's faces so they see sparkles everywhere and they're like,
this doesn't really help me. And it feels like such an experimental feature, and it doesn't even work. And you're just consuming my time. You're asking for high-effort prompting to do something for me that's not even good. And so we don't want that,
right? We want to make sure that whatever we offer as an AI experience meets a level of quality that, of course, AI is an infant technology. Few products using AI are there at this point, right? So everybody's improving, growing, learning, evolving very quickly. But there was this theme that I always had in my mind. Let's not do random things. I mean, random things have the quality of helping you learn. But when you're intentional and
you're always looking at your users' problems. And I know I'm saying something that sounds very basic, but you can really get swept away when AI is here. Like, people interact with AI in the GPT mental model. Let's just do a version of GPT, right? It sounds like it's the thing that people would resonate with most naturally. They would feel like they already have that recognizable feel, right? They understand how that works because they're using GPT.
But is GPT relevant in Miro? Like, is it something that they need in the context of creative work or collaboration in general? Do we want them to collaborate with AI or do we want them to collaborate with other people? So it's just been a long line of big questions, small interaction questions, experiments, but always keeping
everyone and myself in this kind of responsibility seat: is this AI for the sake of AI, or does it really solve a problem? Because if you solve a friction point, if you address the cold start problem, like we're doing with generation, right, with AI generation: we have an entry point, it's called Create with AI, and the bet there, or the problem we're trying to solve, is the problem of the cold start. When you come into a huge real estate where you can essentially do anything,
it's just so much freedom that it becomes overwhelming in a way, right? It becomes limiting. And so this is what a canvas can be sometimes: it's just intimidating. Okay, so where do I start? AI is an instrument that could help solve that problem. Okay, where do I start? Hey, here I am, AI.
Not in a creepy vein, but you get me. Not necessarily personified, but the surfacing of AI as a way to overcome this problem... it's a real problem to solve, right? And then there's an endless conversation, the problem with the AI space in general, right? So you want to enable people to overcome the cold start problem, but at the same time, if you do that by asking them to write a very long prompt,
then you're running into another type of problem, which is the articulation barrier. Not everyone knows how to prompt. What is the right way of prompting in this context? And so navigating a lot, a lot of open questions is what has been the reality of my year, and then trying to build clarity. And sometimes
we succeeded, and sometimes we learned, and we keep doing that. People are responding well to some of the AI features, and others need to evolve. And we're constantly looking at what's the next experience. And so I feel that one of the most important parts of the design process in general is reflecting on how your design decision performed in the wild. So, okay, I made this design decision, and then most teams, or many teams, I don't know why this keeps happening, they jump into the next feature. What's next? What's on our roadmap? What should we do next? And then everybody forgets to actually measure the impact, the value. Was this meaningful? Did
this solve any problem? Is anybody happy when they use this? You really have to do that with AI, especially since it's really hard to even assign metrics to it, right? So if everyone uses AI in your product, is that a good sign or is it a bad sign? What does it really mean? So it's really ambiguous. So then you really have to double down on research and interviews and understanding from real people how this experience feels.
Right. Because in conventional systems, you really control the experience and you know if a user achieved their task because you design it step by step. Right. With AI, you don't know if the user is happy at the end. You don't know if they got what they wanted.
you can't really measure that. It's really hard. I mean, you could have a triangulation of metrics or some sort of indicators. Sure, you can put in mechanics for understanding if that's what they wanted, for understanding if we captured intent correctly and executed on that intent correctly, and then the user got what they wanted. But it's much more ambiguous. So there's a huge need for research and reflecting on everything you've done. Again,
now moving even further up the ladder of abstraction: my goal, and I'm still executing towards it, was to understand what are the important questions that we need to ask at Miro when it comes to AI. What are the right principles that we always need to apply when we're making decisions? So build a framework, if you want, a framework for thinking about AI. And this, again, is work in progress.
The market is evolving. Technology is evolving. We're seeing so many things change from one month to another. This is a growing, a living organism, right? It's not set in stone. It's not like you're done. This is how we think about AI and let's go home. You have to constantly revisit everything. So to sum up, and I don't know, I hope I gave some answers. We're really trying to bring real value to our users. And everything we do is very intentional, which I feel is a trademark of a good job.
in the age of AI, and we really learn a lot. I think across the industry, mass learning is going on. And I think we're even learning from one another, right? AI experiments that other products are doing, maybe Loom, for example, is a product that I'm looking closely at. I don't know. It's not necessarily related to my work at Miro, but I'm learning from what other companies are experimenting with and then sunsetting. And then it's really just a collective industry level learning effort.
and I'm experiencing it as a participant. And I feel like we are lucky to have you as someone who's designing the product for us and protecting our productivity, not just designing a feature for the sake of yet another feature. A lot of companies, in my perception, have been going down that path: yeah, we just need to be the first one designing AI and putting a price tag on it and upselling, because it's a shiny thing everybody wants and we will quickly be able to sell it, right? And I feel like it was great to actually have you in that seat
at Miro, to be able to really ask those challenging questions, as in: do we need yet another ChatGPT within Miro, right? At this stage, we all understand the value of challenging those sorts of default answers. And I also like how passionate you are when you're talking about this. Clearly you have so many thoughts around it. You've been soaked in this industry for a while now.
It's clear you basically can speak about it for ages, I can see. And looking back into the story, and actually there's still a lot to unpack here, I have a quick question. So it seems like you were enjoying your blissful ignorance in the beginning, because you did mention, as you were still exploring and understanding where you are, what the product is, what the principles are, who the people are, how things are built here, you said you were still doing a lot of experiments. And to me that sounded like, okay,
As I'm learning, I have this kind of ticket of: I don't know much, let me just explore and be wild with my ideas, create a lot of concepts. And then, obviously, as I start talking about them, I will see how many of them are feasible, what we can build, what we cannot build, etc.
So it seems like you were enjoying it in the beginning, as you mentioned, you were exploring different concepts. My question to you is actually: how many of those concepts that you were sort of blissfully, ignorantly creating actually made it to production? Or at least planted the seeds of the features that we can now enjoy and see and, you know, use in our daily life. It's a very interesting question. And I think the answer is very complex. The thing is that sometimes you
want to connect the intersection between a real pain point, so a real user friction, and the technical feasibility, and then what the solution actually looks like. So there's this triangulation: what's the user's problem? Okay, I think we can solve it with AI, to what level of confidence or quality? And then what should this look like? Is this the right way in which we should design it, and so on? So you're always playing at the intersection of feasible, where the technology is at today. But the
problem in the AI space. And I think it's very unique is that if you don't break those boundaries and think about what's not feasible, think about the next stage, not next stage, but 10 years from now, where will this technology be? You can't really be innovative. So for me, what was really hard was balancing this kind of abstraction level, right? So what can we build tomorrow and ship
and then feel that it's a good experience for AI because it has the quality, it's surfaced in the right interface, right? And then you can't just stop there. So if you only do that, you're reactive, right? You're responding to what's obvious in a way, right? And if you want to really innovate or set standards or define what the experience of a canvas with AI will be, you have to move at least
once per week, in your own space and focus time and self-ideation sessions, beyond the things that you can properly deliver today, and think about what you will be able to deliver one year from now, two years from now. In the conventional, command-based UI paradigm that we used to be in, and are still in for the most part, you don't have that problem: you can imagine anything, because then you can build it. And so that's fine. With AI, some things maybe we might never see work, really. And
I had to constantly move between tactical thinking, immediate thinking, and then the long-term thinking. Like, what is the real magic that AI, when it gets better, can bring to our users? And so, to answer your question: some of that abstract thinking, big-picture, forward-facing, futuristic concepts, they're not seeing the light of day, not because they're not good or
I mean, there are two reasons. Technology can't support those concepts yet in the proper way. And learning in the age of AI is also much more complicated than before. Before, you would design a solution and then prototype it in Figma, and it would behave very similarly to the real experience. And with that prototype, you went to the users and they clicked through it, and then you would spot usability problems or see how they comprehended it, right? The comprehension level. You would build confidence and ship it.
With AI, you can't prototype in the classic way anymore. You really have to build it to understand how it works. So we might have the best UI, right? A very nice interactive flow. But if the technology isn't delivering proper outputs, then what are you measuring? What are you learning? You're learning whether you have usability problems. Most probably you won't, because AI experiences are not very sophisticated from a usability standpoint. They're mostly input fields and a generate button. So it's, in a way, traditional interface elements, right?
But the key is learning about the potential for growth from a technical standpoint. So you have to learn by building. And that also takes a lot of time. So for those forward-thinking concepts, we can't even learn that much about them. We could expose users to them through some concept testing: here's what we're thinking, what do you feel this is, how does it resonate, right? So we might elicit some emotional reactions or gut-feeling kinds of feedback. But we can't really learn about those concepts. So a lot of that isn't being shipped, for those reasons: we can't learn much about them, and it's not technically feasible yet, not technically there. The more tactical things are being shipped, and we are always trying to adjust them and evolve them. And so I see a lot of my work in the product. So I don't have that feeling of, oh my God, I'm putting in so much work and it's just being wasted and put in a parking lot, and when are we going to ship all that? I don't have that feeling. I have the feeling that we're delivering
a lot of things in the AI space and we're moving at a pretty good velocity. Of course, it can always be better, right? But it's pretty good and we're doing a great job and I think we're learning a lot. But there's also that abstract thinking that hasn't been reflected in the product yet.
In a way, that's the work I'm most proud of. Because when you get to think about that big picture thinking, big concepts, big innovation, disruption, and all those things, those are the interesting, really interesting problems. Like, of course, I can solve for a friction point. And it's massive in a way. Even if it's small, it makes an impact. But when you think about the way we can rethink collaboration with the help
of AI. That's a huge concept, and it's exciting, and I would love to be able to show that, but we're just not there yet as an industry, again, not necessarily as a company, technologically speaking. And yeah, I think the most interesting parts of the thinking I've been doing and designing for the last year haven't seen the light of day, and the things that have seen the light of day are the things that we're confident can be a good experience today, and that's why they're shipped.
because AI is still an infant technology and people are forgiving with AI many times. Like, oh, yeah, this is experimental. It says beta, so it's probably going to fail or break or crash or whatever or say something that's a hallucination. And so we understand that these technologies aren't there yet, but we won't be forgiving forever. So we really need to put
things in there that don't disappoint. Because if I put an AI shortcut in the Miro canvas and the user clicks it and they get something completely different from what they expected, or something at a really low level of quality, they might hit it a second or a third time, but the fourth time they will not push that button anymore. And so you really have to make sure that you're delivering value even with these small tactical decisions, experiments, features, and so on. Yeah. Which reminded me of the story of Figma, which we also discussed a few weeks ago. Yeah.
Like when you're delivering something and then it turns out it just copies something existing. It's right there in everybody's eyes and everybody's making fun of it. Yeah, you're right. Well, I do have a bunch of questions, but I'll try to be more strategic in how I'm asking them. The things that I liked a lot that you started talking about were...
The surfaces of AI. To be honest, you did talk about it a little bit already: should it be yet another ChatGPT or yet another Clippy? We know, we saw that Notion recently introduced their version of Clippy. It seems to be this chatty thing, right? A lot of people are experiencing AI today in the shape of chat. However, what I liked a lot, obviously, Miro does have a component of a chat thing where you do the input and you get the output. Great.
What I really liked about this recent launch at Miro was those sidekicks and the idea of using AI, but in a completely new shape. You know, you already have in this template, okay, there's a chat, everybody's using a chat, I'm going to plug a chat into my product as well. Yet you have seen the new shape of it with Miro recently. And I personally was mind-blown, not only because of the concept and the name, which I loved,
but also because it produced, like you were just saying, quite high-quality outputs. It gave me some ideas and it saved me time. I didn't have to spend a lot of time brainstorming, going for quantity. I could actually focus on selecting from the quality output. So I could prioritize rather than generate. And I really loved that.
And yeah, the shape of it was completely different. For those of you who haven't yet experienced the new AI feature at Miro, Sidekicks is basically asking for feedback from different roles, like a marketer or a leader, and they generate ideas based on your concepts. And my question would be more along the lines of those new shapes. What was your process for generating those new shapes, surfaces, and concepts, using AI beyond the chat thing?
Yeah, great question. Well, I think it comes back to that idea of just solving relevant problems. I think AI can do a lot of things, but it can't really understand the problems that we have to solve. And we're still very well positioned to understand what problems are worth solving or what problems are urgent. And so you start from that.
Sidekicks, I think, is one of the most popular concepts or features from this year at Miro. And I think it has to do with the fact that it's very easy to grasp, it's a friendly concept. The value is implicit, it's self-explanatory. Okay, so now I have this AI entity in the form of an agile coach or a product leader, and these entities
come in when I need that kind of expertise, which I might lack in my team; maybe I don't have an agile coach. And then they bring that perspective into my thinking process, into my workflows. And it's just very easy to grasp what the value proposition is. And then it's very easy to understand how to use them and when to use them, right? Because the thing with AI is it's really hard: even as a user, I don't know what the right moments are when I should go and
call on AI in my work. Like, when do I bring it in? Is this the right moment to bring AI in? And as a, let's say, design or product team behind it, do we want to just push AI when we think it's the right moment? Or do we still want to let the user decide this is the right moment for AI? So it's a very nuanced and complex conversation. And the thing with the sidekicks is that you can call them very easily whenever you see
fit, right? So there's a level of control. And again, this is a very interesting and long conversation. I'm curious to hear what inspired you mostly. Obviously, it's a lot of input and thinking and conversations, and that's already clear from our chat today. But what I'm thinking is,
I'm sure you had to go abstract. You had to think beyond the input that you were given from users. Obviously, you know, it's the classic story, right? If you ask users what they want, they will tell you they want faster horses. So getting to that abstraction level and arriving at new ways of using it and surfacing it, that's what is very interesting to me right now. I have to make a very important pause and disclaimer here. It's not just me thinking about these things, right? I'm the AI designer and I help surface and
visualize these things. I'm, let's say, an expert designer in the field, but many designers are contributing in their own problem spaces. Let's say they work on comments in Miro, right? They're thinking about how they can enhance comments with the help of AI. Or they're working on, I don't know, templates. They understand that problem space much better than I can, because I can't possibly go into the depth of every component that comes together to
build Miro. So every designer is, in a way, employing AI experimentation and learning and thinking in their work. So I don't want to take all the credit. And same for Sidekicks: this was an idea that someone had in the company, and we tried to poke holes in it, improve it, run with it, experiment, learn, support them, and so on. So a lot of people have been involved in building these features. It's not just me, and I don't want to take the credit for all of these features. We are the
official AI team, and I'm the official AI designer, but in many ways everyone is contributing to this experience. Otherwise it wouldn't be possible; the AI team can't really build enough domain knowledge for each of the problem spaces, like searching or templates or different tools like
presentations and whatever. So everybody understands their problem space and what the friction points, pain points, and opportunities are where AI could be a tool to solve those problems. Yeah, so Sidekicks, again, was someone's very bright idea. It was very quickly prototyped, everybody responded very well, we improved it, and we gave it an
identity, if you want. And we kind of thought about what the right surface is in which we want to surface this, right? Is it through comments? Is it through producing things on the board as a collaborator? Like, if we want to bring these entities in with a more personified, human-ish vibe, should they behave like a person collaborating on the board? Or is that too intrusive or overwhelming, or not the kind of ethos we want to infuse in our product?
Like, is AI the same as your teammate? Maybe it's not, and you want to keep that distinction, right? So those are really big questions that we're still learning about and still operating with. And then also, I think a very interesting point about surfaces: when you're working on a product like Miro, the opportunities are infinite.
Literally, Miro is real estate. It's a blank canvas. You can surface things in so many ways. It's really not constraining, right? So there are no limits, and that is in a way a luxury, a huge opportunity, but it's also difficult, right? When you can surface it in ten
possible ways, because this product is flexible and open-ended and allows for any type of interaction, how do you choose one over the other? Especially since, as I mentioned earlier, learning is difficult. You really have to build things to understand how they're used and what the experience is at the intersection of what technology can do and how users are using it. So it's very, very tricky to answer the question: what's the right surface?
And so now we're experimenting with this, let's say, more classic paradigm, which is the panel. And you go in there and the experience is controlled and it's not messy. It feels clear. But there's immense open-endedness, like the potential is going much further beyond that, right? And it's an open question.
Yeah. And I think, again, I keep coming back to this idea. If you're working in the AI space, what you should be doing today is list the questions, not the things you want to do, but the questions you want to answer. Because otherwise, it's just like the uncertainty will be there. If you're very confident about something, you're probably wrong. So start by figuring out what are the
important answers that you need to get in order to evolve the AI experience in your product. And it sounds like it takes a lot of critical thinking to understand where to derive those answers. First of all, understand the question, but also where to find the right answers and how to interpret them in a meaningful way. What I would also be interested in hearing from you is the user research part of it. I understand there's a lot of context that you can just absorb and learn from others. However, user research sounds quite interesting to me when it comes to AI, given that you cannot ask people, hey, what kind of AI do you want, right? Obviously,
you shouldn't ask this question in any industry. But AI is very unspecific today, and everybody uses it, but you just don't know how people are using it. We all have those prompt masters; everybody's trying to figure it out and do it in their own way. And okay, yes, there are certain patterns that people are constantly praising and talking about online. But at the same time, you can't easily do research on AI if you want to design, as you're saying, for the future, for innovation. I would be really
curious to hear about your mindset when it comes to user research for AI. Well, this is where I think we're very lucky at Miro. We have a stellar team of researchers: very talented, very senior, very seasoned professionals in the research space. So we are able to run very, let's say, tactical studies where we just look at a very specific
problem, and we can also run more, let's say, abstract, intellectualized kinds of studies where we look at bigger, complex problems and concepts. So we have that range of knowledge. Of course, this is also something that you have to be very intentional about. What do you really want to know? I mean, what are the right questions that we should prioritize in the next
couple of weeks of learning? You can't learn about everything, we can't do everything, we can't prototype everything. You have to be intentional. You have to understand what's the question you need to answer next, because then there's going to be another question, and another question after that. So in a way there's a higher need to prioritize when the ambiguity is so big, the open-endedness is so big, and the potential can be anything:
what do you want to start with? And then you take it from there, and then you have a North Star, right? You have a rough idea of the ideal experience. But up to that point, it's just, in a way, validating and invalidating, which are terms I don't appreciate, but that idea of: is this working? It's not working.
Why is it not working? Is it because of the way we surface it in the UI, or is it because the value prop is wrong to begin with? So it's constantly understanding, laddering questions on top of one another. Which comes back to my first point about how things should ideally be done: sometimes we're stuck in circles. Let's do this, and then we change our mind, and then we go back to, oh, I think we had the right idea initially. So it's just this messiness of the process, right?
Design processes are non-linear in general, and in the AI space, you can imagine, there's even more back and forth, hesitation, ambiguity, taking risks, taking a leap and then waiting to see what happens, than in other problem spaces. It's very, very fluid. I hope I answered your question. But research plays an important part. I urge everyone in the AI industry to
invest more in research, because we might just be building the wrong models, and then everybody's just borrowing them, taking them. Okay, this is what ChatGPT does; okay, we are going to do it as well, whether it works or not. It's like the Amazon conversation from ten years ago, right? I want to do what Amazon does, or whatever Apple does.
Right? Those are... Let's Uber it, let's Airbnb it. Exactly, right? So it's the same now with these new types of interactions, patterns, models. This new material is being borrowed from whoever had the courage to do something first, right? So somebody is making a decision, and then we follow as an industry. In Miro it's very specific, we have a very unique problem space, right? But I see many companies just replicating whatever ChatGPT did. And so
What if they're wrong? Without research, we can't really poke holes in their standards and the direction they're going in. Maybe they will retract them, right? Notion, for example, changed
the way they think about AI and the way they approach it, and they evolved a lot. We really have to do research individually, per problem space, because the standards are not there. The clarity doesn't exist on the outside; we have to build it inside our teams, companies, and problem spaces. It's a lot about understanding the complexity, which is something we constantly talk about here on the podcast as well.
I have one last question, just to make it very specific, for anyone who's still curious to hear stories from your experience. Maybe you can think of a learning from research that changed your perspective, that opened your mind. Was there an interesting research story where you learned something that was either validated or invalidated? And what are the questions you ask, right, to not end up with, you know, faster horses?
I wouldn't go too specific on that question, because I feel like I might be crossing the boundaries of internal knowledge. But one of the things I can comfortably share is that even in research, you'll have a lot of ambiguity.
Many times we have, let's say, revelations from the most recent research study. Oh my God, that's so clear! And then we make a decision. And then when we go learn about that decision, we sometimes get contradictory insights. And it's because, again, there's so much ambiguity that even on an individual level, a person who comes in to interact with AI will have
conflicting reactions many times, right? They will say, I love prompt suggestions, but I actually don't like prompting. That's an example, right? Sometimes even the research feels ambiguous. But my point, and what I want to encourage everyone to do, is just more of that. Over time the light comes in, the clarity starts building up, and
dominant themes start surfacing. And we've seen that, right? Some things we're now very confident about, one year later. Other things, we're getting contradictory messages about; they still feel ambiguous, but that means they need more questioning. So yeah, it's a
process. It's very exciting. Yeah. Yeah, I agree. It's interesting, because it does take time to arrive at confidence, but without asking questions, you can't get there. You're just there with your own assumptions, and you have no idea whether what you're building actually even makes sense. So we end up with another ChatGPT in every product. All right, let's wrap this conversation. I hope it was useful for anyone who was listening. This conversation was maybe not as tangible, but it was definitely full of little sparks.
Each is just a little seed of thought, but it opens up a whole web of thoughts that we can all go back and think about on our own, as homework. Thank you so much, Ioana, for encouraging us to actually do the homework and to always challenge things. I think you're a great example of how to design for AI. We're lucky to have you at Miro, and I'm looking forward to seeing more product updates in the future that will enhance our lives, I hope.
Yeah, stay tuned, everyone. It's going to be interesting. I think we should be getting ready for very interesting times. Technology is evolving. Our understanding of how to leverage it in a positive, meaningful way is also evolving. So that evolution informs
different layers of the experience, right? The technology is better, our understanding is better, our enthusiasm is still there, and users are learning how to use these new tools, these new instruments. So everything is progressing. If you think about the early days of the internet, it was so silly, right? We didn't know what to do with it, and now we're doing so many things with it. It's the same with AI, and it's the same with AI at Miro. So stay tuned; you're going to see very interesting things coming up. Awesome. Thank you so much. For anybody who listened up until this moment, thank you so much.
Hope you enjoyed it. If you did, please rate us on the podcast platform of your choice. We would also appreciate it if you could help us with the next topics, the next questions we should be answering in these conversations. Other than that, thank you again, definitely check out the other episodes, and we will see you in the next ones. Bye-bye. Thank you so much. Bye.