
AI, Marketing, and Human Decision Making // Fausto Albers // #313

2025/5/14

MLOps.community

People: Demetrios, Fausto
Topics
Demetrios: I've observed marketing departments using AI to seamlessly blend product images into Facebook ads. The results are impressive, but the process is relatively manual. I envision building an automated pipeline that continuously monitors the best-performing ads, swaps in your product automatically, and tests across multiple platforms, making ad optimization far more efficient. Fausto: I think the key to generative AI is combining it with existing technologies, such as A/B testing at scale. We used to have to create content by hand and then test it; now AI lets us target individual user groups precisely. Generative AI lets us create the worlds we envision, turning ideas into reality.


Transcript

So my name is Fausto. I'm an absolute enthusiast in life, innovation and technology. I like to connect seemingly disparate ideas. And most of the time I'm dead wrong, but sometimes I'm not, and at least we can have an interesting conversation about it. And that is the very human quality that you'll see in this podcast that really counts.

We are back for another MLOps Community Podcast. I'm your host, Demetrios. And today, we're doing a session that I like to call Fridays with Fausto. He has...

become one of my good friends and every time I talk to him I thoroughly enjoy it. I love talking to him because he is very transparent about how he's building, how he is trying to use all the new stuff that's coming out in the world. We get into some of the topics like the new image generation models and of course everybody's favorite MCP. Let's jump into the conversation now and as always...

Give us a review, leave some stars, do what you can to help the algorithm know this is awesome. Wild. The ImageGen stuff is absolutely wild. And we wanted to talk about this before OpenAI even dropped theirs. And then I found Reve AI, which, if you put it all together, kind of looks like it says Reveal,

which I have no idea how that's working. So the first thing on my mind with the image use case is that I've been seeing a lot of folks in the marketing department take Facebook ads, and their prompt is basically: use this photo, but put my product, which is another photo, in the ad, and add this text. And it will do it so well.

And that is highly valuable in itself, but it's quite manual if you think about it. So I'm almost extrapolating it a few steps forward and thinking, you know what? What about when you set up a pipeline that will just continuously look for the best performing Facebook ads or the best performing ads on different platforms? Exactly. This was just, I was thinking about that, man.

And then you're taking those, you're just swapping out your product, and then you are throwing them in and testing them on different platforms and seeing how well they work for your specific product. Yeah, it's like real-time generation. I mean, with anything that we see in GenAI, I think an interesting thing is...

Like, not just the thing on its own, but how to combine it with existing technologies. And in this case, A/B testing at scale, right? Yeah. And before, we kind of had to make all the content ourselves and then test it. And that is still sort of aiming at user groups. And this makes it possible to aim it at individual users.
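The "A/B testing at scale" idea above is, at its core, a multi-armed bandit over generated ad variants: keep showing the creatives that perform, keep exploring the rest. A minimal epsilon-greedy sketch, where the variant names and click-through rates are entirely invented:

```python
import random

def epsilon_greedy(ctr_estimates, epsilon=0.1):
    """Pick a creative variant: explore a random one with probability epsilon,
    otherwise exploit the current best click-through-rate estimate."""
    if random.random() < epsilon:
        return random.choice(list(ctr_estimates))
    return max(ctr_estimates, key=ctr_estimates.get)

def update(stats, variant, clicked):
    """Incrementally update the (shows, clicks) counts for a variant."""
    shows, clicks = stats.get(variant, (0, 0))
    stats[variant] = (shows + 1, clicks + int(clicked))

# Hypothetical generated variants with their (unknown) true CTRs.
true_ctr = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.03}
stats = {v: (1, 0) for v in true_ctr}  # seeded to avoid division by zero

random.seed(0)
for _ in range(5000):
    estimates = {v: c / s for v, (s, c) in stats.items()}
    v = epsilon_greedy(estimates)
    update(stats, v, random.random() < true_ctr[v])

best = max(stats, key=lambda v: stats[v][1] / stats[v][0])
print(best)
```

In a real pipeline the "click" signal would come from the ad platform's reporting API, and new generated variants would be added to the pool as old ones are retired.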

But one step back, and maybe from a bit of a meta perspective: what generative AI lets us do is create the worlds that we are envisioning. This was literally the first thing that I thought when I was introduced to ChatGPT back in 2022.

I was taking a walk with my girlfriend in the forest and I couldn't stop talking about this idea that we're entering a world where anything you can envision can be created in the digital world and therefore has an impact on the real world. And that is still hard, you know, even when you know what you're doing, because you have to translate your creativity into something useful. I mean, your concept is as strong as your ability to explain it, right? To explain to someone else or to an AI.

Now, we're going to touch, I'm sure, on a lot of interesting topics, but one of those things in the AI ecosystem that can really help is tools like Superwhisper and Wispr Flow and all these kinds of things that are essentially helping you to give words to your concept. So that's one. I mean, you could do that iteratively with any ChatGPT kind of tool as well.

But yeah, I think that's very interesting, because sometimes you sort of have this vision, almost this urge, like, oh, this is the app or the thing that I want to create. But, you know, you don't exactly know how to put it into words. And it gets more and more important to do that,

whether it is to prompt an image generation model, or in a more complex workflow where this gets integrated, right? Because, you know, making images is nice. That's one thing. But to be able to integrate it into a workflow, say you're building an app with a sick front-end in Cursor, then first of all, you should have your idea captured

in an intricate project description, which is where AI can help you. But then it's extremely important, because AIs are absolutely fallible, right, to feed the right context information: what to do, what not to do, which libraries to use at the right time. So at inference time,

coding IDEs like Windsurf and many others let you granularly adjust this, and then image generation becomes one part of that workflow. I mean, I didn't check, but the last MCP I was using for image gen was Stability AI, but most likely there's already an MCP out there, right? For the new OpenAI GPT-4o native image generation.

So, yeah, to give an example of how I was using it: I was building this website and I wanted to have a chat box between two elements. And instead of going to Excalidraw, drawing it, pointing at it and then feeding it back to the AI,

I made a screenshot of the current state. I fed it to GPT. I said, I want a chat box in between. Design it nicely. And it came back, and all the text, everything in that whole wireframe was exactly correct, with the chat box added. Then I fed it back to Cursor. Then I called my 21st.dev Magic front-end MCP tool.

It draws this element, chooses by itself, injects the code into the workflow, and voilà. Well, let me tell you how I'm thinking about this, because you're doing it. And again, it's incredible how you're seeing lift in your workflow, in a way. But I think about it as a platform engineer or an ML engineer that is trying to create business value for their company. And

They're looking for use cases that can add significant value for the company, right? And so I can imagine if you're a good ML engineer or engineer, even product, it's probably more of the product folks that are going out there and they're talking with the company and saying, hey, how can we plug in AI to your workflow? What are the things that you do on a daily basis that are time consuming?

And somebody in the marketing department says, well, I'm constantly doing research on ads. And then I'm asking our designer to create different variations of ads. And then I take those and I add them to Facebook, or I add them to XYZ. And with Facebook, it's incredible, because the creative, the actual image, is the targeting,

in a way, these days. It used to be that you would have to try and figure out the exact persona, down to their birthday, their neighborhood, that type of thing. But now it's not so much that, because the ads algorithm is so good that it can look at the creative and know, all right,

Let's show this to these people, and then let's see what kind of signals we get. And then we'll show it to similar people. And so the marketers say the creative is the targeting.

Because of that, like we were saying, you can just A/B test so much more and you can create so many more variations. Whereas before you were bottlenecked, saying, I can only create 30 pieces of creative per day, and we're paying quite a lot for that whole workflow in time. Now it's like I can just have the human looking at

All of these different creatives that are being churned out continuously and making sure that the text and the words are correct or there's nothing funky in it. And then you can imagine the workflow is just clicking a box and pushing it right to Facebook. Yeah. So there is still the human in the loop, right? In this paradigm. And I think...

What is interesting to me is that with every new innovation, there is, as now, like the talk about how is this going to be implemented? Is it going to be a human in the loop? Is the quality going to be good enough? And how is it going to affect industry XYZ? Now, those are valid discussions to have, but

I think that it's also interesting to look at what it's not going to change about the current state of the world. Like, for example, take marketing content. This post went viral, I saw it on LinkedIn, where this dude created sort of Don Draper-style advertisements. And he said, like, marketing ads, it's so over, right?

And then I thought, yeah, it may be in the short run, but in a sense, marketing is about influencing humans on an emotional level in order to, you know, manipulate decision making, right? Yeah.

Another interesting development in AI: Cass Sunstein, one of the authors of the influential book Nudge, published an interesting paper last year called Choice Engines and Paternalistic AI. It was picked up by Nature. Google it. It's interesting. Yeah.

And he argued that AI agents are efficient decision makers, right? And we're going into a world where AIs are making decisions on behalf of their users. Now, I think we can see some early-stage examples of this already happening. But that essentially means that marketing traditionally tries to influence the end user, the buyer. But what if

the decision maker, maybe not the final decision maker, but in large part, is an AI or a set of AI agents? Then how

is traditional marketing still useful, if people make their decisions based on what their AI agent is suggesting, right? Because we know that we're manipulated. We know that we buy things that we should not.

I think when marketing becomes so much more effective, you're also going to see a backlash, especially within the younger crowd of Gen Z that are saying like, all right, I'm going to sort of like offload my decision making because I'm making too many decisions. I don't want to be manipulated, especially by, you know, more and more capable AI.

And people understand that stuff as well, right? So then I think that the whole sort of Don Draper-style advertisement may be dead, yes, but in a different way than they are suspecting now: not because the images are getting generated, but because...

it's going to be a whole other way. Like, they probably need to find a way to influence the descriptions and all the information that a buyer's AI would spider, to sort of influence that, if that makes sense. It's almost like the tactics aren't

going out of style anytime soon. They're still going to be useful, but the way that we implement those tactics is going to be different. Those are fascinating things to touch on. But I also wanted to steer the conversation in the direction of MCP, just because I've heard some folks in the community talk about how useful it is, but also how it's like, huh, I don't know if I really get it. And then most recently I have been reading a thread in the community from Meb, who started this, and shout out to Meb, who's like,

talking about using Anthropic API and just having the worst time ever. So I know they're having some uptime issues over there at Anthropic. Godspeed to them. Hopefully they get it sorted out. But it's like you want to try and build a service. And there's this cool thing that now has a lot of attention like MCP.

And next thing you know, you're trying to integrate it, and there are these bottlenecks in your stack where you're recognizing, like, I can't offer this as a service for my company if we have specific SLAs. And that's not a specific MCP thing. It's more just a general system thing, where MCP might not be the

point of failure, but then you're recognizing, like, oh man, I'm relying on Claude 3.7 and it's just not working.

Now, of course, Anthropic came up with MCP, the Model Context Protocol, as an open thing to use, right? I've even heard that OpenAI is looking to adopt it as well. So those are two separate things, right? You're talking about the uptime of the model. I think that is something, I mean...

As developers and people that use AI for coding assistance, we've been relying on Sonnet since that 3.5 upgrade release, I think in autumn 2024. It's just by some margin the best. So 3.7 was much anticipated, 3.7 Thinking, 3.7 Max now in Cursor as well. So I feel it might just be sort of overload,

and honestly, since I upgraded Cursor to 0.48.1, I think it was, the last update, it's still a little buggy. But they introduced an enhanced and improved model routing system, which is in itself something to be seen in a bigger scheme as well. Because, again, here, like...

Making choices is a hard thing to do. Making informed choices is a harder thing to do. And to be using the most efficient, most capable model for whatever your use case is, that's a hard thing to do. So we probably as a whole tend to use too much force. Yeah.
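The routing problem described here, sending each request to the cheapest model that is still capable enough instead of defaulting to "too much force", can be caricatured in a few lines. Everything below (model names, thresholds, the difficulty heuristic) is invented for illustration; real routers use learned classifiers, not string checks:

```python
# Toy model catalog: (name, capability tier), cheapest first.
MODELS = [
    ("small-fast", 1),    # cheap: short Q&A, simple classification
    ("mid-general", 2),   # mid: ordinary coding and writing tasks
    ("big-thinking", 3),  # expensive: long multi-file reasoning
]

def estimate_difficulty(prompt: str, n_context_files: int = 0) -> int:
    """Crude difficulty score from prompt length and attached context."""
    score = 1
    if len(prompt) > 500 or n_context_files > 3:
        score = 2
    if n_context_files > 20 or "refactor" in prompt.lower():
        score = 3
    return score

def route(prompt: str, n_context_files: int = 0) -> str:
    """Return the cheapest model whose tier meets the estimated difficulty."""
    need = estimate_difficulty(prompt, n_context_files)
    for name, capability in MODELS:
        if capability >= need:
            return name
    return MODELS[-1][0]

print(route("What is 2+2?"))                  # -> small-fast
print(route("Refactor the auth module", 40))  # -> big-thinking
```

The same shape also gives you a fallback path for downtime: if the chosen model's endpoint errors, continue down (or up) the list.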

And I'm sure that Cursor will improve this capability, which also sort of works around the problem of models having downtime. Because there are a lot of models that we can use. I mean, DeepSeek V3, the new one. Gemini 2.5 Pro, I just saw it pop up in my model list in Cursor. And it tops all the benchmarks, right? Yeah. And the context window. I mean, because...

In the beginning, we were just using it for scripts, then a few scripts in your project, but especially when you're integrating backend and frontend, there can be hundreds of different files and all sorts of dependencies that you're relying on. So the context window becomes really important. And I'm curious to see how Gemini is going to change that, if anything. Yeah, because now we know that

you might see it does so well on all these different benchmarks, but that doesn't mean much. It

is a cool thing to use in your marketing material when you come out with it, and it's great for people to share online. But until you actually get your use case in front of it, or even your workflow, and see how it integrates with what you're trying to ask it, you don't have the best idea of what it's actually going to do. Nope.

You know that I used to own a restaurant, right? Yeah. And it was inspired by New Orleans, Louisiana, because I thought that history was amazing: creativity was born out of scarcity, right? The kitchen, the music, it all came from very little.

And scarcity is the mother of innovation, right? And at the moment with AI, we're in an age of abundance. And we're really waiting for another crazy good model to solve our problems. Instead of being really creative and seeing what...

we can do. For example, in what intricate way you could use MCPs and rules, in Cursor, I'm referring to, but in general, it's more about making all these different calls and making sure that the right information is being sent at the right moment. And MCPs are a way that you could orchestrate this. Rules are another way. And, like,

it feels that we're sort of spoiled, and it takes some focus off our ability to creatively come up with problems and really think hard and deeply and longer about a certain problem with a certain model.

And because there's always some... it's like the grass is always greener. Yeah. Well, maybe if this model can't do it, then we'll try another, we'll swap it out. Yeah. And I think, you know, there are a lot of voices that say, like, oh, people are getting lazy and it's going to kill creativity. I don't think so. I mean, different people have always responded differently to innovation, and, you know, the

naturally curious will explore and will do, I think. But is the ever-expanding model capability ever going to come to a halt? Well, there is a way to look at it that, you know, scaling laws, as in pure pre-training, yeah, I mean, that's going to stop at some point, right? There's only so much energy

and money in the world. But maybe, I'm actually curious to see, maybe that's a good thing, because that will force us, both the innovation labs, PPC, OpenAI, et cetera, to do all these different things like...

latent reasoning without tokens, inference-time compute, different algorithms like the ones in the DeepSeek release, and at the application layer as well: how do we make sure that my AI gets, in all these calls, the right information at the right time, et cetera. So while there's no real reason to think that the raw capability of AIs is going to go down quickly,

it is interesting to see if there's maybe a little rest. Maybe this is me just sort of wishing for a little peace of mind, to figure out what I should do to get the most out of what there is now, you know? Yeah. Well, the other side of that is something that I've heard folks talk a lot about, which

in a way could be likened to the term forward compatibility: you have backwards compatibility, but then you also want to be thinking about forward compatibility. And a perfect example of this is when we had Floris on the podcast, I think a month or two ago, and he was talking about how he did so many janky things in the beginning to try and extend the context window. And then three months later, he basically had

more than enough context window in all the models, so he no longer had to do that stuff. And he was also thinking: would my time have been better spent trying to optimize other things? And then recognizing that the context window was going to get better. And because you don't have this crystal ball, you don't really know which part of this whole system is going to get better

in the near future. So I can let time be the one that optimizes that for me, instead of spending the time myself and brute-forcing it. Well, I mean, to answer the question, one should really know what it is that you're optimizing for, right? I think a lot at the moment feels,

you know, like the energy that we spend on learning about new things and learning to work with this new technology. And for myself, it's genuine curiosity and sort of marveling.

But I have to admit there's also this sort of race, right? I have to keep up with the peers in the builders community, and everyone is doing it. But yeah, really, we don't really know yet what it is that we're optimizing for, to become the best at what. I mean, it's only a few years ago we said prompt engineer. And...

Now, prompt engineering is still important, but I think it's important in a way that you are able to use the tools to write the prompts for you. Again, same as with the Cursor rules. I mean, a good Cursor rule setup for a large project would, in my opinion, require hundreds of rules, like different MDC files.

Now, there are ways to have the AI in Cursor write your Cursor rules, update things that are good, and make note of things that didn't work out. So then it is maybe less of a skill of writing those rules yourself, but it is important.
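For reference, a single Cursor rule file might look roughly like this. The frontmatter fields shown (description, globs, alwaysApply) reflect the .mdc rule format as commonly documented, and the rule content is invented; check the current Cursor docs before relying on this shape:

```markdown
---
description: Conventions for API route handlers
globs: ["src/api/**/*.py"]
alwaysApply: false
---

- Use async handlers for all routes.
- Validate request bodies with typed models; never parse raw JSON by hand.
- Log errors with the shared logger, not print().
- When unsure which library version is in use, read the lockfile first.
```

The point made above is that files like this can themselves be generated and maintained by the agent, with the human curating what stays.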

And increasingly a skill of knowing how to work with these little tools to do what's optimal. Yeah. And knowing what good looks like, so that if it does break the rules, or if it does go off the rails, you catch it. But dude, how have you been playing with MCP, and where have you found it intriguing or interesting,

absolutely mind-blowing, if we want to get really out there on it?

Yeah, man. I mean, essentially, it's nothing new, because we could already do it with function calling, right? And when agents were introduced to Cursor, literally the first thing that I did was create a tools.py file, define the tools that I wanted there, and then instruct the agent to use the tools.py file. So that was kind of doing the same, but not as good.
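The hand-rolled tools.py pattern described here might have looked something like this minimal sketch. The decorator registry and the tool names are hypothetical illustrations, not Cursor's actual mechanism; the agent would be told in its prompt to call run_tool by name:

```python
# tools.py — a hand-rolled tool registry, the pre-MCP pattern described above.
TOOLS = {}

def tool(fn):
    """Register a function so the agent can discover and call it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def read_file(path: str) -> str:
    """Return the contents of a text file."""
    with open(path, encoding="utf-8") as f:
        return f.read()

@tool
def word_count(text: str) -> int:
    """Count whitespace-separated words in a string."""
    return len(text.split())

def run_tool(name: str, **kwargs):
    """Dispatch an agent's tool request to the registered function."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name!r}; available: {sorted(TOOLS)}")
    return TOOLS[name](**kwargs)

print(run_tool("word_count", text="model context protocol"))  # -> 3
```

MCP standardizes exactly this dispatch layer, so every client and server agrees on how tools are discovered and invoked.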

So what makes MCPs special is that it's yet another abstraction around complexity.

It is a unified manner to connect server and client. And that makes it... Well, there's a huge open-source community evolving. So if you have a need, you can just look in one of these forums or on GitHub, and you'll quite surely find an MCP. And that has been quite impactful, because...
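Concretely, that "unified manner" is JSON-RPC 2.0: a client asks a server what tools it exposes (tools/list), then invokes one by name (tools/call). A rough sketch of the message shapes; the generate_image tool is made up, and the real schema has more fields, so consult the MCP specification:

```python
import json

# Client -> server: ask what tools the server exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Server -> client: a catalog of tools, each with a JSON Schema for its input.
list_response = {
    "jsonrpc": "2.0", "id": 1,
    "result": {"tools": [{
        "name": "generate_image",  # hypothetical tool
        "description": "Render an image from a text prompt",
        "inputSchema": {
            "type": "object",
            "properties": {"prompt": {"type": "string"}},
            "required": ["prompt"],
        },
    }]},
}

# Client -> server: invoke a tool by name with arguments.
call_request = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "generate_image",
               "arguments": {"prompt": "hero section, gradient colors"}},
}

print(json.dumps(call_request, indent=2))
```

Because every server speaks these same shapes, any MCP-aware client (Cursor, Claude Desktop, and so on) can plug in any community server without custom glue.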

So my game is more backend than frontend. And when designing apps, there are now MCPs, for example the one from 21st.dev, Magic. I highly recommend you check that one out. What is that? 21st.dev is a platform with a huge library of components, templates, basically pieces of front-end code that you can use in your project.

Nice. And the MCP tool, you can call it with /ui, or it recognizes your intent. But mostly with MCPs, if you really want it to use them, call them, you know, deliberately. And it will

communicate with 21st.dev and, based on your description, for example, I want a hero section with nice gradient colors and the text must stream in, it will search that platform, find the code snippets, communicate those back to Cursor, inject them, and just implement them. And it works really well.

Then there's another MCP tool called Browser Tools. It works with a Chrome extension, and you have to run a little server on your side.

And it will give the Cursor AI, the Cursor agent, access to your browser console, to your logs, to everything. It can take screenshots. So if there's an element or something that you want to change, before, you would try to look at this element, or you'd take a screenshot, make a mark on it, and then feed it back to Cursor. And now it can sort of see what you're making in the front end when you're running it on localhost.

And that's been pretty amazing. I mean, I wouldn't say I'm unable to make production-ready front-end apps, but it is another way of letting people see what it is that I'm trying to envision, what I'm trying to communicate. And then, you know, you can go further from that.

And something that I highly recommend: any sort of scraper MCP. There's Hyperbrowser, which I really like. When you're starting your project, you're often working with new libraries. I mean, for example, the new OpenAI Agents SDK, which also already has its own MCP, by the way. Then there is documentation. And with this MCP, you can literally,

just in the Cursor chat, tell it to go fetch the new documentation of the OpenAI Agents SDK, save it to individual markdown files in a dedicated folder, and then you can create your Cursor rule set on that. And so it can constantly refer to the new endpoints, to the new whatever instructions it has. So scraping, front-end tools,
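The fetch-docs-to-markdown workflow can be sketched as below. In practice the scraper MCP does the fetching; here the fetched pages are stubbed out, and the folder and page names are placeholders:

```python
import pathlib
import re

def slugify(title: str) -> str:
    """Turn a page title into a safe markdown filename."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def save_doc(folder: pathlib.Path, title: str, markdown: str) -> pathlib.Path:
    """Write one fetched documentation page to its own .md file."""
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{slugify(title)}.md"
    path.write_text(f"# {title}\n\n{markdown}", encoding="utf-8")
    return path

# Stand-ins for pages a scraper MCP might have fetched for you.
pages = {
    "Agents SDK: Quickstart": "...fetched markdown...",
    "Agents SDK: Tools": "...fetched markdown...",
}
docs = pathlib.Path("docs/agents-sdk")
for title, md in pages.items():
    print(save_doc(docs, title, md))
```

Once the files exist, a rule can point the agent at that folder so it cites current documentation instead of stale training data.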

And yeah, a cool one, just for best practices: the GitHub one. It can search GitHub, make all your pushes. And that's also been... I sometimes forget it, and then I really regret it in the morning, you know.

Yeah, so those, and of course, sequential thinking. But I think everyone that has worked with MCPs knows the sequential thinking ones. I'm curious, because I'm using one, but surely there are different ones around, what people think is the best one, whether it works for them, when they call it. Yeah, exactly. And really, it feels like it is unlocking your workflow to help you,

Like you said, get that idea that you have out from your head onto something tangible. So then you can go from there. It's not necessarily that you're looking for that final product, but you're really thinking about how can I get to something as quickly as possible. Yeah.

And what we're seeing is we have the developer or whatever your position is at one side. Then there's this huge ecosystem of options, possibilities and emerging capabilities of AI tools at the other side. And what you need is...

a router in between. And zoomed out like this, Cursor is an interface to such a router. It's not just an IDE. And I'm saying Cursor because I work with it, but I don't want to say it's the only one. Yeah, it could be Windsurf, could be GitHub Copilot if you really want.

I think they have agent capabilities now. I mean, I would expect so. Yeah, I thought I saw something on that. But yeah, maybe not that one, but for sure Windsurf. Yeah. And we're going to see much more of that, because

just the way that recommender algorithms quote-unquote helped us, giving the illusion of unlimited choice while only feeding us a subset of it. Yeah. This thing is also like

a router between all these different tools. Now, I was actually giving a talk yesterday at a congress on responsible AI, which was very focused on environmental impact and responsible use. And I was sort of that cowboy that they got in to tell about the other side.

And I mean, I fully agree with the fact that we should take good care of our earth and all. But this was about like, how do we...

have people use AI responsibly. For example, if you have a use case that requires text classification, you could use a simple BERT model, and that uses very few resources compared to the heaviest thinking models. And I think I heard Sam Altman saying something similar as well, that he didn't like the current ChatGPT interface, where there are so many different model options, where you have to make these choices again and again. And yeah,

you know, the requests can be routed to the right model. But every additional abstraction that comes on top of this takes away choices that we make, which surely programmers are also saying about coding with AI: you know, I want to make these decisions myself. And this is, I don't know, man, I don't know what the best answer is. But we are putting an awful lot of trust,

more and more, into decision making. And this is what I referred to earlier, what the paper about choice engines really nicely touches upon. Well, this Sunstein is a behavioral psychologist and not necessarily a technical dude, but he was on point there: we're, yeah, offloading our decision making onto AIs. And we're doing that all the time, more and more.

Yeah, which almost atrophies that muscle of critical thinking, because if you're going to trust that it can make the decision for you better than you can, or at least as well as you can, then you no longer have to spend so much time in that decision-making process. And

on one side, that's a great thing, because it frees you up to do other stuff. But on the other side, when you actually have to figure out an important decision, or you have to walk back and look at why a decision was made? Good luck. Yeah. And, you know, I think a lot of people, non-technical people as well, use AI for brainstorming, for generating ideas.

But that also is already a slippery slope, because if you don't use very specific prompting, and here prompting is important, but again, you can automate that as well, then, for example, an image generation model will

create sort of the same style of images. And so your set of choices is already limited. So you're under an illusion of having choice while you're only seeing a very small subset of the possibilities. And same with, I don't know, I have seen a lot of AI-generated content on LinkedIn lately. It's so bad. And LinkedIn almost encourages it with those...

I think they got rid of that feature, but the replies where it would just be already generated, like, great job, or wow, this is awesome. Now, I think, because most people are using ChatGPT, right, what I think I'm seeing is that people have a certain topic that they think is interesting,

you know, viral-worthy, or they just find it generally interesting, or they know something about it. And maybe they let ChatGPT write the entire post of, say, a hundred words, or they write their post and have GPT sort of edit it. Yeah.

But either way, especially GPT-4o, but also the GPT-4.5 research preview model, tends to do this really annoying thing, which is yada, yada, yada, and then recap it. So here, and then: here's the catch. If I see that one more time. Yeah.

And it's like a rhetorical question, you know? Like, so that's not just a new... is this a new era? No, it's a new blah, blah, blah. I don't know. It's sort of like asking itself a question and then answering it.

And I didn't look, you know, I'm not a linguistics expert, but it doesn't take much AI use to recognize this pattern. So whether it be image generation or text generation, you see that if you're not giving examples, which is the stage before fine-tuning, which I don't expect marketeers to do, right? Or people outside of the technical community.

And there's, of course, things like Spiral and that sort, in between, which is essentially a kind of fine-tuning. But it starts with giving it the input-output examples that you want to see. And that requires thinking. That requires critical thinking.
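The "input-output examples" stage described here is few-shot prompting, and it is easy to automate. A small sketch; the instruction and example pairs are invented:

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a prompt that shows the model the exact input->output style
    you want, before asking for the real thing."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts += [f"Input: {inp}", f"Output: {out}", ""]
    parts += [f"Input: {query}", "Output:"]
    return "\n".join(parts)

# Invented examples: teach the post style you want, not the model's default.
examples = [
    ("launch announcement", "We shipped X. Here's what it does and why."),
    ("hiring post", "We're hiring a backend engineer. Real work, no buzzwords."),
]
prompt = few_shot_prompt(
    "Write LinkedIn posts in a plain, direct voice. No rhetorical questions.",
    examples,
    "podcast episode recap",
)
print(prompt)
```

The critical-thinking work is in choosing the examples; the assembly itself is a one-time helper.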

So it's almost like there's a group of people that don't use AI at all, and there's a group of people that use AI but use it for too much. Yeah. And the golden ratio is somewhere in between, where you do have to think, like, this is how it should look. And with text, I think that's sort of easy to do. I think with images and video it's harder. And also not my game, but...

you know, you only have this mental image then, right? Yeah. But also with text, it's glaringly obvious for us. And with images, you used to be able to really see, oh, this is AI-generated. But now, with some of these models, like this Reve AI, I'm not sure how you pronounce it, the realism is such that I can't tell. And I

pride myself on being able to tell when something was AI-generated. But coming back to the idea that you're talking about here, it reminds me of when I was talking with Devanch on a podcast a few days ago, and he mentioned this thing called the doorman paradox. Have you heard of this? No, but I love paradoxes. So bring it on. For some reason, I knew it was going to be important for you.

Basically, a hotel said, oh, we're going to cut costs and we have sliding doors right now. So why do we need somebody to stand at the door and open the door for people? We can get rid of that job.

And they got rid of that job, but there were secondary and tertiary effects that kicked in when they no longer had somebody at the door. And I'm sure you can think of them right away. You're thinking, like, oh, well, people don't get greeted as they walk into this hotel, and that's a worse experience. But then also, potentially, you have more people loitering outside of the entrance, so then walking into the entrance

is not as good of an experience. And these things were kept in check, or they were done, by the doorman, but it wasn't necessarily their job. This is Moravec's paradox. Oh. Now, I mean, not the specific example of the doorman, but Moravec,

an AI scientist, a legendary figure back in the 70s, proposed Moravec's paradox, where he said that in the future, jobs will not be entirely replaced by AI; jobs consist of tasks, and, I mean, that is even an abstraction, but there are different tasks in one job, and some tasks can economically feasibly be

replaced by AI and some cannot. And then what might happen is that, if jobs get sort of split up, we can see the truly human capabilities specialize on the remaining parts of the job. Because, you know, however realistic an AI can be,

and I'm going into dangerous territory here, but one of the most important things that I have seen in the restaurant industry is, you know, when is a customer happy? And mind you, I actually developed an application for the restaurant industry, and I learned it the hard way. And I was wrong, you know, because...

one of the most important things is social recognition. As the species that we are, we are constantly looking for social recognition. And we can only get that from other humans. But I don't know if this stays true, right? So I might be wrong here, but obviously when it comes to, say, the job of a waiter,

and part of the job is to predict what product the customer will be most happy with, experience-wise, then an AI could probably do a better job. But if I am sitting in a restaurant and this really nice sommelier says this and that wine is great, if I really look deep into my thought process and my experience, then

I trust this entity. So let's say this woman has knowledge, or I perceive her to have knowledge that I don't have. So there's a game of trust. And maybe I also want to make her happy, you know, I want to impress her by choosing the right wine. And I don't think I have that with an AI, right? There's a lot of complex decision-making and experience there.

Yeah, exactly. And the whole reason I brought this up was because of this thought that we're outsourcing too many of our decisions sometimes. Or maybe it's not too many, maybe it's just the right amount, but we're definitely outsourcing more and more decisions. Yeah, but we do that because of two things. There's an awful lot of decisions to make.

You know, the downside of ultimate individual freedom is that you have a lot of decisions to make. The whole thing you see happening with conservative right-wing populism is that populism in itself is a simplification of complexity and therefore a reduction of decision-making. And, yeah,

so there is a need for us humans to make fewer decisions. And it's also becoming increasingly complex to determine what is increasing my social status. It's just very hard to grasp. And I think Yuval Harari writes beautifully about this

effect that AI might have, and, coming back to image generation mimicking the real world, take it all the way to virtual reality. But essentially what he says is that we're on the brink of an age where people are able to create the worlds that they want to have. And

therefore, one point that is always brought up when we're talking about image generation or video generation or whatever is: how realistic is it? Well, I beg to differ. It doesn't really matter, even when there are watermarks, even when it's clearly AI generated. If men define situations as real, they are real in their consequences. Thomas and Thomas, sociology. And

if people want to believe stuff, they are believing it. And, you know, all these sort of cognitive biases saying, yeah, but it could be real. Yeah, yeah. I mean, this picture might not be real, but it could have easily been real, if it fits their worldview. So, point being here, of course the creator side is faster than the response side.

For example, the European Union always has to respond with policy to new innovation, and they're notoriously lagging behind. And that will always be the way. But it might not even be effective at all, because I don't think...

Yeah, I'm getting kind of lost in my train of thought here, but the point is that when something is in abundance, it sort of loses value. So if AI-generated content, in whatever form, is abundant, it loses value. And that is shining some light on this sort of dystopian, negative view: maybe in the new world we are

perceiving value in the source, the authority that is creating the content. And I don't know if that's going to happen, but it seems like a positive thought that if generation is essentially free, then something else is going to inject value into that process.

Yeah, because that becomes the commodity. And that is the piece that is the least human part about it. So what is going to be the valuable part? And where is that part, going back to the doorman paradox that we were talking about? Or as you called it, what was it? Moravec. Moravec's paradox.

You're looking at the uniquely human pieces of this. How are we now extracting that? And it potentially is just that when you walk into a hotel and someone says hi to you, that is the most valuable part of it. So maybe, like, the human connection. So maybe the world's oldest profession is going to be the world's last. Yeah.

But it's funny to think about as you're pondering it. Something so quote-unquote simple as engaging another human is actually the most

valuable part of the value chain. It's the biggest thing out of all the stuff that person does in their job. Or going to the sommelier: it's not necessarily the wines they recommend or getting to know you, it's that you're talking to another human, face to face. Yeah, it's the human experience and the social recognition.

The worst thing that can happen to a customer in the service industry is to be ignored, relative to the personal attention they perceive other people are getting. If you ignore someone and give a lot of human-quality attention to someone else, they're going to notice, and they're going to hate it. Right.

So, I mean, hopefully this opens things up, because things are going to get automated. And...

Yeah, it's like, you know, what is it Ray Dalio said? Anyone who lives by the crystal ball is bound to eat ground glass. So don't pin me on it. But we're definitely seeing some very interesting shifts, and it's going so fast. And I think in your community, a lot of people are very open to

innovation, to change. But I think you have to realize that there is a huge part of society that responds very differently to innovation. Like, I don't understand it, I will ignore it. And yeah, it's interesting times for sure. ♪