
#175- Ron Lee: The AI Dumpling Dilemma

2025/7/3

THD美籍华人英语访谈秀

People
Ron Lee
Topics
Ron Lee: I think artificial intelligence brings both opportunities and challenges to the creative industry. On one hand, it can help us generate content faster and work more efficiently; for example, an article that used to take a week can now be finished in a few hours. On the other hand, relying solely on AI can lead to lower content quality, because the quality of AI output depends on the quality of the input: if the input base is low, the output quality will be low too. AI may also cause the middle people to disappear, because experts can use AI to create higher-quality content that ordinary people can hardly compete with. I worry that over-reliance on AI will make us lazy and weaken our creativity. I also think AI has limitations in cultural understanding; for example, it may not be able to distinguish the foods of different cultures, such as Japanese, Korean, and Chinese dumplings. It takes human training to make AI understand cultural differences better. I think we should invest in the bigger questions, such as whether we want to face the truth or just hear the answers we want. Overall, I think AI's impact on the creative industry is complex: we need to take advantage of its strengths while staying mindful of its potential risks and limitations.


Transcript


One, two, three, whenever you're ready. So hey everyone, I'm Ron Lee, a creative technologist with a media industry background, really absorbing AI. Welcome to the show, Ron. On a scale of one to 10, what is your general anxiety level right now? Wow, it just jumps every day.

But right now, there's so much instability that I'm at like a really anxious one. It's like, what's going to happen next week? And am I going to keep my job? And then what's going to happen to the economy of China? And then what's going to happen to the industry? So many things. Dude, we're going to dig in on this episode. I'm telling you that. Wait, wait, you said one? So one is the highest...

Exactly. Yeah, exactly. He's Canadian. I was thinking 10. He's Canadian. Is that how they do it in Canada? I don't know. I'm just making that as an excuse. Usually from the 1 to 10 scale, 10 is like the upper limit. For me, 10 is like I'm comfortable. I'm like, okay. Oh, so 1 is like the high end. Yeah. Okay. It's a Canadian thing. I have no idea if it is. Maybe it is. We'll just say that. Fresh Canadian thing. Yeah. Right. So here's my question. Can you share with us what's a guilty pleasure that you have?

A guilty pleasure. One of them is really gaming.

Oh, you're a gamer. I'm such a gamer. What's your game of choice? I'm a Blizzard guy. Oh, Blizzard guy. So like RPGs? No, Blizzard is StarCraft role-playing. What's it called? Strategy role-playing real-time. RTS. RTS, exactly. There you go. You know it. So we know who the two nerds are now. Dude, we're going to be talking most of the show. You can just sit back. We know who the two nerds are now. Yeah. We got that out of the way. But other than gaming, it's just...

It's strange, because I've been in China for a long time, but I just recently discovered Ele.me. What? Yeah. You just discovered Ele.me? Exactly. What were you using before that? I wasn't using... Because I was anal about not ordering, because of all the plastic and all the garbage that gets created. Mm-hmm.

My mind is blown right now. Is your mind blown right now? It's blown. But that's a great guilty pleasure. Like food delivery. Yeah. That was so, I would never have guessed that. No, but see, that's a healthy mindset though. Yeah, it's amazing. Because for us, we've normalized food delivery so much that we don't even think of it as a guilty pleasure when in fact it should be viewed as a guilty pleasure. Yeah. Yeah, exactly. Amazing. That was great. All right. I'm Justin. I'm Howie. This is The Honest Drink. Please give it up for Ron Lee.

I'm kind of like, I'm just meeting you now. You're fresh. Yeah, I just like to know, like, how did you get into this stuff in the first place? Oh, it was 2022, when GPT-3.5 became more public. And then somebody gave me the link and said, try it out.

And I'm like, oh my God, right away, it's able to make text. It's able to write my emails. It's able to write things for me that I always wanted to do but never did, because it used to take me a whole week to write a piece, and now it takes me three hours. And I'm like, well, right away, it's a game changer. That was the first thing. Was the result just as good in terms of quality, though? Well, I mean, there's always that conversation of how it's not a...

An expert is a multiplier of the user. So if you are an expert at 10, then you go from 10 to 100. But if you're a beginner, a junior, you're at one, then you go from one to 10, because it's 10x, right? So if you start off with a low base, if your content quality input is a low base, then your output is of low quality. When you say expert, are you saying expert at using the technology, or expert in that field or subject? That's a good question.
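Ron's multiplier framing is just arithmetic. A toy sketch in Python, where the 10x factor and the skill numbers are assumptions taken straight from the conversation, not measured values:

```python
def amplified_output(base_skill: float, multiplier: float = 10.0) -> float:
    """Toy model of the 'AI as a multiplier' idea: output quality
    scales with the user's base skill. 10x is Ron's illustrative figure."""
    return base_skill * multiplier

expert = amplified_output(10)    # an expert at 10 goes to 100
beginner = amplified_output(1)   # a beginner at 1 goes to 10

# Both improve, but the absolute gap between them widens from 9 to 90.
gap = expert - beginner
```

This is the arithmetic behind the "lack of middle people" scenario discussed later: a uniform multiplier helps everyone, yet widens the absolute distance between expert and beginner.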

That was the first reaction that everybody had in 2022, right? You felt like you could be a copywriter. You felt like you could write a brief. You felt like you could write a storyline or a book, right? And people were saying that. And if you actually tried to do it at the time, you would realize that the quality of the books you created was not as good as a real writer's, right? That's not to say real writers aren't being challenged, because they are being challenged, right? Copywriters and storyboarders are being challenged, right?

The need for them is reduced, or the other way around. What I was talking about with a colleague the other day is more like: given this, and given that we're able to create at volume, in abundance, the value of creating a book is reduced, because now there are books that you could just create in a matter of an hour, right? So the abundance drives down the value, but then it drives down the craft. So story writing

is not as much of a crafted skill anymore, because now you actually have tools to do it for you. Yeah, that is an interesting tension. Like, I'm still a little bit skeptical about this idea of this tool, allowing everyone to do something, affecting the people who are actually just really, really good at it. You know,

I kind of see it like, it's like you can give everyone a paintbrush and they can paint on a piece of canvas. Right. And just because everyone can do it doesn't make the real artists who are really gifted at it any less good or valuable or in demand. Right. Well, this is what I think Ron was saying: it amplifies a person at

their skill level. So if a writer who is really good at writing were to use an LLM, large language model, which is what we're talking about, ChatGPT or whatever, and used it to help refine thoughts, brainstorm, outline, research, that's going to elevate the pro writer to write even quicker, better,

New ideas that maybe he or she didn't even think of, to incorporate into that writing. So that's the 10x he's talking about. Now, if someone who is a junior, just starting out or not very good at writing, uses it, that person will get 10x'd up to be a better writer than they were before. But that doesn't necessarily mean that whatever the LLM came out with is going to be better than that professional writer. So it's a tool. Right now, it's a tool. At least right now, it's a tool to be...

to be used to elevate one's skill. - So I'm trying to see what you guys think about this, 'cause there are two ways it could materialize in the future. Obviously we're just trying to predict where the industry is going, right?

So in one way, will there be a lack of middle people? Because the experts are going to gain everything, and they're going to get all the value, because their content is like 100x. So it's like, oh my God. And if you have very basic media content, but everybody writes basic media content, then with that overwhelming volume it's just like, well, why would I do that? So the cost of starting would be high; it would take you a long time to start, because it just goes like this. But then the media experts go like that.

So that would be one scenario. - Like the inequality gap would be stretched out even wider. - Correct, right. That's one scenario that could play out. Or the other scenario, which nobody really predicted out of information going online and the cost of distribution going to zero, is that things like journalism go obsolete. These are things where we don't know whether some of the roles will disappear. I really hope that storytelling, or story writing, or book writing will still exist,

but who knows what's going to happen, right? Once people are able to craft great AIs to write great stories, which I don't think is possible, because the LLMs fundamentally have flaws at the moment. Maybe in five years, or maybe today. But the day that we have AIs that can write,

Then the other tangent that would happen is we have an art renaissance, right? Similar to how painting went into photography, and photography realized it much better than painting. But then when photography went digital, it exploded, so that you still have that crafting. So there's still a niche that humans need to drive for, that we're all going to hunt for. Kind of answering your first question about craft: what it's going to be is hard to say, right?

Well, think of even something like YouTube. Go back 15, 20 years: the idea of content creation was not even in our vernacular. You made a film, right? You made a TV show. You made a documentary. That's content. But then all of a sudden...

Content creation became a camera from your cell phone, walking around vlogging, et cetera, creating content reviews, unboxing. The bar of entry just kind of shattered. Anybody can do it, but you still have levels to this. You still have people that are doing amazing content, but also normal Joe Schmoes doing crappy content, and they still have a following. It just democratizes it and makes it very accessible for anybody, but there's still going to be good and bad.

That's not to say that everything's going to be, you know, flatlined. Normalized. Normalized. Right. And go to zero. Right. So the question here: how do you guys feel when you browse Netflix and YouTube these days? What's your opinion on the content that's being created? That's a good question. I just feel like we've been so jaded by it already, you know,

I mean, there's some really good content out there, but you gotta look for it. Do you guys notice, like, there's so many more bad films these days? I'm constantly disappointed. Growing up, I didn't feel... I don't remember ever feeling that way about films. I know. Like, you go to films, like, even if it's not your type of film, it was like, okay, it's not for me, but, you know, it's kind of... You get it. Right. Now, sometimes...

More times than not, I watch a movie and I'm like, how did this thing ever get made? Right. Like who greenlit this thing? Who funded it? Why would they even pour money into this? Like, it just seems like the outcome was like so bad. I know. And you're talking about big budget films, right? Yeah, I know.

And I don't remember ever feeling that way before. I don't know if you guys feel the same way. And then here's a question, right? Is it because the movies are getting more and more perfect, because it's easier and easier to get casting, easier to get better storyboarding and everything, that our expectations just level up with everything? If you guys remember, the first Avengers was amazing, right? And then it built up to Avengers 3 and Avengers 4, which were really amazing, right? And then all the Marvels came out and we were like,

Yeah, like, okay. Well, yeah, I'm going to give a metaphor. Sure. Because I'm also a big NBA fan, a basketball fan. And there's this whole issue and debate about why viewership is going down for the NBA, why more people are not into the NBA as much as they used to be. Are they blaming it on the refs? I don't think it's just the refs. But here's my metaphor. Sure.

So the movie business, everything from pre-production down to the earliest stage, which is script writing, has formulas that are developed now. Yes. And everyone has, what's that famous book? Save the Cat, right? And there are a lot of formula books, The Hero's Journey, etc.,

stories were, if you rewind it 20, 30 years plus, the formulas were not shared. It was, you either kind of get it and you wrote it and you were talented and you wrote a story and you kind of maybe as a screenwriter, you're just like, yeah, I'm a good screenwriter so I kind of get it or

Or maybe I study some historical films and I get a little pattern and I write my film. But it's not like now where anybody can read a book. Anybody can watch a YouTube video from some quote-unquote expert to teach you the formulas of storytelling to capture the audience, the psychology of the viewer. That's going to make films become formulaic. And when things become formulaic, your brain is already, you kind of,

know what's going to happen. You kind of know what's... It's like, yeah, I've seen that already. It's just lame. Now, I'm relating it to basketball because they broke down basketball into formulas for how to score the most in a game: through three-pointers, through reducing the mid-range shot, through...

through training athletes to become optimal athletes. It's basically gaming the game. Like the analytics. Right? The analytics. Moneyball. Yeah, Moneyball, where everything becomes stale because everyone's chasing the same thing. So back in the day, coaches who came up with real innovative strategies would stand out, because, damn, that's crazy.

It was innovative when the pick-and-roll first came out. Whoa, what's that? Right. Yes. But now everyone's doing the pick-and-roll. Everyone's shooting the three ball. And each team had their own style. Yeah. So you got to see these different styles pitted against each other. Like the Bad Boy Pistons. Yeah. Right. Yes. But now every team plays the same way. So there's no variety anymore. Exactly. So that's the issue I'm seeing: because we are living in the information age where

this information is so readily shared between everybody, and all these people are breaking down everything in the world into its minutiae of how to optimize, where you become more

sort of sterile and trite and nothing special. And that's where the frustration comes in, going back to why you feel the way you do about the movies you watch, or you're like, how did this get greenlit? Because you have standards. You grew up in an era where you were able to watch films that you were interested in. When the first Back to the Future came out, you're like, damn, I've never seen anything like that. It's awesome. Whatever. But wouldn't these huge studios that are pouring millions and millions of dollars into funding these movies...

Their executives have standards, right? Like, you're putting out a product. You're funny. It's a business. What standards? They want to make money. Yeah, they make money. But if you lower your standards, you're not going to get the best return, because you're not going to have as good a product. And maybe this is the problem. Maybe they are lowering their standards.

I think it's like this laziness. So on one hand, yes, you can be formulaic. Sure. Right. And everything, there's a formula to everything. Right. And there will always be a formula to everything. But I think formula is one thing, but you can still put a lot of passion and inject your own creativity and your own spin on that formula. Right.

When I see these big blockbuster, big-budget movies where I'm questioning how they even got made, I just think it's pure laziness: the laziness of putting it together, the writing. It just feels like it was slapped together, and they rested on their laurels, that, oh, because we put these big-name actors in it, or because it's under the name of this franchise,

it will just sell, right? Regardless of what we do. So we can just kind of slap something together and to kind of take it back, Ron, to like maybe the whole technology thing, I guess maybe my concern with even people who are at the higher end of their craft using, let's say AI, you know, going forward, will it just make us lazier, right? And will it make even people who are actually talented more just like resting on their laurels and not,

having to try as hard, not having to really figure things out, and just being more dependent on this technology. And it almost dulls their own ability to do their craft. Sure. So you're touching on a few points. I first want to tackle the formula thing,

because we all have to understand that they are pressured by numbers. And when you are pressured by numbers and given we're such an information age, all the business people are making decisions based on data. And that's how the C-level is like, okay, I'd rather go here and I'd rather go there.

So I want to bring up the topic of risk-taking, right? There was more risk-taking in the past, when we were young, because filmmaking did not have all the data. We did not have all the engagement data. We didn't have all the Netflix, whatever. And now that we have all of it, everybody goes, well, I'm going to put X money in it and I'm just going to do that. And you have to appreciate some studios like Disney, and Bob Iger, because he says things like,

When James Cameron brings you something and then he says that it's going to take him three years and it's going to cost $10 million, well, you give him three years and $10 million. You don't wait.

But when this other director, and I think he was talking about Black Panther, was new, and the cast was very risky, and the whole story was very risky, with an all-African cast, then how are you going to do that? Well, you have to reduce the risk, by reducing the cost, reducing the time, reducing the budget. So you do all the safe plays, right? So you could see how this...

There's that question about risk-taking. And I really do hope, if we bring it back to technology, and to what you were saying about normalization, right? That is true. Now, everybody can make a film. We all have access to good education. You don't have to go to the big film studio or the big...

art class, whatever university, to learn the right way. Now you can learn in your own house, right? And anybody in Thailand or India or wherever can actually make a great movie. So Netflix does open things up for good directors from these countries who never had the opportunity to be in Hollywood, right? That's one thing. But the craft that we want to have is: how do we bring out risk takers? How do we help people to take chances?

Because the natural reaction that we all had to LLMs is like, wow.

We're being challenged as humans. This thing is going to be so much faster than me. And now it challenges my creativity and challenges my intelligence, right? And then it comes back to the question you asked just now, about my anxiety level. Yes, I'm at a very high anxiety level, and it bounces every now and then. And I debate with my colleagues all the time, because we always get into different levels of depression when something new comes out, like, oh my God, that just collapsed six months of my work into one prompt.

And then not only one prompt, it's not a pipeline anymore. It's zero prompting. You just do zero-shot: you just go at it with no context, and it just gets it. And you're like, oh my God, what did we just do for the past six months? Right? And then there's the question about risks, right? So how do we want people to take risks? How do we expect that? Because I really liked the anecdote that Jensen Huang gave the other day, about a month ago. He said that if we look back from two years ago till today,

a lot of the things that were invented two years ago, like GPT-3 and 3.5, are pretty much obsolete today, right? All the RAG, all the information, all the memory. There were a lot of limits: the context window used to be a certain number of characters, and now it can be very long. The memory had a certain length, and now it can be very long. And then,

If you look at today, and you fast-forward two years from today, whatever you do today will be obsolete two years from now. - Yes. - And this is the question. Do you want to be part of the journey of going from here to two years from today? Or do you just want to wait and take it easy? And those that are part of the journey will be the risk takers, right? And they will want to gamble: I want to find an edge. I want to find something that's important, right?

And for me, if I want to go there today, I know that LLMs today are very bad at understanding culture, right? Because if you think about it, the way an LLM is designed, it is just one person. So how do you expect it to know Thailand? Actually, here is a very topical question for you today. We all eat dumplings, right? We all know the difference between a Japanese dumpling, a Korean dumpling, and a Chinese dumpling, right? The LLM doesn't know any of that.

And then the dumplings always came out very Korean. They had that very specific shape that I wanted from the gyoza, and they just didn't have it. And I was prompting and prompting, and it couldn't do it. And the Chinese dumplings are a little different, right? The Shanghainese ones have that little heart, U shape, right? How are you going to get that? You see, the system doesn't know these differences. And that's why you have people like us who have to train it.

Now, maybe one day there'll be a big dictionary of dumplings, and maybe one day somebody is going to have the LLM study that. Because I'm not as informed about all this stuff as you guys are: is it because, like you said, it's just one person, each LLM is this one person training it? No, sorry. It's that each LLM takes the knowledge of the world and brings it into one person. You see? And then it's a question about...

The other day, I was trying to prompt: give me an image of a Chinese laoban, a boss. And the Chinese laoban's haircut and style always came out a bit Japanese, a bit Korean. And we all know the difference, right? Why is that, though? Because it normalizes the information to what an Asian laoban is. It doesn't really know the difference between Chinese, Korean, and Japanese. It's the nuance. I think it's little nuances that right now,

I could be wrong, but it comes from the training data, right? They're all trained on data. And apparently they've already hit the max of what data exists in the world today that LLMs can be trained on. So that's why they're now moving into synthetic data created by AI. Now, when an LLM gets trained on data, it cannot

get the nuances that we are able to as humans through experience, through using our five senses, through history, through all this stuff that makes us human. They don't have that yet. And I think the key word is yet because this is one of those things that if you were to let an LLM talk about something that they were trained on that's a little bit less culturally specific, if we're going to stay on culture, that was more about, let's say,

I don't know, how to build a rocket, right? The technology behind or the engineering behind building a rocket, that's pretty black and white. And they are able to understand that and spit that back out to you and explain it and whatnot, right? It's very clear. You don't need culture behind that. You just need to read it. Yeah, it's so easy.

Of course. Exactly. We'll do that next week. And so we can talk about like narrow AI where they're just trained specifically on something to become an expert on one particular craft or sector or field. But once they become AGI, which is general intelligence, artificial general intelligence, where naturally they're supposed to be able to

surpass human-level intelligence, be able to use senses. Yeah, multimodal. Yes. And that's when they should be able to tell the difference between a Japanese and a Chinese dumpling, or, like, the nuance between a Japanese and Chinese, I don't know, outfit or something. Yes.

Unless they're specifically trained on it, and then they'll be very good at differentiating. That's why he said he had to train it to make it understand the differences between the three countries. Because it was not specifically trained on that; it was just generally trained on data that existed out in the world. So is it reasonable to assume that the data it was initially trained on, in the example of the Asian laoban,

was images and descriptions of Korean and Japanese laobans and not Chinese ones? Right. Because the images that you could readily find available online were mostly skewed towards Korean or Japanese. Right. But let's just go back to the dumpling one, right? I mean...

trying to search and scrape the data on Japanese dumplings versus Korean dumplings versus Chinese dumplings, and within Chinese dumplings, northern dumplings versus southern dumplings, or just asking, what's the difference? Or asking about roujiamo, these kinds of very specific dishes, right? Where would you find that? There wouldn't be enough data unless somebody, like you said, creates a systematic dataset, and they create a model specifically for that, right?
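What "create a systematic dataset and a model specifically for that" could look like, in miniature. The feature values below (wrapper thickness, pleat count) are entirely invented for illustration; a real dataset would need far richer features and native annotators:

```python
from math import dist

# Hand-labeled toy dataset: (wrapper thickness in mm, pleat count).
# All numbers are made up purely to illustrate the idea.
examples = {
    "gyoza (Japanese)": [(0.8, 6), (0.9, 7), (0.7, 6)],
    "mandu (Korean)":   [(1.2, 0), (1.1, 1), (1.3, 0)],
    "jiaozi (Chinese)": [(1.5, 10), (1.6, 12), (1.4, 11)],
}

# Average each class into a centroid, then classify by nearest centroid.
centroids = {
    label: tuple(sum(v) / len(pts) for v in zip(*pts))
    for label, pts in examples.items()
}

def classify(features):
    """Return the dumpling style whose centroid is closest."""
    return min(centroids, key=lambda label: dist(features, centroids[label]))
```

Without targeted data like this, a general model has nothing with which to distinguish the three styles, which is exactly the gap being described.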

And I want to come back to your question of "it should," right, once AGI comes, that day. And actually, I want to challenge that. Will it ever? Understand culture and taste? Because think about it. If you put the whole world into one LLM, one AGI, that one LLM is just one person.

So how can it say what is the preference of a Chinese dumpling versus a Japanese dumpling by being one person? I see what you're saying. I'm not following. What do you mean by being one person? Because if you know what's the difference between a Chinese dumpling, you know what the Chinese person will like in the dumpling.

And if you know what defines a Japanese dumpling, you will know what they would like in a Japanese dumpling. So then that brings up the case that you have two AGIs, one Chinese and one Japanese, right? And then obviously we have three. Then if you go to the other end of the spectrum, you have seven billion AGIs, one for every person in this world, right? And then you could ask the seven billion AGIs, which one is a Chinese dumpling? And then you will have a more accurate answer, right?

based on the population of the world. I see what you're saying. Interesting, I never thought about it that way. Because each human is very unique, the taste, just like you may like... Justin's favorite film is Transformers, Michael Bay, the director. And so I can't watch that. Me neither. The story is great, though. Nothing he's saying is true right now, by the way. It's like Transformers, Barbie. He lives and thrives on that. And I couldn't. So if you were to ask...

Me, as an AGI: is my favorite film Transformers, or is Transformers a great film? I may come back with, no, it's not a good film. But then...

Yeah, Justin's AGI. You ask Justin's AGI, and it'll be like, it's the best film ever made. You see? And then where does it stop? Then you only do... I get it. Would you build 7 billion AGIs? But here's my question. It could be that, technically, if you were to ask AGI something that's more about taste or cultural nuance, what we assume AGI could be, it should be able to answer diplomatically.
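Taken literally, the seven-billion-AGIs idea is preference aggregation: ask every personal model and tally the votes. A toy sketch with a ten-person population and invented preferences:

```python
from collections import Counter

def make_personal_agi(preferred_style: str):
    """Build a trivial 'personal AGI' that answers from one person's taste.
    The preferences used below are invented for illustration."""
    def agi(question: str) -> str:
        if question == "Which dumpling do you prefer?":
            return preferred_style
        return "I don't know."
    return agi

# A ten-person world instead of seven billion.
population = (
    [make_personal_agi("jiaozi")] * 5
    + [make_personal_agi("mandu")] * 3
    + [make_personal_agi("gyoza")] * 2
)

votes = Counter(agi("Which dumpling do you prefer?") for agi in population)
consensus, count = votes.most_common(1)[0]
```

A single aggregate model collapses these differences into one "person"; polling many personal models keeps the distribution of tastes visible.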

So, for example, if I were to ask as a Chinese American about the dumpling, is this authentic Chinese?

Chinese dumpling, et cetera, et cetera? Is this the best dumpling of all dumplings? AGI technically should be able to say, well, considering your background as a Chinese American, you would probably like this type of dumpling more than others, based on who you are. So it should be able to cater to the person, to answer as if it had that person's taste and culture and nuance. Right. I think. So I do want to bring up two topics.

And hopefully I'm not going to lose you too much, Justin. No, go ahead. Okay, but... Lose him, it's fine. I'm already almost lost right now, but I'm trying to keep up. Go take a shit or something. That's my life right now. Try to bring you up to speed there. But on the whole AGI thing, let's just go back about 10 years. Coding,

or traditional coding, is very deterministic, right? If you run the code 10 times, it will give you the exact same answer. But AGI, or LLMs, are probabilistic. If you run it 10 times, it will give you 10 different answers. And this is very similar to how humans behave,

because based on the day, or the time, or whatever, if you ask me what's my favorite drink, right? Because we're talking about culture and taste. This is where AGI excels, because you would never be able to write code for how to guess Justin's taste, how to guess Howie's taste, right? Or your favorite movies, you see? You would never be able to get that. And that's why Netflix came out at a good time: the technology came, and they could really capitalize on it and make money out of it.
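The deterministic-versus-probabilistic distinction drawn above fits in a few lines. The drink list and sampling weights are invented stand-ins for what a real LLM does when it samples from its output distribution:

```python
import random

def deterministic(x: int) -> int:
    """Traditional code: the same input always produces the same output."""
    return x * 2

def probabilistic(temperature: float = 1.0) -> str:
    """Stand-in for LLM sampling: the answer is drawn from a distribution.
    Higher temperature flattens the weights, so answers vary more."""
    candidates = ["coffee", "tea", "bubble tea", "beer"]
    weights = [1.0 / (rank + 1) ** (1.0 / max(temperature, 1e-6))
               for rank in range(len(candidates))]
    return random.choices(candidates, weights=weights)[0]

# Ten runs of the deterministic function always agree...
runs = {deterministic(21) for _ in range(10)}
# ...while ten sampled "favorite drink" answers may all differ.
answers = [probabilistic() for _ in range(10)]
```

As temperature approaches zero, the first candidate dominates the weights and the sampler behaves almost deterministically, which is roughly how temperature works in real LLM decoding.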

Okay. So that's the distinction between deterministic and probabilistic thinking. Now, your question about diplomacy, that's another topic, about bias, right? What is the diplomatic way of answering from an Asian point of view versus from an American point of view? And I want to bring up the next topic, which is Perplexity. You guys use that tool, right? I love that tool, because I love the CEO. And he said it very well.

When you ask a question on the platform, do you want it to give you the answer that you want to hear? Or do you want it to give you the answer that is true? Right? Because on the topic of culture and taste, there's no truth.

So what's the answer that it should give? Right? And that's why I feel there are topics and issues so difficult that I'm pretty sure even in two to five years we're not going to solve them. But if we get 10% of the way there, those are the big risk-taking jumps that people should take, because if they get to 20%, they will have an edge on something, and they will understand the world. They will understand the LLM, right?

Is the goal, like for something like AGI, is the goal to try to remove as much bias as possible? Because the way I'm thinking about it, right, because I liked how you kind of mentioned bias in this, in that we all have our own biases that are developed through our upbringing, our exposure, our culture, our interactions, everything, right? Sure. And so the way I see it is our individual biases as humans, right,

A large part of it is probably dictated by our limited exposure to certain things. Yes. Right? Yes. We can only live and see through the eyes of ourselves, and therefore we are informed by that experience, and our biases are shaped by that experience. But with something like AI, especially something like AGI, wouldn't

Wouldn't that not be the case? Because the whole point would be to train that thing on as wide of a data set as possible. Therefore, breaking the boundaries that limit us humans where I cannot live in Howie's shoes and he cannot live in my shoes. I cannot live in someone that's living in Afghanistan. I cannot live through his shoes or her shoes. But with enough information and data being trained, they could approximate that, right? And so wouldn't that...

be removing bias to some extent? Help me understand: what does removing bias mean?

Because I have a hard time with that, because my mind kind of thinks in zeros and ones, 'cause I'm a technical kind of guy. So these are the cultural nuances that I don't get. And two, my mind revolves around what generates value or what generates revenue. And in some cases,

the biases, and we could talk about social media here, are there to generate engagement. So it creates value for the social platform, because you're more engaged. So I don't see it. Or we could dive into: why would we want to remove bias, right? So are you saying that bias drives value?

For sure in social media. It drives arguments. It drives polarity. And some platforms have said it very clearly: we're more polarized today than 10 years ago because of social media, because of information. And this is a second-order effect of social media that nobody ever predicted. We hoped we would be more connected,

but in fact, we're more disconnected, because we're more polarized in the ideas we have in our echo chambers, right? I mean, by all means, the internet and social media have created more good than evil overall, right? It's a net positive, I would say. But these are things the algorithm amplifies, right? And extracts value out of.

And that's why I'm hoping with AI or AGI, I mean, it's kind of hard to talk about bias and talk about that direction. But if we're talking about value creation or where do we want it to go and where I hope some of the new young kids would want to spend their time investing and taking risks on is on these bigger questions around, like I mentioned just now, do you want to be faced with the answer that you would like to have? Or would you want to be faced with the answer that

would make you grow more knowledge or more curious or understand more, right? - I think faced with that question, I think publicly everyone would say, "Oh, well, of course I want the truth," whatever that truth is, right? Meanwhile, people might,

or secretly or whatever, or subconsciously really just want to gravitate more towards what they want the answer to be. Hence you have echo chambers and all these things and self-fulfilling kind of algorithms that, you know, just feed you what you want to hear. I mean, this is well studied, right? Yes. Um,

But I think on its face, most people would say that they want, no, I want it to tell me the actual fact and truth. And I think that's why I was kind of maybe mentioning the bias thing, because I think from a user point of view, I'm guessing like part of the concern, or maybe just part of my personal concern, maybe this could be my personal bias, is that part of the hurdle for me to completely trust using, you know,

AI, especially as it develops, is that how do I know that it's not deeply flawed and biased in a certain way and just feeding me certain information, right? And I feel like even for the dumpling example, like choosing a specific type of dumpling over another type of dumpling to show you that is a form of bias, right? So

If we can remove the bias, then maybe it can be more diplomatic. Like what Howie was saying was like, okay, well, what kind of dumpling would you want me to show you? You know, there's many different types of dumplings. And then you can be like, oh, well, show me a Chinese dumpling or, you know, whatever your prompt might be.

Instead of just automatically just putting something in your face without even having that process, right? Yeah. In terms of biases, they are existing already depending on which LLM you use. Yes. Right? If you use DeepSeek, if you use Gemini, well, Gemini 2.5 is a little bit better now, but previously, back in the day, like last year, it was quite biased. DeepSeek is biased. Yes. Coming from China. Yeah.

OpenAI's own ChatGPT has also been flawed. They all have biases. And I guess the general conversation is, you have a big group of people saying: don't put your agendas in the AI. Yes. Let AI be as unbiased as possible. And...

These companies do tweak it, some of them, to try to make it less biased. Right. At least on a surface level, right? The goal, I think, at least on a surface level, for common people like us to understand, is to be an unbiased, all-knowing

Artificial intelligence. Yeah, yeah. On the surface level. I mean, who knows? Maybe I'm being too idealistic about it because I just always assume, again, I'm not as informed as you guys are about this at all, but I just assumed maybe too idealistically that the whole goal was to make something that is less flawed and better than we are as humans.

Oh, that's a big goal. Yeah, it's supposed to. It's supposed to. I think on paper, that's the idealistic goal. Whether or not that's true, I don't know. I don't think anybody would know. But that's my standard for it. Sure. I don't think OpenAI or Anthropic are trying to make something less flawed than humans, by the way. But there is a topic around

how to work against it and how to protect it. And that's why there's a debate around closed source and open source, right? So platforms like Gemini and OpenAI, and actually, let's say DeepSeek, right? Because they have the .com version, and they have the open source version. The .com version is obviously geared toward whatever their national interests and values are. So you cannot ask certain questions on that topic.

But...

the open source one is very open. You can disable all the knobs and let it say whatever you want it to say, you see? And that's the beauty of it: the tension needs to be there, because then somebody else can create another one, another DeepSeek version that's more, say, Sri Lankan, or more Malaysian, you see? And you need these things, because those countries don't have the money to create these big LLM models,

but they're offered a big one, and they can tweak it to their own preferences. So in a way, the word bias, for me, is tweaking. That's how I rephrase it. So how can you re-tweak it? And what's the right and wrong tweaking? There isn't one, because there isn't right or wrong bias. And to say we want to make it less flawed than humans, that's an interesting topic,

because I hope there will be companies that go and try to develop these LLMs in hopes of understanding how humans are. Because if you go back five or 10 years, the LLMs were created to understand neurology; they were created to understand the human mind. When they created the first neural networks, it was to copy how the brain works. And then everybody thought it was flawed,

until data was pushed and GPUs were pushed, and they realized: oh, it was not flawed, it was good. It just needed bigger scale. And that's why we were talking about, oh, we need hyperscalers. We need the big servers to create this big AGI, ASI, whatever. And then we need to get there. When we get there, hopefully it will be, like you said, toward understanding something that's less flawed than humans.

But sadly, it will most likely go toward making a machine that captures more value and more of the economy and creates more GDP. Yeah. That's what I was alluding to when I kept saying, on a surface level, for common people like us to understand: it's supposed to be unbiased, it's supposed to be open, et cetera, et cetera.

But we're not in control of these AIs. We're not creating them. We have no idea what's going on behind the scenes. And when it does get to the level of, let's say, an AGI where that power is so powerful that I cannot even comprehend how a human could use such power, especially a common person on the street.

No way. No way. No way. They're going to put fucking parameters on that. There's no fucking way. Because it's such a powerful tool. Yes. No way. It's like someone picking up Thor's hammer. Like everyone's walking around with Thor's hammer on the street. You think the people in power are going to let that happen? They're like, what am I going to do? No.

Honestly, right? Because right now the barrier is you can't pick it up, right? Unless you are the chosen one. Yeah. But if you eliminated that barrier and everyone could just pick up Thor's hammer, we have a big problem. Not to be a doomer, but honestly, think about it. Right now, the LLM, the multimodal ones that create images, that create videos, all this stuff, they're not at the level of perfection. They're not at the level of, oh my God, I could just sit back and just let it go. Nowhere close. None of them. Yeah.

But if it does get to that level and every single human being was able to use it to their whim, what kind of chaos is that going to be? Is that where government regulation steps in? Because that's the only power that can really... Government? I mean, who knows? Government ain't making AI. Private companies are making AI. Yeah, private companies, but you're going to have to... There has to be a regulator. Otherwise, a private company is just going to run wild. Look what's happening in America. Yeah.

I mean, okay, let's... So do you think it's going to go the way of social media then eventually? 100% sure. 100%. Because social media wants content. There's not enough creators in the world to make enough content to feed everybody to just be on this all day long. There's not enough. So the same problems or similar problems that we see with social media in terms of its effect on society, even our own health and children's health, you see that

You see AI kind of following that same path, falling into that same predicament? So here's a sad point. Europe is very far ahead in regulation. Sadly, that means none of the big LLM makers will be in Europe now. Okay. Obviously they're in the US, because there's barely any regulation for them, and there's so much capital for them. Now, what's very interesting: if you guys know, for the past two years, all the LLMs in China were heavily regulated. But then when DeepSeek came out, well,

nothing happened. Nobody made a comment. Nobody restricted anything. And they just said, well, within China, if you use it here on the .com, it behaves this way. But outside of China,

Whatever. And they have no comments, which is amazing, right? So this is something interesting that's happening in the dynamics of the whole regulation world, right? But putting that aside, how will AGI or how will AI or LLM evolve is heavily dependent on users and how money shifts to it.

And that's why it's interesting that the whole open source is there to challenge the closed source, right? If it was up to open AI, most likely they're going to go, they're going to charge more. Most likely they want to have a marketplace. Most likely they want to have an OS. Most likely they want to be on everybody's houses so that you can control everything in your houses. Create the whole ecosystem. Most likely they want to do that because that's what Microsoft does, right? And then they're fully invested in it. Now,

It's not to say that they're evil, no, because they want to generate revenue, but it's to say that other ones who have open source have the capability to at least challenge them. So that when they have a feature, then open source can have it and they give it for free. Let's just stop for one second. Are you caught up with open source versus closed source? Yeah, yeah, yeah. Well, on a very surface level, but yeah. Okay, cool. I didn't want you to...

Like swimming in open source, closed source lingo and you have no idea what we're talking about. Don't single me out here, man. Do you like Transformers? That's your favorite movie. We're all friends here. Wait, so then in your opinion...

Because I know a lot was made of the whole deep seek thing once that dropped. Yes. How big of a deal do you think that was, that moment? In terms of this whole open source versus closed source and the ramifications of that and having a challenger. Yeah, we could definitely talk about that because some time has passed already since the first deep seek R1 came out. Yes.

A lot of stuff has happened. Let's talk about that a little bit. And it's opened up eyes. Like I said, there was always something, to me, about people who want to take risks, right? Because before, they're like, oh, I have to have the best GPUs, I have to have the best system, I have to have the best team to do it. But they took a completely different route. And...

And in a way, all the embargoes, all the blocking that America has done against China, blocking the chipsets, the VC money, and all that, has not slowed them down. In fact, it has created competition, which is in a way good for the whole world, right? Short term it's good for the US, but long term it's good for the world, because you create competition, right? And I'm hoping more competition will come out of it. And

in a way, let's roll back to a topic that's maybe more concrete. Let's talk about creation of content: short films, logos, images. Okay. Let's bring it to that level and say, today, with one prompt, I could make the perfect logo for your company. And I could make the perfect brand video for your company. And not only that, I could make a million of them. Would you be happy?

As the buyer, as the customer? Yeah. Yes. And that's a challenge, because Howie and I both know the answer is no. Because you go into the meeting, and the client's like, oh, I want to pay X dollars. And in the second meeting, without even giving a pitch, it's like: here, here it is, 100 of them, and you'll love it. It's perfect. And I've tested it. The client will be like,

always: no, I don't like it. I want this. I want that color. I want to tweak this. What was your thinking behind it? You see, there's a journey toward making a brand a brand, and there's a journey toward making a film a film. Can you imagine? I go on Netflix and I want a chick-flick Transformers movie, and it gives me the best, most amazing movie right away. And then you're going to comment on it, and you're going to do the next one, I want another version, and then another version again. And then what?

You see, human nature: you will never accept something that's just right there. - Unless they don't know that that was your first try or whatever try, right? Like unless they think, oh, this took a lot. - Unless there's a story behind it. - Yes. - There's a narrative of how it was done. - Yes. - Right? - Yes. - You see? - And they were able to witness that journey, right? So to speak. - In a way, it's funny. And I keep coming back to one of the episodes, the one with Anna, about observation and quantum physics. And it's very funny.

When I was listening to that, because it's funny how if you observe the journey of the creation of the movie, and if you observe the journey of creation of the logo, you will appreciate it. You're more bought into it. But if you don't observe it, then it could be whatever. Yeah.

And then you're like, well, then why? Then you question it. You see? It's a funny thing. So that's why I feel, coming back to AGI and whatever: even to fathom that the system can give you an answer to a question right away, and the answer is golden, humans will reject it.

No matter what. Whether it's deterministic or probabilistic, on topics of culture and taste, for sure they're going to reject it. Because they're like: why would you say the Chinese dumpling is better than the Korean dumpling? Why? You see, there has to be a story. So that's why I find that

if we have a system that creates AGI, but in return helps you understand human behavior, then yes, I would hope we'd create an LLM and a company that wants the human version that's less flawed. But sadly, Howie and I both know that we're all driven by revenue and money. Yeah. At the end of the day, right? Am I too naive to think, oh, this time money is not going to be the thing? We hope so. You know? But it will always be the thing, right? It will always be the thing. Well, yeah, I...

I don't know if this is staying in line with this or not, but you have a lot of people out there that constantly speak about AI and its development. For example, one person I like to listen to is Mo Gawdat. And, uh,

one of the things that people in this sector say about AI's future, in a very positive way, is abundance. Oh, abundance. AI is going to bring abundance. Things are going to come down in cost to near zero, right?

food, production, life, anything you want, you can get. This utopia. Abundance. You have no more worries. Wasn't there like a similar narrative when like the internet was first like put on mass scale? People would think, oh, it's like utopia. Everyone's going to be like,

But anyway, this is one of the positive, I would say, dream scenarios that some of these people paint for the listeners to dream along with. The problem is that they're not taking into account human nature, and the people using these tools, or the people developing these tools. Now, I'm not saying Mo is like that, because he's actually become a doomer now, but

That's a conversation that I feel is so important that it's humans that are flawed.

We are all flawed. I don't trust us. Yeah, no, that was my whole premise with the bias thing. Exactly. And so when you say something like open source, because I'm trying to link this back to DeepSeek and its revelation and how it affected the market: the idea of open source, hopefully, is that you're going to have as many good people using it, and I guess using it for good, to combat the bad people. Because with closed source,

they may be bad people. You know what I mean? It's behind the scenes. Yeah. It's a small group of people behind closed doors doing something. Yeah. Controlling. Yeah. Right. Just like the people behind Facebook when they started creating the like button, even though they were, I guess, so innocent that they didn't know it was going to affect the common person. But you don't know what these big companies' conversations are. Now, open source, hopefully, puts it in the hands of

you know, altruistic people that are thinking on how to make it as good as possible to benefit the society, to benefit the world.

That's the democratization DeepSeek brought, and it flipped the strategies of, let's say, OpenAI to be more cautious, like, okay, well, we're going to come up with our own open source model soon, and we're going to try to create a different lingo or conversation with the consumers, for good or not.

Because DeepSeek came out. So that's what I'm trying to get at here. I want to ask something. I want to ask Ron in terms of what Howie just said. Okay. Because he gets very worked up about, thank you, he gets very worked up about, you know, it's going to be such a powerful tool, and it's going to control the world, this unlimited, unbridled power. No doubt. So, okay, that's what I want to ask you. Are you on the same page with him? A hundred percent. Okay.

So you don't think he's being hyperbolic when he says that? No. And I really want to bring up a topic that I've been trying to figure out for the past two, three months. I really think we're missing the ball here on what AGI does, right? Okay. And think about it this way. I'll give some anecdotes so that at least the viewers can follow the train of thought. So, AGI.

Web and mobile were created, and there were a lot of second- and third-order effects that nobody ever predicted, right? So the web was created, and then it was information, and you said the cost of information was going to zero. But we never thought journalism would go to zero. Okay, but sadly it did. But we really need it, right? And out of that, communities were created. But then

mobile came. With mobile came feeds, and with feeds came the algorithm, which also wasn't a good thing. Then the control was not with the creator anymore. The control was with the algorithm, because you were not purposely clicking your like to follow an influencer, a celebrity, a content creator. You're at the mercy of the feeds, right? So then, obviously, did people shut down the feeds? They could have, but they didn't.

And obviously the algorithm kept trying to get more of the power, which was a second- and third-order effect that nobody ever predicted, right? And that's the way it is right now, right? And we're kind of saddened that content creators and celebrities have less power in dictating their future, because they don't have direct communication with the fans, right? Because if the platform can remove your content, they can, right?

But before, the way YouTube was designed, they were there because they wanted to connect the fans with the creators, because you would go online just to see what's coming out next. Now, coming back to AI on this same topic: we're at the day in 2004 when Facebook just got created, right? There was a very small audience then, and there's a very small audience right now. And the reason why I say that is because,

even though we all think AGI is great and AI is great, the tool adoption within the office is very low. I would say less than 10% of people are using it more than 50% of the time. And it should be much higher. So the reason I think we're missing the point is because

we're creating a lot of things that replicate mobile, that replicate web, that replicate search, right? Maybe we want to replicate Ctrip, or travel, or assistants, whatever. But you guys all know the whole swipe thing, the whole Uber economy, and the one with the whole house thing over there.

How are you booking your hotels? Airbnb. Airbnb, yeah. That whole economy. If you went back to 2005 and said Airbnb and Uber, it's like, are you crazy? I'm going to get in a stranger's car and they're going to drive me to where I'm supposed to be? But now it's ubiquitous, right? Ele.me, obviously, is ubiquitous, right? So there are economies that were created,

and if you think about it, if all of a sudden all the cell phone towers shut down, those economies go to zero. They have zero revenue. So they're hugely dependent on the mobile and internet business. In the same way, there will be economies created because of LLMs that we cannot even think about yet. And the reason I say we're missing the ball is because we're nitpicking on topics like biases or content or stories.

But we're not finding what we're supposed to find, which is something new, on the tangent of new experiences. Or, like I said, we're in an art-creation renaissance, right? If you guys agree with me. Because, like you just said, all the movies are kind of perfect. They're kind of boring. If everything was perfect, then everything is boring. And now we're all looking for that new thing.

And that's right. I really hope that you and I, Howie, are going to find the creatives who want to take AI and push it to a new level, to create new content. Because, you know, film evolution has gone from silent black and white to sound, to color, to animation, to CGI, to 3D. And Avatar was amazing because of how it was done underwater, the way the camera work was filmed, and 4K, and so on and so forth.

But there's going to be another form of media entertainment. I think we all want that, because we all want to get rid of our phones. Was it you that talked about that? That was Howie. Oh, yeah, exactly. Right. We all want to get rid of them. A number of us want to get rid of our phones.

I'm sure 50% of the people in the world would gladly just dump their phones and not use them, just because it's so addictive. Because of the feed, like I said just now, right? The fan-community engagement, that was good. But the feed is kind of abusive. And if we could get rid of that... but then what?

We want to have an AGI personal assistant. Yes, we all want that. That does all my thinking and my planning for me, right? And makes all my decisions for me, yes, so that I can focus on what I want to focus most importantly, which is spending time with my friends, spending time with my family, spending time with my kids, right?

Teaching them how to see the world, right? So I'm hoping to see like on the whole open source, hopefully there will be some open source of new tools that will come out that will challenge the closed source obviously, but that will offer a way of living that will allow us to spend less time on our phones.

that will allow us to spend more time with our friends and family, and that will allow us to spend less money, which is not going to happen because the big companies will never allow that. But that will allow us to spend more money on something else. And this is what the companies will allow us to do. If you say to the big fashion luxury companies, oh, I'm not going to advertise on social, but I'm going to advertise on A, that's going to give me brand equity, and that's going to give me sales.

they'll be more than happy to try it. They'll be more than happy to go for it. And that thing, nobody is really thinking about it. I want to try to think about it. I don't know if you have any thoughts about that. I think the first thought, and I love what you're saying too, because I am also in that get-rid-of-my-phone group. I'm totally in that group, and me and Howie, on this podcast, have spent hours and hours of recordings talking about that kind of stuff. But

But I feel like the paradox with, I think what you said is that wouldn't the interface of whatever that new thing is with AI, whatever that new medium is, well, not just replace it, but wouldn't that interface exist on our phones? Because it would need to be on some sort of device, right? I'm just thinking practically, and that device would be our phones. So whether it's an app or something, another thing that I don't even know, like,

Wouldn't it just be in our pocket on our phones? And if that's the case, wouldn't that prevent us from being able to get rid of our phones and putting that phone down and get away from that dependency, I guess? Sure. You have to... It's going to get a bit more technical now. You have to go back into...

inputs and outputs, right? So how do we put information into our phones? It's either through text or through voice, right? And somehow the LLM is able to give us some thinking. And this is something new that we're kind of exploring, the thinking paradigm. Because before, in the past, it was just words. It was just a blue link, and I had to process everything by myself.

Now we're trying to offload the thinking to an LLM, right? And if you're with me: if we design our own LLMs to be our own assistants, you have Howie's assistant, you have Justin's assistant. You have it the way you like it, and you tell it what you want it to be, so it knows you better, yeah? Then, in a way, it can maybe start to think like you, and hopefully make decisions for you, like you would, hopefully, right? And again, on the topic of: should it make the decision about what you should be doing?

Or should they make the decision about what you want to do, which is the question about indulgence the other day, right? Yeah. I think, I mean, we're kind of speaking a little bit abstractly now. Yeah, we are. But...

I mean, I'll just real quickly answer your question with my opinion, Justin. It could be something where, and we're only saying LLM right now because LLMs are currently the main AI interface we're working with, whether it's an app or on the web. But...

It could simply be that, if an LLM got to a level where it was that assistant that is all-knowing, and we trust it, and it knows us, all that stuff, it could be one of those pins, a button on your shirt. You don't need to look at it. Right when you wake up,

I don't know, a voice will come out and be like: your McDonald's order of a sausage McMuffin will be at your door at 8:30, because we know you like that. You know? And like, yeah, perfect, great. That's it, right? And all of a sudden it's like: oh, you got 50 WeChat messages, but you probably don't care about most of them. One you should take a look at, though: Justin wants you to watch Transformers 5

with him, you know. So it'll be something like that, where you're not interfacing with your phone. You're interfacing with something else. Maybe it could literally be without your phone. It could be a device, which could be being made right now by Sam Altman and Jony Ive, who knows, right? That's looking at your phone for you, basically. Yeah, well, it replaces your phone. Well, Meta and some other brands are looking into glasses to replace your phone.

Augmented reality. Yeah, forms of that, or other wearable devices. And what I just said, which is like a pin, like the Humane pin, which totally failed, but it was just too early for that, I think. It is. But it could be something in the future, because you're not going to need to look at anything. Because if it's at the level we're kind of predicting, then you don't need to interface with anything. That thing is smart. It's another you.

It's interfacing for you, so you don't need to think about it. You have meetings set up. Oh, by the way, you have a meeting. Don't forget, this is what you need to prepare for. If you really want to dig in, go to your laptop or go to something else to debrief. I already sent you the materials. It's an assistant that knows everything about you that you could ever imagine. I get that. Going back to this idea of human nature and what's really...

profit-driven corporations, and the way our economy works, coming back to reality: yes, for sure, I can see that, and there are people working on that, I'm sure. And that's, in my view, the productivity angle. How do we make you a more productive human being? I get it. Yes. And there's a market for that, for sure. Yeah.

But then people are also going to exploit the human nature side: we also know you want your own little guilty pleasures. Yes. Because there's a reason why

doom-scrolling on TikTok or YouTube, whatever it is, is so addictive. And partially, we are also responsible and accountable for our own actions. It's not only the corporations, it's not only the social media companies creating this stuff. Obviously they're accountable as well, but we are a factor in that equation, because we are doing it. We are physically the ones doing it. And in some small part, because we kind of want to do it, right? Let's be honest. Right.

Even though it's not good for us. And so there's going to be a whole industry, arguably bigger than the productivity industry, exploiting that fact. And so this idea of a pin and this assistant is nice, but then what about all the other shit that's going to come at us, fighting for our attention and our engagement, and giving us those dopamine hits?

No, no, no. What do you guys think? I don't know. I feel like we,

If you guys want to, we can keep talking abstractly, but I think it's a little bit off the rails right now, because none of us are experts in this industry. We're not engineers. We're not product developers. I mean, I'd like to pivot a little bit and talk more about what we're actually dealing with day to day, especially with Ron. I mean, he's the VP of Growth for MediaMonks, which is a great, well-known agency doing cutting-edge stuff.

And they deal with technology all the time. And if anything, they've been industry leading in some ways.

particular campaigns. Thank you. Right. And so I would love to kind of pick his brain a little bit and let him share some of his experiences and ideas about that, because that's concrete. That's something that we can actually put our fingers on right now. Okay. Let's talk a little bit about that. Let's do it. Let's do it. Thank you. Yeah. So let's hear it. And I want to try to finish with family, because that's a dying topic. But, uh, yes. Um,

Two years ago, when ChatGPT came out, Monks declared, or said publicly, that we will be full-on AI. Oh, really? I didn't know that. 100%. Oh, by the way, let me just... I forgot, you guys changed to Monks. You're not MediaMonks anymore. Exactly. Okay, sorry. So what do you mean by "we will be full-on AI"? What does that mean? Because the first thing that LLMs challenge right away is creativity. Okay. Right? And it's ironic because it's just words and images.

But then it challenges a lot of the things that human nature loves to see, which is creative content, text, images, right? And, to follow up on the point, social media runs on content. We need more content to engage people, right?

And because of that, we've realized quickly that some content will quickly go to zero. Cost of content, cost of production, right? And we are historically always been a digital content production agency, right? And if we all know that digital content production costs will go to zero, then what's the value of it?

And then there's a lot of questions around, like: is the time-and-materials way of quoting, like the consultants, where you charge by the hour, going to be obsolete? Should you charge by assets, at a cost per asset? Howie knows it goes from anywhere between $100,000 to $10,000 to $100, and soon it's going to become dollars and cents, right?

So how can we fight that? And how can we overcome that? And then it comes back to the topic of, I mentioned at the beginning of the show, of culture and taste. Because brands don't buy content. They buy branding. They buy whatever elevates their branding equity.

And the brand equity comes back to what you said just now is like, I need to be shown the journey of why you create this story and what was the narrative behind it. And then does that relate to us as a brand DNA? And does that relate the values to our consumers? Right. And then connecting that is something that an LLM will never do. Right. Well, never, sorry. In its current form. In its current form. Yes. Because it doesn't understand taste. It doesn't, it,

Or resonance, right? Yeah. It kind of does, but maybe not to the level that we can appreciate yet. Maybe it's not really understanding. It's just mimicking taste. It's mimicking what it thinks it is. It's not truly understanding. What do you think, Ron? Well, it's more about like if you take Nike, Adidas, and all the other big sports brands like Salomon,

And if you ask it to create content, it will not know the difference. Right? And these are very, very small, minute nuances. Oh, that's a great example. Yeah. Well, hence the dumpling example. Yes. Right. I mean, the dumpling is more generic; this was more concrete to the brand. Yeah. And the brands are very specific. Like, my pants are different from your pants because of these elements, right? And they have very specific messaging because of the price point and the celebrities that I endorse and the content that we create, right?

And this is something that LLMs have yet to crack, although we are trying to crack it. But you see, these kinds of questions are what the brands want answered. They want to have, like: well, instead of creating a Howie AGI and a Justin AGI, we want to create a brand AGI that understands my brand. But then the question is: well, what are you going to do with it?

Are you going to do mind control of the audience? Can you create the duplicate of the world and control them? Of course not, right? But then can you create a brand that recognizes us? And a lot of the companies are like, well, no, I have all this team behind me that already knows my brand and my brand content. So then what's the point? And then that's what we're trying to challenge now is like, we're trying to figure out what's the point of doing all of this. Because

Thankfully, brands are really into putting the human into the motion, right? So a lot of brands, especially the beauty brands, they want to have human beauty, right? Not AGI beauty, right? And that's been a big topic underneath that hasn't surfaced in the PR. But hopefully the world will appreciate that brands really challenge these questions, right?

And then if you're faced with a brand that says, well, we don't care. We just want all the AGI models that you want. Then most likely the world will react like, oh, why are all the models AGI and fake?

Because today is very obvious for us because we see it every day. We see photography and we see AGI. I can spot it very quickly. Maybe for the newer person, they cannot spot it, but after a while, they will figure it out. Or a comment will come out and somebody will confirm it. And then because of that comment, it's going to have negative impact on the brand in the long term, right? So there's a lot of big topics that's coming out. And then we are fully embracing it. Some agencies aren't.

But for us, we're fully embracing it because we want to be open to the topic of what's the next stage of media? What's the next stage of publicity? What's the next stage of content, right? And then we're creating a lot of content. We're creating a lot of tools in a way that we try to predict, right? And it comes back to the topic before that if you know how to predict the perfect movie, and we actually kind of do, right?

But then human nature is like, once everything is perfect, then nothing is perfect. Then you kind of like, this is all bad. Well, perfect loses all meaning at that point then. Yes, exactly. And then that's why AGI will never be able to catch up. Unless the day that you could replicate the whole world, every human, and you could simulate it. But that would take like, well, I mean, today would take too much money to do that. Maybe in five years from today, it will be another story.

But this is what I enjoy about MediaMonks. We ask these questions internally. We ask how should we reskill our team? We ask how we should hire the new team members that are coming in.

We challenge the new juniors and the interns on how they're thinking and how they're using the tools. Because if you want to be on board with the monks, you have to be on board with this journey, right? And if you don't approve or you don't like how AGI is, then obviously it's not your cup of tea, right? So is naturally the demographic of...

team members you have at Monks, is it skewing younger, because they're maybe more nimble with this and more in tune with these new technologies? - It's hard to say, because of the 10X thing. And that's the biggest challenge that we're having.

When we hire an intern, we expect them to be excellent, expert at the AGI tools, right? But then my 10-year coder takes it and, oh my God, he does things that the junior would never even think about, right? How he plugs all the APIs together, how he creates a structure, and bam, we're doing things that we've never done before. But recently we've been doing a lot of five-day sprints, right?

And it's called a hackathon at a startup. But for us in the big corporates, it's very rare to do five-day sprints. And now we're doing it very often. Wait, what is a five-day sprint? A five-day sprint is basically: you take a typical photo shoot or a physical video production, which would usually take four to six weeks, and we want to do all of that in five days. Oh, you mean from pre-production all the way to post? Correct. All using AI? Yeah.

whatever you want. - Okay, it just has to be done in five days. - Exactly, that's the only restriction. - Oh, interesting. - Obviously you'd end up using AI, right? But that's not a requirement. You can do whatever you want, right? But there are things that we created that, if you were to do them the traditional way, would take eight weeks. If you do it our way, it takes five days, but it requires more people, right?

because it actually requires three times more people than the traditional way. Because traditionally you had a lot of time, you had a lot of storyboarding, you had a lot of thinking, and the same person could do multiple roles. But now that you have five days, they have to do a lot of things in parallel. Plus, the other challenging thing that I've been trying to figure out and trying to promote, but it never happened: I really want to have people who have multiple disciplines. You have to be a storyboarder, a sketcher, and a storyteller at the same time. Jack of all trades. Which doesn't exist.

But I've been preaching that for like the past 15 years. It doesn't exist in the creative industry? I would think it would, no? What do you think, Howard? No, I think it exists. I think that type of jack-of-all-trades becoming more and more valuable to people has been a conversation that's been gaining more traction in the last 10 years, I would say. Five to 10 years. Because the only reason I say that is because I used to always relate to that. Because I used to call myself a jack-of-all-trades kind of person.

but master of none. And I hated myself for that. But I think that now that the conversation is becoming a little bit more about jack of all trades, I almost feel like I'm not jack of all trades enough anymore. And so I get it in terms of the market that we're in, the jack of all trades, the value that they provide. If you want to find somebody that knows a lot of things or a lot of platforms, a lot of languages, you can find that easily.

Or are they good at it? That's the question. That's the difficulty. Yes. Right? And I think that I have a conversation all the time with friends back in the States about these are older directors that are losing jobs to younger directors that win the bid because they say, I will do the compositing myself. I will do the edit myself. Yeah.

I'll even shoot it myself. Yes. And that way you save cost and I'll just do it all. And they're young, they're right out of school, they've learned all the tools and they're good at it. Yes. And so how does one who's older, who's a little bit more of a traditionalist,

compete against that. And that's a conversation we have all the time. And so that goes back to you, Ron. Yes, I agree with you. You need to find these days, if you're looking for young people and to all the young people listening right now, become a jack of all trades. Don't limit yourself to one

you know, avenue of information. We live in an information age. This is the time to just get yourself educated on many things and get good at many things. Yeah. Use that time. That's probably a trend happening not just in the creative field, but, like, across the board. Right. I hope to see that. Yeah. So,

So traditionally, I've tried to hire coders, and they used to be front-end, DevOps, back-end, and DevOps and back-end have a lot of different layers of differentiation, and front-end as well; there are different levels. But what we always try to hire is a front-ender with some creative aspiration: they know Flash, they know design, they know animation. But the great work that I've seen came from those front-enders who go deep into photography.

And wow, their prompts are amazing. The way that they write their prompts, the way that they're trying to get the light, the emotion, the eyes, the color, the direction. And it's that level of precision and that artistic level that, when you see it, you're like: wow, right away, I love it. And it feels like zero prompting

for him. But if you look at it, it's like, wow, that took... it's really a chapter. He wrote a novel in that prompt, right?

You see, a zero-shot or zero prompt means one time. Yeah. You say something, and it comes out, and it's what you want. Yeah. So that's the thing that kind of dictates the intelligence of the LLM you're working with. If you have to keep reiterating and, like, modifying for it to get it... Yes. That's one shot, two shots, three shots, et cetera. Yeah. Just like when I try to tell you something, I have to say it so many times and repeat it so many times.

So I totally understand, right? I can't just tell you something and you get it. I got to keep reiterating. I get it. I get it. I love that. Touché. So these are people that we have found, by the way, and I scrape them on LinkedIn. And by the way, if you post your portfolio and I look at it, I will hit you up, I'll talk to you: hey, what's up? How'd you do it?
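In the standard usage, the "shots" are the worked examples packed into the prompt, not the number of tries. A minimal sketch of the idea, where the `build_prompt` helper and the `Input:`/`Output:` format are purely illustrative assumptions, not any real API:

```python
# Hypothetical helper: assembles a prompt for an LLM.
# Zero examples -> zero-shot; one example -> one-shot; several -> few-shot.
def build_prompt(task, examples=()):
    lines = []
    for inp, out in examples:  # each worked example is one "shot"
        lines.append(f"Input: {inp}\nOutput: {out}")
    lines.append(f"Input: {task}\nOutput:")  # the actual request comes last
    return "\n\n".join(lines)

zero_shot = build_prompt("Translate 'dumpling' to French")
few_shot = build_prompt(
    "Translate 'dumpling' to French",
    examples=[("Translate 'tea' to French", "thé")],
)

print(zero_shot.count("Input:"))  # 1 — just the task, no examples
print(few_shot.count("Input:"))   # 2 — one worked example plus the task
```

The stronger the model, the less hand-holding it needs, which is the point being made here: a model that nails it from the zero-shot prompt feels more "intelligent" than one you have to keep feeding examples.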

And within five minutes, I will know if you actually did it or not. Right? Just with factual questions, like: how did you do it, give me your journey. And then from the way that they talk about it, you kind of know, right? So that's the whole point about the jack of all trades. And then when I talked to them, like: how'd you get into that? Oh, I started my career as a film photographer in a small studio, but then I got into Flash during that era. So I did that and I got into coding, but now I'm back into film, because nobody wants to pay for my code, because they were

doing shitty code. But now if you're a coder and you know film photography and you know how to prompt, wow, the quality that you get has such value, right? Just because of that. So that's why I really want to amplify the fact that those people who have these different kinds of interests, who always feel like, oh, what am I going to do... it's a very big question, but hopefully now, to your point, hopefully we're going to amplify multidisciplinary

people's thinking. And that's really something that I like about this show, because we need to amplify curiosity.

and on the human flaw thing. The way to combat human flaws is by being curious about why you're weak, by being curious about what you want to develop, by having that curiosity of: I want to go into this area, but I don't want to invest five years. I just want to invest five months, or five hours, or five minutes, whatever. I want to go into it. If I love it, I'm going to do another five hours. And that incremental gain, right? I mean, compounding. I love finance.

Compounding is such a huge power, right? And I think, Howie, you agree that the five minutes you spend every day after a year compounds to nothing, but maybe after five years, oh my God, you have something interesting. I do want to reach out to these people. And if they're willing to gamble and they're willing to go into it with us, reach out to me or Howie, and then we're going to do it, right? And other disciplines, yes, you should, right?
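The compounding point can be made concrete with a few lines of arithmetic. The 1%-per-week improvement rate below is an illustrative assumption, not a figure from the conversation:

```python
# Plain accumulation: five minutes a day for five years.
minutes_per_day = 5
days = 5 * 365
total_hours = minutes_per_day * days / 60
print(round(total_hours))  # 152 hours of practice banked

# Compound growth: skill improving 1% per week for five years.
skill = 1.0
for _ in range(5 * 52):
    skill *= 1.01
print(round(skill, 1))  # 13.3 — roughly 13x the starting level
```

The raw hours look modest; it's the multiplicative version, where each week builds on the last, that produces the "oh my God, you have something interesting" effect after five years.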

I fucking love that. I love what you just said. I love what you said just a little bit earlier. I like when you said it, like I just smiled from ear to ear. It's like, like fighting, like how do you fight like human flaws? Like through curiosity. I love that. Yeah. Because I think, I mean, it's not an end all be all and fix all, but it's like, as long as you stay curious, you're kind of battling this idea of human flaw because we're kind of flawed in the sense that we're

When we don't get curious, we just stick to our ways and- And we get lazy. Yeah, we get lazy. I didn't tackle that thing you said just now, but I really want to fight laziness. Yeah, yeah. Despite, like, okay, with the jack of all trades thing and being multidisciplinary and having a lot of experience to draw on, does it all, at the end of the day, right now, filter down? Is the new competitive battleground-

just who is better at prompting? Does it kind of boil down to that? Like, who can have the better prompts or the most effective prompts? - Yes and no. Yes and no, because there are platforms that will pre-make your prompt for you, right? So it's as simple as: I upload two images, like glasses and a model, and I'm going to put them together. And I put in another image of the brand and zip it up.

And it's pushed all three together and put out an image of the model with the glasses, on brand. I'm like, oh my God. But it's probably going to be subpar to do it that way, right? Just doing what it's providing you. So is the real value, as a human, being someone who can really understand

what the mission is and what the goal is and what the intended outcome is? And they have a way to create these prompts to really pull that out. Is that where a lot of the skill-set value is right now? So it brings back to a topic that the amazing founders of LLMs started off with: right now, LLMs and AI are not there to create end-to-end

actions. They're just a single task in the whole journey of what you said just now. So they can put all three images together, but then what was the point? What was the idea? You have to have an end-to-end, right? So whether you're good at this task or that task, great. So that's why, those jobs in the past, like visual designers, storyboarders, you actually had people just doing that.

those jobs will obviously become obsolete one day. And I want to call it out. So if you want to be that, please don't. Because there's no value, right? And that's why you should learn LLMs, because you should know where the value is and isn't, right? So where the humans are,

One scenario that somebody mentioned, I'm not too sure where, is like, and it's not going to happen, but: will we all become managers or orchestrators of AGIs, right? We would just orchestrate a lot of team members. And that just challenges the fact that everybody is a manager, which is not true. And everybody is a leader, which everybody wants to be, but it's difficult to lead a group of people in a direction, right? Yeah.

because you have to have a lot of motivation, you have to have a mission, and there's a lot of things you have to have. So that vision of having humans managing a lot of AGIs would be highly unlikely. Now, the fact that somebody said there's going to be a CEO, there's going to be a lot of middle manager AGIs, and there's going to be a lot of people just doing the work, that's a possibility.

Because middle managers are, in a way, a drain on the P&L, sad to say, and I'm one of them. But I want to admit that, because some days I just feel like we're not there to argue about what to do. We're just there to argue about who's going to do it. And there's just a lot of time spent on that.

And then wouldn't it just be better to just have a lot of AGIs battle it out, and then figure out how to best allocate your resources? - Couldn't a possible future of working be this: if I was the startup owner of a company, a leader,

I talk to my AGI, ASI, whatever, this all-knowing entity of artificial intelligence. And that artificial intelligence will then work with the hive of narrow AI to make everything happen. So basically, you work as a CEO in a way, and you speak with your second-in-command.

And that second in command will make everything happen for you. You don't need to worry about the nitty gritty and the details. So even the prompt engineering idea, which is a role right now, a prompt engineer, it's a, it's a job which was created through artificial intelligence. Yes. Uh, uh,

It may become irrelevant because the AGI doesn't need to be prompted, like engineered. You just tell it. Like, look, I got this new drink. I got to sell it. What are we going to do? Create a plan and make it happen. I don't even need to tell you. You should know.

What to do. Right. Make it happen. Right. So technically that's supposed to be the level of intelligence that the companies are aiming to get to. And then that one AGI that you're talking to disseminates into the hive of narrow AGIs, which are experts in certain fields, of marketing, of design, of whatever, and they all make it happen. And then that AGI is supposed to quality control and all that stuff. And then finally present:

Here it is. Should we go? Should we do it? Are you ready? And so, in terms of a company that, as you just said, a few years ago declared it was fully integrated into AI, how do you run a company when you know that eventually it's going to be that one person, that one AGI? That's inevitable, apparently, right now.

So that's what I'm curious about. - Yes and no. It is inevitable. It's like almost like- - It is inevitable, but it's not, right? Because who knows really? - Well, inevitable in what kind of timeframe, right? I think the timeframe matters. - Well, the other inevitability is that there will be a one man show that will overtake Google. There will be a one man show that will overtake Amazon, right? Would that be a good thing?

Who knows? I mean, if we're going to have millions of Amazons, then what? Right. What does that even mean? Because then everybody is their own solopreneur. And then we're all going to be so disconnected, because everybody is going to do their own thing. Yeah. But even that is, it's,

Because you are so ingrained in keeping up with technology, and because that's your role, you need to know what's going on. The constant innovations just keep coming out, and the constant upgrades. And you said that your anxiety level is pretty high. How do you do your job every day? That's all I'm trying to... like, what is going on right now in your company? That's a good question. Okay. And then sticking to the now, which is really annoying, because

Some of the tools that we released six months to eight months ago are already obsolete through AI Studio Gemini 2.5. Gemini 2.0 wasn't, but then 2.5 is like, wow. Which just came out recently. Just came out recently. And I'm like, wow, that came out. And I'm like... It's impressive. It is really good. Yeah. Good quality. Yeah, very impressive. Right. But then, yes, to your point, I know that...

six months from today it's going to be a normal thing, right? So it comes back to: what are we trying to tackle? And the question that I keep on asking my team is that I think the goals we're setting for ourselves are too low. Because the platforms, and I'm going to call it out now, I'm 100% sure that the big platforms like Ali and Tencent will want to take over our jobs.

They want to take agencies' jobs, sadly for us. Really? They want to. Why do you say that? Because it's a part of the revenue that they don't have, right? They charge money to post ads, and then the clients push money to creative studios to put the content in the ads.

And then for them, it's like, well, why don't you just give us the money? - Oh yeah, yeah. - Right? Because Tmall does it. Tmall and TP vendors. - For sure. It makes total sense. - Total sense. - Even the creative part, I can totally see them doing what YouTube did, like democratizing it. Just like Canva, right? Like the website, I use Canva to create like thumbnails and little things. Like they're gonna wanna do that with the creative stuff. Like all the creative stuff. - They want the revenue, right? 'Cause they control the media, they control the exposure, they control the audience.

They will eventually want to do it for sure. You know, it's funny. I'm going to pivot for one second, but it's all based off of this conversation so we can come back. It totally makes sense. I never thought of it that way. Good. But I'm going to relate it to my industry where you have Focus Media. You know Focus Media? Yeah. Okay. So Focus Media is a digital media platform for like elevator ads, you know, when you're in an elevator, taxis, you know, those videos that get played.

So I did a project recently, and the client was working very closely with their main dude. I'm not going to say the name, but everyone knows who he is. And yeah, so he was basically, like, brainwashing the client: okay, so if you want to buy the ads, which you should, because our penetration is so big, you need to do it

at this length of time, you need to do this style, you need to do it this way. And it's really cheesy. That's why all the elevator ads sound the same. Yes. And he's like, it needs to be like this. And if your production agency and company don't get it, just give it to me. I'll do it for you. There you go.

And so when I heard that, I was like, ooh, he's trying to eat that cake too? He's trying to eat that pie too? Like, why even go to anybody else? Just come to us. He's trying to take some nibbles out of that. Right, because he's like, we are the experts of this. Yeah. And we have internal production team to do it for you. Yeah. Much cheaper than your traditional production company. Why wouldn't they, right? Right? And so when I heard that, I was like, damn, that's brilliant, right? But...

I can relate it to what you just said. Yes. If these conglomerates are, they own these platforms. Yes. And like they can easily say, yeah, just give it to us. Yeah. Once the means open up, there is nothing stopping them from taking everything they can take.

Once it's available to them to do it. The only thing that was ever stopping them was that it just wasn't conveniently available. And once it is, of course they're going to do it, because they can make more revenue that way. - Yes, unless it's not part of their DNA. So just to give you a tangent,

If it's not part of their DNA, then they won't touch it. So there are some platform partners where we know it's not part of their DNA. Well, that's what they say now, right? It's a risk. But that's the reason we're partners. Like, okay, I could safely say that NVIDIA will never want to take over agencies' work.

Right? They can because they're doing it, obviously. They have all the means. It's like, we got the processors and we got the technology. Right? Yeah. You see? Okay, but let's just take it another extreme. Nvidia and Steam and Epic Games, maybe Epic Games want to take the whole value chain.

But Steam doesn't want to, because they just want to focus on their marketplace. They want to foster the developer market. But once a competitor starts doing it, they might end up doing it too, to stay competitive, even though they didn't want to do it in the first place. Correct. So dynamics happen. Whatever happens, it's going to happen. Maybe, maybe not. But there are some, to your point, there are some partners that, if you spot that, you should watch out for. It makes sense. Because it does make sense if you were to stand from...

a profit margin perspective. Yes, right. Yes, you're looking at the books. It makes sense. Control. Yes, there's control, man. I get it. Well, I never thought about that. That's interesting. So, coming back to the question, what I'm going to challenge you with is: you're not thinking big enough. You're not thinking about the big questions. Because we all know our clients very well, we all know what brands want, but we're not tackling problems that are big enough that the platforms will never tackle them.

Right. We were trying to tackle product, model, scene, put it together with brand. We're trying to put together. We have an agenda for whatever. It works fine. Great. But I figured out actually most likely a year from today will be obsolete.

And that's why it's important to talk to people with different disciplines, because that's when it comes back to my topic before. There are some things that the LLMs are very challenged at, which is cultures, nuances, biases, taste. It's very hard to make it

Because when I give the same platform to five designers, they come back with content, and then we select the content and give it back to the brand. The brief is the same, but each designer comes back with a totally different... Yes. Translation of that brief. And then we say: oh, this designer did not get it, and this designer does get it. Which is not the right way of humanizing it, right? But it,

It's true. He understood the brand. He understood the product. He understood what the client wanted, which is not what the consumer wanted. He just wanted to do what the client want. But the platform wants to do what the consumer wants or the algorithm wants, right? But there is a world where

Brands don't care what the platform cares about. Don't care what the user says. They just want to do what they want to do. Those are the clients that we want to help, obviously, right? And then we want to put designers together with them, right? So there are things that I know the platform will never do, because you need a specific designer with a specific skill set to do them, right?

And the platforms come to us and ask: can we just copy your workflow? Can we just do it better? And by all means, we give it. And we know it's going to be basic, because the designers that did it were just photographers, just visual artists: oh, I'm just going to put this together, I think it feels good. But you know it. What's the difference between Hermes and LV?

That small nuance. And why is Ralph Lauren bigger than Polo Ralph Lauren?

those small nuances that their brands know. And they hire the people who know this, and they have the consumers who know this difference, so that you pay Ralph Lauren more than Polo Ralph Lauren, and they love it, right? But you see, with LLMs, well, everyone wants that, but like I said just now, one LLM is the whole world in one person. If you want to do it, you would have multiple LLMs, 7 billion LLMs and 7 billion brands or whatever.

You see, and that's why we know that, as of today, these are challenges that the LLMs cannot tackle. It would take a lot of time, a lot of energy, a lot of money to train that into an LLM. And therefore a human, sadly, is cheaper than an LLM. So now, obviously, the job goes to the human.

The day that the LLM figures it out, then the cost goes down to zero, and obviously the job will go to it. I don't like to speak in those words, but when we're trying to hire, when we try to groom the young creators or young technicals, they want to hear things like: well, my job will be of value 10 years from today if I do this.

And these are the things that, as a leader, you need to try to predict for your team. You have to tell them, right? Like: this is going to have a lower value, so don't figure it out; just wait two months, somebody's going to figure it out. This is going to be a very important problem that nobody's figuring out, or if they do figure it out, it's only going to be 80% of it; we need to have that 20%. Then we should spend time researching and figuring it out. And these are the questions that we need to drive forward.

So there's always a risk level, which I want to come back to. I mean, it's hard for people to gauge, I guess. The answer for me is more about trying to be more informed, right? Don't foolishly be a copywriter or a UX designer, right? Because now tools can do UX design. I want to lead this into the next question, or conversation, I should say. Sure. You're talking about

Don't put all your eggs into becoming a UX designer. Don't put your eggs into becoming, you know, et cetera, et cetera, a coder or whatever. Yeah, so all the people listening here, let's talk about this a little bit. So we have a lot of young listeners that are either in college or just recently graduated, barely in the workforce for a couple of years. I mean, how should people be approaching work? Sure. And I want to present a caveat because...

how people react may or may not be right. And I want to bring up the topic. About two or three weeks ago, there was a student in a university in New York that published an app that helps you hack through interviews. - Hack through interviews? - Yeah. And the interviews are through the big corporates, the Meta, the Google, the Amazon. And they're really hard questions, technical questions.

And the app, what they did, it was able to do an interface layer on top of your screen so that if you were to share your screen, the interviewer would not see what you have, but the layer gives you all the answers to the LLM. I think I saw this on YouTube. There's a YouTube video about it, right? Yes, exactly.

And I did not like the way that the school responded, meaning that I don't agree with it, because of how they responded. I think they threatened him, or they did push him out, and they were going to expel him. But the fact that he created an app that was able to hack through the system and

And true, maybe he lied, because through the interview he did use another system. But yet when he was gonna go on a job to do the internship, he would be using the system anyway, because how can you code without an LLM anyway? And then it goes to show the beauty that he was able to create this app

that on top of that, he's able to make revenue, right? I think he's able to make a hundred thousand. These companies should hire him if anything, just like, you know, a lot of agencies hire hackers who hacked into their systems, because they know how to hack into systems. So you hire the guy, you want him on your team. So that's why I did not appreciate the response from the company that did not hire him and the response from the school that blocked him from finishing his degree, because he already had that intuition of creating this thing. And then, and then this is why, when I interview, I always go

through the human feeling of asking you questions about what you've done. Like you said, you're a very specific human, you have a very specific mind, and based on your human experiences, this is what you are today. And I'm just gonna ask you questions about your human experience and then just your thought process, right? So in the corporate world,

There's a lot of positions that are quite useless, like mine, being middle manager and communicating and arguing. So, and I really want to focus on what creates value. And actually, I was just challenged about that last week and the week before. What will create value for the team, for the office, and for the business, right? And as a young audience, you're just at the team level. And

And it's hard for me when I give tasks to some of my team members and some, I mean, I guess it's a bit skewing towards the proactive people, but some, when you give them step A and B, they give you F and G and some just give you only A and B and some give you A, B and Cs, right?

I guess it's a question of passion. It's a question of, do you have enough time? It's a question of, do you care? Right? It's a lot of that. And it's sad to say, I only skew for those that have passion. So they give me F and G, because it's like, wow, I didn't even think about this, and then you help me think beyond that. And if I bring it up to the board, I would hope to spell out his name, like, it's not my work, it's his work.

But if they love it and they bring it to other offices, then at least that secures a lot of our business that we have within the region. The big picture. You're talking big picture right now. Yes, exactly. So bringing it down to a corporate level, if you already have a job, that will be how you will try to excel, right? I think Obama said that. He's like,

If you just say things to your boss like, you know what, I'll just take this and you don't have to worry about it this week, I'll just fix it. Right. And that's one thing off the chart that I can remove from my thinking, and then come back in the week and expect something which is maybe C, which is good, but F and G, even better. Great. And then this is how people should think. So I'm trying to think as a young worker, like back then.

Where back then, it was easier for us in 2005, where we just spent more hours and they would just try to overkill the people with hours. But now you have to be more of a thinker. You have to bring more value. It will be, how would you bring it back to the board, or how would you bring it back to the client, in a way that creates more thinking but helps them make better decisions. At the end of the day, I guess I would say it's like helping people, the stakeholders, to make better decisions. Right.

And I have trouble bringing that to some of my junior team members because they're like, well, it's so abstract. What's decision making? And maybe you guys have another opinion on that. I mean, that's something I'm trying to figure out, by the way, at the moment. Yeah, no, I mean, what you're saying right now really resonates with me. And I love the whole ABC and then they give you FG thing, because I use that very same saying when I used to manage a team of people. And I know some people

don't like to hear this, especially a lot of workers don't like to hear this because they want to feel like, oh, well, you're only paying me to do my specific job. So I'm just going to do my specific job. And you're asking me to think beyond that. You're not compensating, like all these things. But I feel like in my experience, people get so caught up

and zoomed in on just like this one task that they have to do, they forget what that task is ultimately trying to achieve in the long term. If you can do that, even if you're just like even a lower level employee, you raise your value to that team. So I want to talk about the big picture thing because it may sound a bit abstract to some people. That's why I'm thinking about how to think beyond your task and

And how to help you understand, because they said, what creates value? As a junior person, back in the days, when I was thinking, I thought what created my value was my emails with all my reports. I had like 20 people on my email tag list and I had to do it every Friday because all the C-levels would see it. And I would think, like, oh yeah, they're going to make decisions based on my report, right? Which, looking back, was very foolish, right? Because we were making, like,

I think 10,000 laptops per line per day. And my report was just trying to figure out, for the week, what was going on during that time. Right. And I think the only thing that we had to figure out was not to let the line stop. Right. A minute of stopping is like $5,000. That's what it costs. And when the line stops, well, everybody knows about it; you have to go at it. Right. So,

I mean, it's hard to see what's the value at that time, but the value was more like, okay, if my job is to make sure that the line never stops, then what else? Is it about predictive? Is it about risk-taking? Is it about supply chain? Is it about helping out some information that will help my manager to spot some issues?

And obviously right now, because of the tariff laws, right? So understanding how you would mitigate your risk on stocks versus supply, towards what's going to happen with tariffs, and try to mitigate that. Yes, I guess for 25-year-olds it's hard to see, but just the fact that you want to think about it, I think that's already saying a lot. And just having the conversation with your manager, because they want to think about it, because they're obviously thinking about it. Yeah.

And it's hard to have people to talk about it with. Because in the meetings, your expectation is just to deliver solutions, right? You don't have ways to talk with your peers about, oh, am I thinking about this right? Am I thinking about this wrong? You don't have those people to think with, right? You see? So that's why as a middle manager, it's really annoying because you have nobody helping you.

And as you go higher up the line, you have even less people helping you. Right? So that's really hard. And there's like a certain vulnerability aspect to even opening up and talking about these things as a manager at an even higher level, right? Because you call it out that you have a problem. That's why I think back to Howie's question, like another advice, I mean, my limited experience, but like another advice I would give to young people is like, don't be afraid to ask questions. Because I think good managers, I don't know about you, like I love when people ask me questions. I know. Yeah.

Because it meant that they were actually trying and thinking about it. It's when you're not asking questions that I get worried. It's like, I mean, like, you know, like you're not even thinking about it. You're just a foot soldier and you're just doing what I'm telling you to, but you're not asking questions. Okay, well, why are we doing this? Like, what's the ultimate goal here? Going back to that curiosity aspect. Yes. And you're making me think of something that I really want to say. And this is why I'm an optimist of AI.

Because I feel that there's a lot of jobs that just do the what and the how, but there's not enough jobs that think about the why.

And I was really fundamentally moved by Simon Sinek on the fundamentals of why. And he compared Dell with Apple because I used to be on the Dell camp. It's like, what? You sell Apple like this? And like, oh my God, your laptop is a 28,000 and my laptop is 8,000. The Alienware, I would have top of the line and you're 28. And it's really that why that got me thinking. And I rewatched that video so many times just to really understand why.

Because I feel that the whole AI is good for humanity because it's going to pull us away from jobs that are really so annoying. And at the end of the day, a team manager will never want to remove team members that are passionate about what they do.

And I strongly believe in a concept that came up three years ago from Stanford, that there are tasks that will never be done by AI, like cooking. We all love cooking, right? And I think it's ingrained in human nature that we want to prep our food, cook our food, and feed our food to our kids in the way that we want it. Would we want to offload it to a robot? Actually, no. We want to offload it to a chef, right?

If I had enough money, right? And a chef, uh, is a thing now, actually, these days, for at-home services, right? So there are tasks that we will 100% offload, like dishwashing and cleaning my washroom, right? But cooking my dumpling, the recipe of my dumpling, right? Yeah, right. I don't envision... Maybe for my kids, or their kids, maybe they won't talk about their dumpling recipe.

But my wife is from Dongbei. They're really critical about their dumplings, right?

about the skin, right? And then Howie, I feel you're going to say something. No, no, I was just thinking, I don't believe that. You don't believe that? No. Really? No, I believe that maybe our generation would still have that emotional connection to cooking because of our parents, because of the way we grew up. Really? And maybe the current generation, but I feel like the young kids, like our kids growing up. Oh, the Ele.me generation, right? The Ele.me generation, we're not cooking. And they're growing up in the A.I.

world and they're going to grow up in the robotics world and I think they're going to... I feel that they're going to just be like, nah, it's just convenient. We're already, to your point, we're already being primed not to care with all the food delivery, right? And back to the very original comment that you made about your guilty pleasure. I don't buy that. Maybe. I mean, there's a lot of people I know that never cook. Even me, like

I worked as a cook for... Oh, really? Yeah. I went to culinary school. I did the whole thing. I worked in restaurants. Really? But even me now living here, I'm ordering most of the time. Yeah, exactly. And how many people... I'm not proud of it, but I am. And how many young folks have I talked to here, and you can share your numbers, that don't know how to cook? Honestly. A lot. A lot.

You know? Okay. And so I think, I don't know. When I heard that, I can get it, but I feel like the person who wrote it was old, you know? It's not young.

You think so? Okay. If you guys watch another lame movie, better than, lamer than Transformer, but Judge Dredd. You guys remember that movie? Judge Dredd. Sylvester Stallone? Legitimately was a great movie. Yeah. Stop it. Dude. Judge Dredd, bro? That's a cult classic. You can't mess with Judge Dredd. Classic with Transformer. Okay. Even better. Another Stallone movie. Demolition Man. Oh.

Demolition Man is the pinnacle of cinema, dude. Wesley Snipes and Demolition Man. Come on! The seashells? Come on. But in that movie... Who are you two? Demolition... You can't hate on... Honestly, you can't hate on Demolition Man. But in that movie, they had two points. All the food is 3D printed and sex is over whatever VR AR. Yeah, yeah, yeah. You think that that will be offloaded to a machine?

Well, kind of is now. There's a lot of sex toys out there that are very popular. There's AI, there's AI pornography, there's VR now. Sex dolls, all that is... I honestly believe that the young generation will be so accepting of this revolution because they grew up with it. At least a percentage of them. The fact that we can even have this feeling of

being against it or this negative feeling towards this is because we have a real life comparison to how we live. We have emotional connection. Yes, I agree. So I honestly believe that our kids, when they grow up, just like, I don't remember what study this was, but you know what? It was a book I was reading. And they were saying that Gen Alpha,

Under 13 years old, I think now, is Gen Alpha, right? Sure. And when they get to the point of maturity, 18 years old, our kids, when they get 18 and over, they're going to live life totally accepting of avatars as celebrities. Yes, I agree. You know, virtual life, the idea of value. Digital KOLs. Yeah, like the value of,

Having value in a digital world as opposed to us having value in a physical world. Yeah, yeah. That same emotional connection is going to be normal for them. Well, that literally is already happening with like all these meme coins. Yes. And that's literally digital value, right? Yes. That's the literal interpretation of it. Yes. It's all happening. Yeah. And the fact that they're going to be, they may even identify themselves as their digital avatar. Yeah.

even maybe even more than their real life look and feel, you know? So I feel that that generation is, everything's gonna be normalized for them. But the fact that we can push back on it or be, or look at it, be like, Oh, that's ridiculous. Yeah. That's ridiculous. We're just the old generation because we're like, turn down that noisy rock music. And I'm still, I'm still hip enough to think from their perspective that,

So hip. We are hip. I'm so hip. We are. We're young. You know what I mean? Yes. So I'm not one of those people that can be like, oh, that's ridiculous. Because you have to be able to look from their perspective. Okay. So here's something that's going to happen. And maybe technology will resolve that. But giving birth to kids changes your life.

And my mom, obviously, she always cooks the foods that I love. And then she told me that before giving birth, she had never cooked a day in her life. But after she had us, she cooked. And then, to my wife, yes, she never cooked before, but then she started cooking after. But now she stopped. But hopefully she's going to start again. Hopefully, who knows. To the Gen Alphas, again, huge question mark. After they give birth, we don't know. We're going to know 20 years from now.

And then you realize that is such a soul-shaping kind of concept. To your point, every human has their own history. And what shapes that? Do you want an LLM to shape that?

No. See? Fundamental. The food that connected me to my grandma, the food that connected me to my mom, I'm hoping that the food will connect my kids to their grandma and their mom, my wife, right? That's beautiful to think about, honestly. Hopefully. Yeah. So will that be ever be offloaded? I definitely think no, but yes, to your point.

Who knows what the Alpha kids will think. By the way, have you seen Adolescence? Mind blown? Have you seen Adolescence? I think... Netflix? I'm confusing it with sex education. No, no, no. I haven't seen it yet. It's a whole different thing. It's a whole different thing. I gotta watch it. We don't have to go into it, but... Just answer this right now. We can talk about it off air. Mind blown?

Yup. See, it's really nice. I wasn't mind blown until the very last episode, and then I was just shattered. Okay. We'll talk about that later. We'll talk about that later. Okay. I love it. But all these things that are going to be, like, so, like, because we have some sort of perspective at least, because we've been around long enough to see when these things didn't exist and

and now they exist. We've kind of like been witness to that whole journey, right? Just going back to this whole witnessing the journey thing. Our kids are going to grow up not having witnessed that journey and just be like, oh, this is just life. This is just normal. This is normal.

So you guys are almost entering the era of the drug for the kids, which is you give the iPad so that you can have peace and quiet while you eat your dinner. Right. Can I just share something before you expand on that? Sure. I raised my kid with, I got rid of my TV. Us too. Good. So we had no TV at home. Okay. Recently, I've changed a bit. I got a projector.

And now they're starting to watch cartoons. So this is something that, this is a new step for me. This is like the step towards technology. You know? Like, we were joking around. We're like, we're not going to introduce any technology until they get 18. Yeah, you were holding strong. You had that iron curtain for a long time. I was like, they're not going to see anything until 18. No. Yeah, right? No porn, no nothing, right? They're not going to know what the internet is until 18. Wait until they get, until the toy unpacking, right?

What did they get onto, those like aimless- Toy unpacking videos? Oh my God. You're not even there yet? No. There's the other thing, if you swipe through, like, whatever kids' content, these very cheap animations that keep on rolling, these very cinematic ones, that's very addictive.

Wait till they get to that. And obviously, you're going to have to have an account for them so they can go in their history and they have to keep on erasing and blocking because if you don't do that, they're going to keep on feeding it. And I'm going to call it out right now and I know they are a client, but YouTube Kids sucks because it doesn't do any parental...

- Limitations or restrictions? - Because it does not give content that they should be watching. It gives content that they want to be watching. And I'm like, oh my God, what does a three-year-old think of what they want to watch? - Oh God, that's so scary. I'm freaking out right now. - So it's just reinforcing whatever they watch. - I'm like, you want to watch quantum physics, quantum mechanics, gravity, mathematics, all the sciences? Oh no, of course not.

But beyond the parenting topic is the next topic that I want to talk about education, right? Because before my kid was born and I have this in my WeChat, like I was chatting with my wife and obviously she was in the room next door, but we're chatting on WeChat.

And they would kind of map out like, okay, they're going to go to this school, they're going to go to that school. Yeah. Before they were born, they're going to go to this study, they're going to go in science. And this was 2015. We're going to move to the US. And then by the age of eight and nine, they're going to have Chinese and they're going to go to English and French. They're going to have architecture and everything. And then they're going to be the perfect model kid. You're going to engineer the perfect specimen. Exactly. And we have two. So if it fails on the one, you have number two. Yeah. Right.

And in 2016, Trump happened. Holy shit, what are we going to do? 2020, COVID happened, even worse. And now you guys know what's happening. And on top of that, AI makes education and information obsolete. And I'm like, oh my God, the whole plan went out of the window. How do you plan in today's age? That's the biggest conversation I have. Not conversation. I would say heated discussion I have with my wife.

The idea that

We've had this conversation just like you planned out, like the whole architecture, and we started the whole architecting from birth. Trust me, we started from birth. - Which degree, which school, everything. - Even where the birth happens, you know, like which ID are you gonna get? - I know, right? Where the birth happens? Is that even a topic? Where did they give birth? - Well, like which ID, like which country? - Oh yeah, yeah, yeah, us too, us too. - Yeah, so yeah, it went that deep.

And so the whole architecting a kid's life, you know, from birth all the way till adulthood, that was a conversation we would have. Sure. And at first I was along for the ride, you know, and it wasn't because that was back in COVID lockdown. Yes. But...

3.5, ChatGPT 3.5 moment, like you said a few years ago, it changed everything. And so right now, the discussion from my side is I don't want to hear this architecting anymore because this is irrelevant. How are you going to architect 10 years down the line? You cannot. Right.

You cannot have the same idea of, oh, you want to go to this university for them in this country? First of all, the idea of architecting a kid's life should not be done so seriously, personally. I don't think, that's just ridiculous. Not anymore. You can't architect. But even before, it's like, what the hell are you architecting? It's a human being. Sure. You can lead, be a good leader or role model, but how are you going to architect

a kid's life, right? Or you can architect in, like, very short segments. Yeah. Don't architect all the way up until he's, like, a teenager or in college. I mean, that's just his life. Yeah, his life is his life. Yeah, exactly. But anyway, so what I'm trying to say is, now my key argument is, I cannot plan

10 years down the line anymore. No one can. Yeah. Yeah. But parenting is a thing now these days. And I can't even plan five years down the line. I cannot. You can't plan one year down the line.

Think about it. Think how much change is in one year. I can plan one year. I can plan one year. But I can't plan five. You can plan it, but whether or not that plan can remain relevant is a different story. But I think part of planning is also your confidence level in your planning. So I can feel confident in one year. Oh God, am I going to backtrack on that line? But what I'm trying to get at is I can't make life-changing decisions for a five to 10 year pipeline.

And we kind of all agree that the U.S. is a very, at 2015, was a very attractive place to be education-wise, money-wise, VC-wise, stock exchange-wise. But today, especially with the tariffs,

the education has been challenged and the stock exchange has been challenged. Meaning that, will money, and this is another topic, will money flow out of America into other stock exchanges, meaning France, Germany, Singapore? Like, diversify to other international markets. Correct. Because that seems to be the trend at the moment. What I do know is that education in the US should not be that expensive.

Right? I mean, like- There's a lot of things there that shouldn't be that expensive. Right? We're all saving up for it. If my kid goes to Stanford, I will probably try to figure it out, because they have the most elite people in there and they're big thinkers. But education costs in Canada are so cheap. I'm like-

If it's not Stanford. Yeah. What's the value of such an institution? What's the value of a university if AI goes towards what we've been talking about this whole fucking episode? Yeah. Right. What's that? What's the value in it? I'm really challenging, like, how that New York university reacted to that kid. I'm like, if the kids don't go into a university that really accepts these technologies, then what's the point? Yeah. I'm like,

Because you can never chase down memory. You could never chase down logic, summarization. They will do it better than you, right? But what humans are good at is thinking, right? And yes, okay, we're going to argue that it's going to be the human thinking versus the AGI thinking, and it's going to be two values. To the day that, are we going to have an AGI value-based mindset? Which the answer is no, because there's too much culture in this world.

Because if it was for the AGI, there would just be one world and one culture. Would that ever exist? Of course not. We have too much war going on to have that happen, right? But I have no answer at the moment for that question, because we were really set on going to the US education system. Second would be the UK or the Canada education system. And now it's like, just have a good life, right? The basics, right?

You hope for them to be healthy and have a good life, right? Because at the end of the day, you guys, your kids are only two. Once you hit five... It's very heartbreaking when you see another kid that has autism or some handicap. And when you see that, you realize, like, wow, all your dreams, all your plans are like,

If they're just happy and they have enough food on the table, and I give enough money for the savings of their lives, they have a very middle-income lifestyle, it's good enough. And then it just comes down to, like, okay, I don't need that Nobel Prize. Yeah. Ron. Cheers. Cheers. Thank you so much for coming on the show, taking the time. Talk to us, man. Thank you, sir.

- If anyone wants to connect with you, how do they do that? - Oh, yes. First, reach out on LinkedIn, although in China it's hard to get, so I'll respond eventually.

Other than that, yeah, I need to start a podcast. I need to start a blog. Look up Monks. Yeah, do it. Look up Monks' website. Yeah, look up Monks' website. Is there contact info at Monks or something? If you search Ron Lee Monks, I think you'll find me, because I produce enough content. But I think the most efficient is to reach out to me on LinkedIn. Oh, yeah.

Or I'll reach out to you because you have a good portfolio. There you go. Yes. Ron, once again, thank you so much for coming on and talking to us. It was such a pleasure. You, sir. And that was Ron. I'm Justin. I'm Howie. All right. Be good. Be well. Peace.