cover of episode Ep 502: Sustainable Growth with AI: Balancing Innovation with Ethical Governance

Ep 502: Sustainable Growth with AI: Balancing Innovation with Ethical Governance

2025/4/11
logo of podcast Everyday AI Podcast – An AI and ChatGPT Podcast

Everyday AI Podcast – An AI and ChatGPT Podcast

AI Deep Dive AI Chapters Transcript
People
J
Jordan Wilson
一位经验丰富的数字策略专家和《Everyday AI》播客的主持人,专注于帮助普通人通过 AI 提升职业生涯。
R
Rajeev Kapur
Topics
Jordan Wilson: AI 的应用具有创新性和风险性,我们需要在利用 AI 技术进步的同时,关注数据安全和治理问题,避免因不当使用 AI 而造成损失。 在使用 AI 工具时,我们需要仔细阅读服务条款,了解数据在传输和使用过程中的安全性和隐私性,并关注治理问题,而不是仅仅关注技术更新和竞争优势。 我们需要在 AI 创新和伦理治理之间取得平衡,既要利用 AI 技术的优势,又要避免因忽视伦理和治理而造成的风险。 Rajeev Kapur: 企业需要建立跨职能的 AI 伦理委员会,制定 AI 模型审查机制,并将高管薪酬与伦理成果挂钩,以确保 AI 的伦理和安全使用。 AI 系统会继承和放大数据中的偏差,企业需要积极应对,通过定期进行第三方审计、检查训练集和模型、构建模型的可解释性以及与多元化社区合作等方式来减少偏差。 构建 AI 伦理委员会应该包括公司内部和外部的利益相关者,例如法律人士、科学家、伦理学家、技术专家和最终用户,以确保 AI 的开发和应用符合伦理规范。 企业应该将数据视为增长机遇,而非成本,并学习如何提炼和利用数据,同时保障数据隐私和伦理。 企业应该将数据隐私和伦理作为产品特性,而非成本,并通过提供透明的使用日志和用户控制功能来增强用户信任。 企业需要定期审查 AI 项目,并建立监控和反馈机制,以确保 AI 的安全和合规使用。 AI 治理需要全球合作,否则难以有效实施。企业需要自我监管,并积极适应监管变化。 深度伪造技术既有益处也有风险,需要进行监管。企业需要采取措施来防止深度伪造技术的滥用,例如添加水印等。 企业领导者应该以远见卓识、伦理道德和勇气来引领 AI 的发展,并保护消费者和最终用户的利益。

Deep Dive

Shownotes Transcript

Translations:
中文

This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life. Leveraging AI can kind of be like a tight wire rope back, right? Like you want to be innovative. You want to, you know, take advantage of the latest and greatest that AI and large language models have to offer yet.

at what cost? Is anyone out there reading terms and service of, you know, all these random AI tools that you and your team want to take advantage of? Do you know what happens with your data once you send it to one of these big AI tech companies? Do you care about governance or do you really just care about keeping up and getting ahead and using the latest AI update from one of the big players?

I think these are important conversations that are worth talking about, and that's exactly what we're going to be doing today on Everyday AI. What's going on, y'all? My name is Jordan Wilson, and I'm the host, and this thing, it's for you. Everyday AI is your daily live stream podcast and free daily newsletter, helping us all not just keep up with what's happening in AI, but how we can get ahead to grow our companies and our careers. So it starts here with this live stream and podcast, but

That's where you learn. But if you want to leverage it, you need to do that on our website. So if you haven't already, please go to youreverydayai.com. Sign up for the free daily newsletter. Yeah, you can go listen and watch and read hundreds of backlinks

episodes where I've interviewed some of the world's leaders on topics all over across the board. But also we're going to be recapping valuable insights and takeaways from today's conversation. So make sure you go check that out. Technically a prerecorded one. So if you're dropping in for AI news, that's going to be in the newsletter as well. All right.

Enough chitchat, y'all. I'm excited for today's conversation. So please help me welcome to the Everyday AI Show. We have Rajiv Kapoor, the president and CEO of 1105 Media. Rajiv, thank you so much for joining the Everyday AI Show. Jordan, it's my pleasure. It's an honor to be here. I'm glad I'm here and I hope people get real good value.

All right. I can't wait to talk about this. But before we kind of talk about this balancing act of innovation and governance, Rajiv, can you first tell us a little bit, what is 1105 Media? And tell us a little bit of your background as well in the AI space. Yeah. So 1105 Media, we're a B2B marketing media technology company. So I guess the best way to describe it is we're like a Politico, but only for technology B2B.

So we do everything from face-to-face events to lead gen, to newsletters, to webinars, to those kinds of things. And we cover big data. So I have a company within the 1105 umbrella. There's a company called TDWI. That's one of my companies. It's one of the largest big data analytics and AI training companies in the country. It's phenomenal. So people go to tdwi.com and check it out. Then we have another business that does cyber and physical security media.

and marketing of another business that does enterprise technology. So our customers there are like Amazon Web Services or Google Cloud or Azure or people like that. They come to us and say, hey, we want to get more developers of XYZ. Can you help us get our product out to those people? So essentially, that's where we're a middleman that basically helps connect buyers with sellers is kind of what we do there. And we cover in one of our big partners is in terms of doing some of the things we do as Microsoft. We do a lot of things in the Microsoft stack.

And we have the largest non-Microsoft event and Microsoft headquarters coming up in August. So we'll have about 700, 800 people there at Microsoft headquarters for our VS Live event all around what's happening with Microsoft. That's great. So that's the 1105 Media site. Now to answer the question of my AI world, I've actually been involved with AI for a long time. So about 11, a little over 11 years ago, actually sold

a small AI startup in the machine learning space. And we were building AI algorithms used for audio technology.

Originally, when I kind of became CEO of that company, it was kind of VC-backed. We were building chips and processors, but we quickly found that the TV guys, the phone makers, they weren't going to redesign their boards for another chip. They wanted less chips, right? So we actually took the algorithm out of the chip, built out and basically built AI algorithms where we tested sound audio quality. We called it 3D sound. Now you hear it as spatial audio.

And so as a matter of fact, like, you know, if you, if you, if you saw over the holidays here in the States, there was that Apple commercial where the, where the daughter gets a guitar and that dad sacks, he can't hear her play. And then they give him the new AirPods and he puts them in and he can hear her. That was kind of the technology that we had built that we used AI for that. So I company got sold. And then I went, took classes on AI at MIT and I got a dual AI, dual AI certification at MIT. And,

And then here at 1105, we've been kind of covering the machine learning side of AI for the last eight, nine, 10 years. The generative AI stuff is obviously very brand new over the last two years. And so we've been all over that. And then I remember the morning ChatGPT came out, I jumped out of bed. I remember looking at my phone going, oh my God, this is going to be the

greatest thing since electricity, right? To change the world. And initially I was met with some skepticism from people, but I think I've been proven out to be right. But, you know, but literally like within 24, 48 hours, I'd say, you know, I'm gonna write a book about this. I wrote a book called AI Made Simple and it was the number one best AI book on Amazon for about seven months. It would get published in May, June of 2023, I think. Yeah. And so,

Yeah. And then I had a second edition and now I'm working on a third edition and now also working on another book kind of around prompting and AI for the executive. So anyway, so that's kind of my experience with it. And, you know, I'm a techie, you know, I was an executive at Dell Computer for a long time. So I always kind of been in the tech world my whole career. Prior to that, I worked for an old computer company you may have remembered called Gateway. So yeah.

Yeah. So that's my technology and AI background. Love it. So let's maybe skip to the end here and then we can unwrap this a little bit, Rajiv. But as we look at this balancing act, companies want to take advantage of every new model update, every single shiny AI tool in the corner, everyone wants to jump into it. So how do you balance keeping up and using all of these AI models, but with

the ethics side, with the governance? How do businesses do that? You know, that's a really good question. And I think that's an area where people right now are just learning and understanding and realizing they actually have to put a little bit of effort and energy into. Part of it is I sit on a board of a kind of ethics and governance AI company called Luminova. Basically, it's like the Watchmen, who watches the Watchmen. So it's basically an AI platform that watches AI for the most part. But if you think about

you know, how to think about ethics as, you know, in the same vein as making sure you don't hamper innovation, you got to take a little bit of effort and start creating some sort, whether it's a cross-functional AI ethics board, for the lack of a better term, like how do you grow the teams to optimize for speed and scale? But then how do you use the ethics team to protect the long-term license to operate and to provide value to your customer base, right? So what would that look like? It could include

legal people, scientists, ethicists, technologists, end users of your product, kind of like a core solution. How do you then go about mandating the review of all the different AI models that you might be using? And then one thing is, I think one opportunity might be tying some compensation to the executives based on ethical outcomes and concerns, not just purely revenue and EBITDA-based type solutions. So I think that's a good way to start. Another one is,

understanding and realizing that AI as we know it, you know it, and people listening know it, it's got some biases. And the AI system, the LLM you're using, is going to inherit and amplify those biases. So unless we're fighting it, like literally every step of the way, we're doing regular third-party audits, we're looking at the training sets, the models, we're building some sort of explainability to the models.

Is there a way to partner with diverse communities to look at what's happening? So that's kind of like I think where I would start in terms of looking at how to balance that.

So, you know, like a lot of times people in companies we've worked with, they always look around and they're like, who should be in the room? Right. Like you talk about kind of this like AI ethics board or that, you know, a team of people who needs to be around that table because sometimes, you know, people are looking at IT or CISOs and like, you know, sometimes it's just, oh, C-suite or HR, marketing.

Marketing, right? Like who needs to be around that table when we talk about the ethics team or the people that need to be involved in making those ethical decisions on AI use? Look, I think you remember what happened with over Christmas with Sam and the open AI group where you got the call to go, then you came back, you know, all that stuff, right?

If you remember that, that nonprofit board, part of their charter was to kind of be that kind of ethical board makeup, right? Now they didn't went out in this whole power struggle thing. But I think ultimately the answer to the question is that if you want to do this right, you need to have, you have to look at stakeholders across more than just your company. You know, I'm not saying you have to give these people power, but you should, you should give these people the ability to voice their,

their opinions, their concerns, whoever they might be. Maybe you have one or two frontline users that rotate every six months onto this board. Maybe you might look at, is there a technologist for maybe one of your customers? That might make some sense. Like, you know, for example, if you're opening an eye, maybe one of your biggest customers is...

make it up, like some big healthcare organization, right? Then let's get the head of HR for that group to be part of it because it bylaws and all these things. And there's so much opportunity on the medical side of AI, as you know, that might make some sense. Legal scholars and whatever. So I think it's going to be a combination of three or four core people from within the company and then probably three or four people from outside the company that can

that can come together, work with the CEO, work with the team and the board, the regular board to really understand and go from there. And so that's how I would look at this if it were me. But I can imagine that not everybody is going to be like me and all of that. But it does concern me. And I think the more you hear about deep fakes and these kinds of things, I think more

And long-term, I think, you know, I talked to you before the show started. I think the long-term winners that people could really figure out how to do both very well. So I do want to get into deep fakes in a little bit here, but I think it's worth diving a little deeper onto the data side, right? Because I think, you know, speaking of Microsoft, you mentioned Microsoft earlier, you know, their CEO, Satya Nadella, a few months ago said, you know, LLMs are a commodity, right? And I think...

We've slowly come to realize over the last year or two that using large language models, generative AI isn't going to be your company's moat, right? Like in competing with whoever else you're competing with, it actually gets to your data. So how can companies really both separate themselves with their data, but also, I mean, I think that's

probably one of the most overlooked pieces in terms of guardrails and even ethics and how you use that data. So let's talk about both sides of that. So I've spoken to probably 3,000 CEOs in the last 20 to 24 months. And one of the questions I ask them before I do my talk, I say, how many of you have a good command? Not a great command, a good command of your first party data. I can count on two hands how many hands went up.

Right. Because I think what happens is, is CEOs look at data as an expense rather than an opportunity for growth. I think they see CapEx. I think they see cash going out the door. I don't think they see how they can, how they can turn this into something really valuable. So to me and I mean to you and to probably a lot of your listeners, data is the new oil.

But what's missing is the refineries that sit on top of the data to turn it into something, right? You can't do anything with just raw oil. You need the refineries to refine it into something.

The same thing goes with data. You got to understand your data. You have to have the right practice around your data. You have to look at data privacy. And then you have to understand how do you now mine this data? How do you refine this data to use it to your advantage? Quite frankly, if you can figure that out, I'll tell you, just by doing that one step, which is arguably a little bit more machine learning-ish, probably degenerative AI-ish short term, you might actually just build that mode that you didn't think you could build because no one else is doing it.

And so if you don't have a data scientist on staff, if you're not spending a little bit of CapEx money on figuring out your data issues, you're going to fall behind at some point. So take some time and effort to understand your data. So that's where I would start first. Now, in terms of the privacy side, and another thing, the problem is that if you just do this on your own and you half-ass it, it's going to be garbage in, garbage out, right? And you're going to have to really...

understand how you can make the to me the challenge is how do you make your privacy a real differentiator now some would argue apple's probably like the the golden child when it comes to privacy i like what they're doing with them granted their solutions keeps getting rolled out whatever i'd rather them roll it out push it back and launch something apple intelligence of this shitty product but to me i think if they're the gold standard and and the llm's gonna run locally on the phone and all that there's more security

then that minimizes data collection. It gives the user a bit more control and opt-out capabilities. There's potentially the ability to really have really good transparent usage logs, I guess, for the lack of a better term understanding. So the real opportunity, I think, is how can you turn this into a feature? How do you turn your privacy and your ethics into a feature of your offering as opposed to

an expense that might cost you some money. How do you turn into a feature that helps you drive everything? So let's, let's dive a little bit more into just

governance, right? I think it's this word that sometimes it's like, oh, you're safe AI bingo card, right? I need to say data privacy, and I need to say guardrails, and I need to say ethics. And as long as I say those things, people are going to nod their head and it's like, all right, we're doing AI the right way. What does it actually mean when we talk about not just governance, but when tying in

governance in an ethical way. Let's break that down a little. Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can't really get traction to find ROI on Gen AI. Hey, this is Jordan Wilson, host of this very podcast.

Companies like Adobe, Microsoft, and NVIDIA have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use Gen AI. So whether you're looking for chat GPT training for thousands,

or just need help building your front-end AI strategy, you can partner with us too, just like some of the biggest companies in the world do. Go to youreverydayai.com slash partner to get in contact with our team, or you can just click on the partner section of our website. We'll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on Gen AI.

Look, I mean, I think it's, are you reviewing your major AI initiatives? Are you understanding your AI product roadmaps, partnerships? You know, are they, how are they being linked? You know, are you leading from a, are you leading kind of from this, I like to call this enlightened leader perspective, right? To where these types of thing matters, you know, you know, I mentioned earlier, you have a diverse group of folks in the room helping you understand and realize what type of AI you are deploying.

You know, and do you have real good explainability of your model, for example? You know, do you have a monitoring and feedback for your model? I think all those things are really there. You know, I think one thing is, I think two things. Number one is, do you have like, are you really, when you have your AI model, are you really doing your worst case testing?

I think there needs to be some of that. I think you need to add just like, I think Microsoft, Google will pay hackers to hack their software. You got to basically do the same kind of thing. I think those are some of the things I think that are absolutely necessary to start thinking about governance and how do you build it in and then

It's understanding and realizing it's probably never done. And then how do you keep iterating and learning from and going back and giving that feedback loop and mechanism? And I think those are all the things. And I hate to say it, but I don't know if companies, especially the LLMs out there, are going to really put a lot of effort and energy into this unless it's something that's being done on a global basis. Because I think the last thing they want to do is do something that's going to tie their arm behind their back.

in terms of innovation. Because if they do it, but then no one else is doing it. So for example, like if OpenAI says, oh yeah, we're going to do it, but then Grok says, we're not going to do it, then you're going to have issues. So anyways, but I think another thing here is, to me, it's also making sure that the consumer really understands what you're doing with the data. So I'll give you an example. So I'm a big basketball fan and I'm born and raised in LA, so I'm a Laker fan. And the Clippers,

open a brand new beautiful beautiful amazing stadium into a dome it's gorgeous like probably arguably the best stadium in the world for basketball and everything's facial recognition and i've talked to so many people that don't want to go because they don't want to pay with their face you know because they don't know what's going to happen with the data who's getting that data right so a small example even though if you got tsa pre-check you walk up you're taking your picture anyways and everything about where you're going right or global entry right so you know

the company and the government has everything about you anyways, probably, but there's just something there where I don't believe that they've done a good enough job of explaining why this is better for the consumer. So you have to be able to do that. So, you know, I think it's worth exploring a little bit more because even the concept of AI innovation, you know, I think it's changed. You're like, yes, we have listeners from, from all over the world, but the majority of our audiences is here in the U S and, you know, with the, uh, the,

the presidential transition, you know, things have changed drastically, right? The whole like AI safety Institute was essentially dismantled. You know, it seemed like we kind of had this like yellowish light, you know, for the last couple of years. And now it's like, oh, there's no stoplights. We're just going with innovation. We'll see if we break anything, see what happens, right? How can companies both, you know, keep up with the pace of AI, right? Which is

crazy to do, you know, and this is coming from someone that does it every day. But also, even when you look at the regulatory aspect, you look at the, you know, the federal, the government's involvement in how it's changing, right? And all those people sitting around the boardroom, you know, if you're bored, if your leadership is only meeting, you know, once a month, once a quarter, whatever it like, whatever it is, how can you keep up with the regulatory side? And

let alone everything else that's happening on the LLM, the tool side? Yeah. I mean, look, the cop-out answer is you probably can't, but it doesn't mean you don't try. And I really think the companies that are long-term going to thrive and be there are the ones who can figure out how to do both. And what are some of the things that they can do? Look, and by the way, I went to the White House about 10, 11 months ago before the election.

And I met with people from the Biden administration. I was on the White House grounds. I met with people from the Office of Technology and Science. You know, all the alphabet agencies were 100% into this under the Biden administration. There was a couple thousand people like literally dedicated to AI and understanding this challenging issue. Okay. I don't know what's happening now, but I think we can guess what's happening now. Okay.

But I think to do this, at the end of the day, companies are going to have to regulate themselves if they really care. And my point is that I don't think the big guys will. Because if they do, they could very well end up ampering their ability to be innovative and grow. And it could cause... So again, I'm only going to use an example. If OpenAI says, yes, we will do this, we're going to publish AI impact reports. We're going to look at smart regulations and we're going to...

I don't know. We're going to create our own bill of rights for people or shared industry standards for AIUs. We're going to do this ourselves. We're going to self-regulate. We're going to self-govern. We're going to do it ourselves. But unless Meta and Google and X slash Grok and others, half the people in Hugging Face or whoever they are, unless they also step up and say, do it, it's going to be difficult to do. And

So, look, I mean, you and I were talking earlier, like, you know, the future, it's hard. If the United States wasn't built on people who said it's too hard, then they never did it, right? It would have been hard. If that's the case, the United States wouldn't be here today if it was too hard, right? And so, you know, so how do you do this? So technologists, CEOs, founders, entrepreneurs, somebody out there, you know,

you know, they're going to figure out how to do this and, you know, they're going to figure out how to embrace this and figure out how to do both. And again, I come back down to, you know, the people are going to figure out how to build it anyways. They're going to build it better. They're going to bring ethically and smarter people.

And if you've got challenges, concerns about AI's dark side, like we talked about earlier, like deep face and these kinds of things, then do something about it and lead with a vision and a purpose that stands up and says, we're putting our foot down. Here it is. And oh, by the way, I have a feeling that the first company that really comes and does that is going to get a lot of positive buzz and feedback and they actually might see an uptick in their adoption of their opportunity and solution. There might be, but I give you that.

So I'd say we can't not talk, you know, when talking about innovation and ethics around AI, you can't not talk about deepfakes, right? Because I think there's obviously a very, you know, a very defined line in the sand between, you know, your digital twins, you know, people using it for corporate use and then just unauthorized deepfakes, right?

which are extremely easy to use now, right? Anyone with 10 minutes and a couple of dollars can make something convincing that can really fool a lot of people. What's your take on both the innovative side of kind of this digital twins and enterprise companies have been using them for a while now, but also the downside for deep fakes and deception and misinformation and disinformation. Like where do you lie on that kind of like innovation versus, ah, this is risky.

Look, I mean, you know, the internet has, was risky. And then there's good things about the internet and there's bad things about the internet. There's good things about social media and there's bad things about social media. The good news about AI is that everybody has access to it. The bad news about AI is that everybody has access to it. Right? That's just kind of the way it is. So anytime there's something good, there's going to be something bad. You know, the yin and the yang of life will always be there. I just think that companies...

who are really leading this effort need to do a much better job and i believe they have the technology you would probably know a little bit better you just came back from gtc unless something was there

I think that companies have the ability to watermark something that is a deep fake. I really worry about society when someone can take your voice or my voice for 10 or 11 seconds, put it up in 11 laps, replicate our voice, and have a say, do something that we never did. It could damage the reputation of us, me, you, the people listening. Somebody taking your daughter's face and putting it on someone's body that

That's unfortunate, right? Or something happens or you hear the stories now, right? I don't know if you heard the story about that CFO in Hong Kong. So, you know, where the employee of a finance institution in Hong Kong got a deepfake invite and went to the Zoom call and it was basically a deepfake CFO and a deepfake controller who convinced him to wire $25 million because it looked and sounded just like the CFO. And that's his boss. And he's like, okay, all right, boss, I'll do it. You know, then you hear the story of how

You know, there's a school principal in the Midwest. You know, I don't know if you heard this story, but there's a school principal in the Midwest who reprimanded one of his teachers. The teacher got angry, created a deep fake of him saying the N-word, that it wasn't him, right? But fortunately, this person had some sort of connection with the FBI, and the FBI got involved and they discovered it was a deep fake, and they were able to trace it back to the person that he had reprimanded. So, you know,

Not everybody has access to that. And then you heard the story about character AI and what happened with that poor kid, you know, which I don't want to get into because it'll make me sad. But, you know, so that's really a risk and challenge. And quite frankly, the onus of that has to come to, has to go to YouTube, has to go to meta Instagram. It's got to go to whether it's Snap or whomever they might be or, you know, X, you know, to really police these things. I mean, it's better for society and for humanity. And, you know, and again, everybody's susceptible. And because it's so personal,

To me, deepfakes could potentially be, and I might sound a little hyperbolic with this statement,

But to me, I think deepfakes could be as bad as nuclear weapons. You know, so there has to be, I think there has to be some sort of regulation around deepfakes. I mean, it's almost like creating the AI, you know, the AI agency for information tracking or whatever, right? So there has to be something at some point, but we'll see. We'll see what happens. You know, I'm hoping that

We do have a fairly influential AI person associated pretty close with the president. So hopefully he'll be able to really tackle this, I hope, and we'll see where it goes. But it is a concern and everybody should watch out for it.

All right. So, Rajiv, we covered a lot in today's conversation from how companies can make data their differentiator, how to set up ethical AI alignment, and then even a little bit on deepfakes. But as we wrap up here, what is your one most important takeaway or piece of advice for business leaders trying to walk this tightrope between AI innovation and the ethical side? I kind of mentioned it earlier, and I don't mean to come back to what I said about five, six minutes ago, but

Just because it's hard doesn't mean it shouldn't be done. And now is the time where CEOs and leaders in this space really need to lead with a set vision, ethics, and quite frankly, courage to really stand up against the norm of what's happening now and really lead from the front because that's how they're going to win. And I really believe that the company or companies that figure out how to manage this and figure it out and really put this forward

This idea of privacy and this idea of governance and really understanding and protecting the consumer, the end user, they're the ones who are going to eventually, I think, win in the future.

I think it's great advice and an extremely important conversation to have, you know, especially with all the developments and regulation and all these uncertainties we have floating around. I think today's was an important conversation to have. So, Rajiv, thank you so much for taking time out of your day to join the Everyday AI Show. We really appreciate it.

My pleasure. Thanks, buddy. And hey, as a reminder, y'all, we covered a lot. If you miss anything, don't worry. It is going to be in our newsletter. So if you haven't already, please go to youreverydayai.com. Sign up for that free daily newsletter. Thank you for tuning in. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.

And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.