We're sunsetting PodQuest on 2025-07-28. Thank you for your support!
Export Podcast Subscriptions
cover of episode Building Trust in AI: Chatbots, LLMs, Decentralized Finance, and JPMorgan Chase

Building Trust in AI: Chatbots, LLMs, Decentralized Finance, and JPMorgan Chase

2025/3/26
logo of podcast The Brave Technologist

The Brave Technologist

AI Deep Dive AI Chapters Transcript
People
J
James Massa
Topics
我坚信,在金融科技领域,对AI的信任至关重要。负责任的AI应该像值得信赖的员工一样,能够胜任工作并遵守规则。我们可以通过类似面试、背景调查和参考检查等方式来评估LLM的可靠性。 我专注于构建可信赖的AI,特别是应用于区块链和去中心化金融领域,以应对其中存在的信任问题。我们通过分析代码漏洞、可疑交易、异常价格变化和社交媒体情绪等多个维度来评估去中心化金融项目的信任度,并为其分配信任分数。 为了确保聊天机器人只提供经过批准的答案,我们使用检索增强生成(RAG)技术,并设置防火墙或护栏来审查输入和输出,从而最大限度地减少风险。RAG技术通过将预先批准的文档转换为向量数据库,并根据向量相似度来匹配问题和答案,确保答案的准确性和安全性。 在金融领域应用AI时,我们更关注的是避免出现假阴性结果(即遗漏),也就是保证召回率,而不是追求精确率(即每次都正确)。我们需要持续监控模型的性能,以确保其在数据变化或模型更新后仍然保持有效性。 在AI应用中,合规性至关重要,需要多方审查和严格的治理流程,尤其是在全球范围内,各国法规差异较大。我们需要尽量减少AI系统的规模和复杂性,以降低风险和合规成本。 未来,人类专家将更多地扮演AI管理者的角色,监督和协调AI系统的工作。在软件工程领域,我们可以利用多个AI代理来完成软件开发生命周期中的不同阶段的任务,从而提高效率。在聊天机器人领域,AI代理可以实现端到端的流程自动化,例如直接完成订票等任务。 加密货币和AI的未来融合,很大程度上取决于两者的监管政策的明朗化和全球一致性。组织在采用AI技术时,需要权衡创新速度和风险控制之间的关系,并根据自身情况选择合适的策略。

Deep Dive

Shownotes Transcript

Translations:
中文

From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist podcast, we're demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I'm your host, Luke Malks, VP of Business Operations at Brave Software, makers of the privacy-respecting Brave browser and search engine, now powering AI with the Brave Search API. ♪

You're listening to a new episode of The Brave Technologist, and this one features James Massa, who is the Senior Director of Engineering and Architecture at JPMorgan Chase and holds six patents covering subjects such as AI data quality, cloud cost management, multi-teacher LLM distillation, and model self-healing. He holds a master's degree from Computer Science Department of Harvard University and in finance from the City University in New York. In this episode, we discussed...

the importance of trust in AI, particularly in the context of blockchain and decentralized finance, ethical considerations in finance and challenges faced in ensuring responsible AI use. We also explore the evolving role of human experts in managing AI technologies and advice for organizations adopting AI. And now for this week's episode of The Brave Technologist. ♪

James, welcome to The Brave Technologist. How are you doing today? I'm doing very well, Luke. Thanks so much for having me. Yeah, thanks for coming on. I've been really kind of looking forward to this interview and having this discussion. Can you tell us a little bit about your background and how you kind of found your way to working where you're working now?

Certainly. So, you know, my faith in my family helped me with the basics of character and that helped me in all the usual ways. And from there, I've had more recently some success developing a responsible AI from the trust perspective. I found that what responsible AI boils down to is AI that you can trust.

So if you think about, for example, when you hire a person to do a job, you'd like to trust them to do that job. And similarly with the AI, what I would like to do is interview this LLM that I'm going to be using by asking it questions. If I like the answers, then I'll do a background check and

And if the background check goes well, that's good as well. For example, how was the LLM trained and what does it do with your data? You can ask it for references. You can see the experience of others, right? So those are all the similar kind of trust factors. Overall, the LLM is doing something that a human used to do. So you'll want the number one thing that you want from humans, which is you want to trust them. Yeah.

Interesting. No, that's a really cool perspective. What do you see as some of the early biggest benefits from AI for the industry or your industry in particular?

Well, so again, I like the AI that you can trust. And I've been working, for example, on some blockchain papers. Blockchain is like ground zero for trust issues because in blockchain, there's a lot of malfeasance that goes on. Many of us know about the Sam Bankman free challenges. And so some of the papers that I've worked on have been working towards, for example, establishing trust scores and decentralized finance projects, that sort of thing. Yeah.

Interesting. Interesting. Yeah, I think that's a really good point because a lot of noise and things kind of get lost in that noise in the blockchain space. But that is ultimately what it kind of comes down to is having this, you know, having this transparent kind of means of doing finance. It's awesome. Anything you want to go into detail around that DeFi side or the blockchain side? So, for example, one thing that we're looking at is the smart contracts.

and the decentralized finance projects around them, and trying to combine the perspectives of looking at code vulnerabilities, looking at some suspicious transactions, anomalous price changes to the smart contracts, social media scam sentiment.

That sort of thing. So if we make four LLMs to look at from those four different perspectives, and we get a better viewpoint and a better idea of whether this is a scam, that's one project that I worked on. And then I gave a trust factor to...

By the way, I worked with Kennesaw State University, so it wasn't just me. I had a couple of friends. It's always a team effort. A couple of factors, by the way, anomalous price changes, social media sentiment hadn't been looked at before. That was very helpful, and it helped us put a trust factor on a DeFi project. And that's very good. Then you can know whether you want to use this project and whether you want to take smart contracts from it. So we like that very much.

The kind of thing that we can do also from multiple perspectives is if we use LLM distillation, which may be what's going on with a deep seek, by the way. LLM distillation is this idea that you can go out and you can take a hyperscalers LLM and so open AI, things like that, Gemini, et cetera, et cetera. You take one of those LLMs and you ask it some questions and you get the results and you save the results in a training data set.

And then you take that training data set and you train your own smaller local model, which you don't have to pay for every single call to one of these hyperscalers. And it runs faster and it runs locally and it's always up. All of these are wonderful facts and it can be cheaper. So you can see some of these other LLMs that we're hearing in the news right now may be a lot cheaper working that way, not training with billions of NVIDIA dollars. Yeah.

When did the work take place on these papers? Just so the audience can get an idea of the timing around that. I published a paper. I had to go to Denmark and give the paper last summer. So in August. Yeah, on the blockchain work that you were doing. Was that recently? I went to an IE conference and I presented the paper in Copenhagen.

We're pretty involved in this space. And I think it's such a cool thing to hear that you're touching on the social element of it along with the vulnerability side too. Because, I mean, like having these smart contracts out there in this space, like when something goes wrong, it can go very wrong. And the auditing is kind of like monotonous.

marketing in a lot of cases, right? So much of this is when you have that transparency and then you have this social layer of people talking and sleuthing and all of that stuff. So looking at it from that perspective is really cool. And I think that it seems to be a perception around institutions like the J.P. Morgan that it's not as embedded or working on these types of things. So it's super cool to hear that you all have been looking at this from that lens, because I have a feeling like a lot of our audience isn't even aware of that.

So it's really interesting. And Captain Morgan is famous for the Onyx project, which is a big blockchain project. People know about it online. Very cool.

So aside from the trust and some of those benefits, what are some of the challenges that you're seeing in the industry and in some of your work? Well, challenges, again, keep coming down to the trust, I think. Beyond that, it just becomes culture change and skill set for delivery. I could tell you about sort of just a basic, very basic building blocks that we run into, for example, with chatbots.

So chat bots, as you know, now, again, a little bit trust oriented is that the chat bots represent your company as they speak. So we run into some problems if they give away something for a dollar or they give away something that's not part of your policy to give away. That can happen by accident or because of a nefarious actors either way or somebody trying to embarrass you by getting out certain results that they put in the news. All of these things are challenges with the chat bots. So we try to get

ensure that the chatbots are giving only approved type answers. And I'll just go over like how that can happen. A technology called to get approved answers, a technology called RAG, Retrieval Augmented Generation,

This is a very key technology these days. What that does is we take pre-approved documents full of information and we put these documents into something called a vector database. It's chopped up pieces of the document. So then when somebody searches for something, the search is turned into a number, if you will, called an embedding or a vector. And that's compared to the embedding or vector that's of the chopped up document.

and it finds those two things, the piece of the document, the chunk from the document, and the question, and it compares the numbers. And if the numbers are the same, if the vector numbers are the same, similar, I should say, close, they're close in this n-dimensional vector space, they call it. If they're close, then that's a hit, and it returns that answer. And it returns the exact answer

from the chunk of the exact pre-approved document. So you're not just getting random answers from a hyperscalers LLM. It's trained on the whole internet. So the answer could be drawn from anywhere on the internet. Lord knows what's there. It's much better if you have a pre-approved set of documents and the answers are going to be drawn from there. And that's how the RAG documents are honing in on that.

Additionally, is the concept of having a firewall or guardrails around the LLM. So it means that just as you have a firewall on your internet at home, and it's especially keeping things from coming in, and it may keep things from going out, right? Like probably at your company, at J.P. Morgan, for example, I cannot send personal information outside the network. It picks that up and notices that it blocks it. You can't send the client list out of the company, that sort of thing.

And it's also blocking bad things coming in. You want the same with LLM. The prompts coming in should be reviewed. And this right there, it could shut down certain prompts. We detect this prompt is a nefarious prompt. So that shut down the prompt on the way in. Then you get an answer out of your approved documents, like I said, chunked answer that's approved. And then

It will modify the answer slightly just to make it sound like it's conversational, right? Not just like bam, bam, search result. It should be a little conversational. So maybe a couple of words tweaked here or there, something could go wrong. So on the back end, you want to have firewall again, guardrail again, to review the answer, what's coming out, and ensure that nothing bad goes out. Otherwise, block it again.

And it's all happening in just this like flash of time, right? Like a very, very short. Yeah, yeah, yeah, exactly. And I mean, just to kind of drill down because the trust keeps coming up, right? Would you recommend for folks that are, you know, have teams that are getting into AI, like is trust something they should be training members of their team on? Is there, I mean, because you mentioned like checking policies and things like that, like how much of this is a technology related versus like training and teams and kind of like education? It's culture. It's culture. Yeah. Yeah.

Yeah. Because you can think about what, you know, good people, you know, good people, right? And it's the same with the LLM. The LLMs that you're dealing with, they're part of now your team culture and your team values, you know, and things like that. It goes right to the core of who you are. Are you going to let a person like this or an LLM like that work with you? Interesting. Interesting. Let's get into the ethical side, you know, especially around finance, right? Like how do you kind of look at ethical considerations or approach them when it comes to finance?

Well, so we already at JPMorgan and I believe in most institutions like us already have some core values and we already have experts in certain areas like privacy and security. And, you know, these are these are key aspects already there. So, yeah.

What I'm trying to express here, and forgive me if I'm hammering at home very heavy-handedly, is that the LLM is not something new. It seems real new because it's new technology.

But what we're asking it to do is something that humans have always, in the main, is that we're asking it to do something that humans would have done. I get that, of course, there's some things that it can do that humans could have never done. But even then, it's more along the lines of it's doing things that humans couldn't do because it would take humans too long. But if we were going to apply 1,000 humans to this job every day, how would we expect them to behave? We want the LLM to behave that way.

Got it. Got it. Okay. So like, are there any specific kind of like, I guess, ethical considerations or areas that you focus in on, on the world, in the world of finance? I mean, I've had people, for example, that work in public policy or in, in a lot of like civics and academia and things like that, that they have certain things that they are kind of like, okay, we want to make sure that we're not employment forms aren't getting, you know, false positive because of some weird parameter that said, or something like that. Like when you're looking at this from a finance perspective,

perspective, like, is there anything that, you know, you all really zone in on around ethics? We're concerned most about the false negatives. Okay. So we call that in AI terms, we call that recall. And we can use a Ruse score to find that or a Blue score to find the precision. So in AI, there's machine learning and there's LLM, but either way, you could have precision and recall from something very confusing called a confusion matrix.

You look that up. Precision is the one that says every time I give you a result, the result better be right. I don't care if you miss some number of results. Just always give me good results. Now, what's an example of that? If I'm if I am a financial advisor and I'm going to be calling clients and suggesting something to them, I always want it to be right.

I always want to suggest a product that's suitable for them and good for them and that they would like to buy. I always want to be right. I want a very good what they call precision about that. And if you don't suggest to call somebody or other and it would have been good to call them, it's OK. Just don't send me to call the wrong client and propose the wrong thing to the wrong client.

Right. That's the precision angle. Now, say on the other side is the recall angle. The recall one is I don't want to miss anything. I'm in compliance. I don't want to miss any compliance problems. It would be very bad if we were to miss a compliance problem. Yeah. Or leaving the world of finance for just momentarily. I'm a doctor. I'm looking at the X-ray. I don't want to miss the cancer.

Right, right, right. Sure, sure.

And very interestingly, how it goes is first they say, how's your recall? How do you know that you didn't miss anything? So then we'll say, listen, for the last, we've looked at the last year's worth of data or some very long period of time. And a human being is combed over it and the machine combed over it and the machine got the same or better results over this entire period of time. That's great. And the first time this happened to me, by the way, and I went through the governance and I thought, score, I'm done here, Luke. And...

But then it's actually even a higher bar than that. Then they go on and they say, but what about sustainability? I says, what's sustainability? They said, sustainability means that your machine is working for you right now, but how do you know that it's still working six months from now?

Hmm. You know, in a year from now. I said, what do you mean? Why won't it keep working? And they say, it's more like a car than, you know, the other kinds of rules-based software that you've been working with all your life, James. And I says, really? It says, yes, the data can drift on you, for example.

Or if you're using an LLM from a hyperscaler, the LLM itself can change. Even if you're using the same exact LLM with the same version and so on, they can do an update and they're tuning the parameters back there, tuning the weights, and it's changing the results that you're getting back.

Yeah, well, sure. I mean, the data updates too, right? The data gets fresh too over time, I would imagine, and that impacts things. Data can change. We're always buying another company. Here's an example of how you could visualize. So we've got all of these accounts and we train on one set of accounts. But if we were to buy another company and get their accounts, now the shape of the data has changed.

Yeah, yeah, definitely. And maybe our model will behave differently and we'll have to retrain. So it's very important that we are doing something called model monitoring as part of your MLOps and so on. So monitoring the results, we should have some KPIs that we're expecting and we should be monitoring to ensure that we're always hitting those KPIs. And if we drop below some threshold, then we want to have an alert that basically means retrain the model. Right.

ideally sell to in the model. And I would imagine too, you have to stay really tuned into, you know, navigating kind of the regulatory space and the latest on that front regionally too, right? Like how is navigating that with emerging tech? I mean, because you've got, obviously you've got like this background in crypto and the blockchain side, which has been kind of a regulatory nightmare to navigate the past, you know, X amount of time. But on the AI side too, it's really emerging as far as like people's awareness of it and hitting the

rubber hitting the road, so to speak. How is that from the compliance side? You spend a lot of time educating different parts of the company around this. From your point of view, how is it navigating through all that? Well, there's a number of different things to do. One is to go through many steps of governance and have many pairs of eyes reviewing, trained people to review anything that you put in production. Another thing to do is minimize what you put in production.

Right. So that sounds funny at first, but you can imagine like when you first get started typically with rolling out AI in your company and doing a lot of internal build and so forth, you might have everybody having at it and then you start to get duplication, for example, over time. We're 309,000 people work at J.P. Morgan. So you can imagine that. Wow.

Not everybody knows each other. Not everybody knows every project that everybody's working on, right? So one of the challenges is we set up actually its own set of governance is to have an inventory of all the models that are there so that we know what's there and start to reduce the footprint of what's there. It's very helpful if you have one document processing system

application versus five or 10, they're all doing the same thing. And therefore you could have five or 10 times as many errors. Right. Right. Right. You know, our issues. Right. Yeah. Minimize the, that minimizes your regulatory footprint five or 10 times what I just shared. Right. Totally. Totally. Yeah. Different policies for each and whatnot. Yeah. And like, and, and, you know, potential for things to leak or whatever. And then it works just like anything else. You know, you have the lines of compliance defense and so forth.

And governance before anything goes forward. There's a lot of different governance depending on what you're doing. For example, they have different determinations of whether this is called an analytical tool, whether this is an LLM or whether this LLM is sending data outside the firm.

or whether it's been running locally within J.P. Morgan, whether this LLM uses data in different regions. You know, the regional aspect is very important. Every country has its own rules. So that's incredibly critical. And it's very challenging to understand. Sometimes I find for myself the path of least resistance is to deploy the LLM only in the U.S. Got it. In the first go, right? Like, why figure out what's going on in country X, right?

and make a mistake for potentially less gain. Yeah, or something's like not really clear, right? And, you know, better to be safe until you get that clarity, so to speak. It's always better to be safe. The overall, the overarching thing is when in doubt, be safer.

Yeah, no, that's great. There's more to think of than my application that's going live, right? There's a lot more to think of than my feature, the entire JP Morgan, right? It's smart that you're kind of presenting it that way too, because there is a lot more kind of thought mindfulness you kind of have to have with these things, given the types of engagement that you have with them, right? And the different types of data that are going in and out of them and all that. So no, that's great. A lot of talk around kind of

prompt engineering and different types of like ways that humans are going to be interacting with these tools. How do you see the role kind of of human experts evolving as AI becomes more prevalent in finance and just kind of in the general work life?

The way things are going, it seems, is that humans will always be in the loop. And the humans, though, I think of them more as being managers now. So everybody's going to be a manager. No more individual contributors. Everybody's going to be a manager of a team of LLM agents is my reading of the tea leaves.

Yeah, yeah, yeah. That's a good, that's a good bet. In that vein too, I mean, you mentioned agent, like Egenic is definitely getting a lot more attention now. Are you seeing applications of Egenic AI that are really interesting in your world right now? Or is it still something people are kind of just testing and learning with? Of Gen AI? Yeah, yeah, yeah. Or Egenic, I mean, like agents being used in addition to like, yeah. Yeah.

For sure. So the way things are moving in the agents, I would say is both in the chatbot, say, and then also in software engineering, I see a lot of discussion and progress of moving towards end-to-end solutions that are connected. So take software, since this is a technology show, the brave technologists, right? So if you're a brave technologist, you may work with the SDLC, Software Development Lifecycle.

and there you're gonna have requirements and build and test and deploy and so forth and operate. So throughout this life cycle, you can have an agent for each section of the life cycle that does that bit. You can have one agent that makes the requirements, another agent that verifies that the requirements are good, another agent that turns the requirements into test cases, another agent that takes those test cases and turns them into code that passes those test cases.

and so on and so forth, right? Like at the end of that, you have one person who's managing it. And if it gets stuck anywhere along the line, generating the code, say there's a problem generating the code, the code doesn't compile or that you raise a PR and the person feels that it's got a security flaw in it. So it rejects the PR. You know, there can be like humans in there checking things and rerunning a stage, but I'm thinking of it going from step to step to step to step. It's like an assembly line. And the agents are the workers on the assembly line doing their bit

at each step of the SDLC. And the human foreman can jump in and like stop the assembly line when it goes wrong or restart it or give some advice or that sort of thing that you would imagine. That's how it can work in software engineering.

chatbot would you like to talk about chatbot yeah yeah let's go for it I mean yeah let's dig into that side of it too chatbot might work like this with chatbot by the way chatbot I would say does work like this at many of the big chatbots that we know online right yeah how are they working it's

first, there's an orchestrating LLM. So that's the first agent. It's the person who takes, just like a person, the operator. It's the operator. So that orchestrator, they take your request or your question and

And these days, like a question is, it can be sort of end to end with these agents, right? It'll say, can you get me a ticket on an airline or something? And I'll say, yes, I already know this and that about you. Would you like to leave on this time and that time? And you say, yes. And this is okay. And then the agent will go and like actually, actually do the work of getting you the ticket, charging you, sending you a bill, whatever.

giving you the information, do all those different things that might have been a few different steps previously of different programs, maybe with humans in between and so forth. The agentic architecture is this idea of, from my perspective, of going end to end and combining multiple steps that perhaps used to be individual steps with human in between.

It sounds like it could be like, you know, a huge benefit, especially on the finance side, you know, with people and everything from, you know, managing budgets to trying to kind of look at like what different options. I mean, the markets are so big now, right? Like there's so many different options out there and even these new ones, right? It's interesting with your background too in the blockchain space. I mean, you

You've seen how difficult it can be for adoption. So it seems like some of these things could be very helpful for getting additional and broader adoption and educating people along the way and helping them out with the parts of the process too, I would imagine. How are you seeing the next phase of crypto and AI converging from your point of view? It can be anecdotal or whatever. The biggest thing is just the regulations around both crypto and AI.

have been less firm. That's one of the reasons why it's so risky to turn out either one. You don't know where you'll end up later, right? And to invest in either one and spend years working on either one and put money into it, either way, right? And why we want to minimize what we're doing. I think once the rules of the road around both are becoming, become more clear, we're

then we'll be better off, right? And that would be, like we mentioned one thing about the global consistency. You know, there's some level of global consistency about basic things like, I don't know, money laundering, right? I know that there's some countries, but, you know, to a large degree, something like money laundering is well understood in similar regulations around the world or in

countries that are close to the U.S. And we want the same for our technology, which, by the way, is going to be governing our finance and so forth, right, and our money. So because of that, it's very good to have similar rules. Right now, I would say, clearly, you know, the privacy rules and so forth in Europe are a lot different. The AI rules are less clear. Some things are less friendly to business, too. Yeah, yeah.

Having any rules at all is the most friendly to business. Consistency is the most friendly to business, but also some rules are sort of there to say, we don't care if you make profits. Some rules are just kind of like...

extra things, hoops to jump through, you could call it. Sure, sure, sure. No, and I think that's a really good point about the global consistency, right? Because it's not like this is starting from zero either, right? Like the World Wide Web has been around for a long time. People have been using protocols globally, these things, you know, there's examples out there. And I think that, yeah, like it is interesting. The rules of the road are good to have. And there's a lot of commonality to from thing to thing. And yeah, that's super, super interesting.

For people, whether they're in businesses or kind of getting into the space themselves, do you give any advice for folks that are coming into the space around adopting AI technology or any advice for those people that are in organizations? I know we talked trust earlier and things like that, but any other advice you might have for folks that are kind of maybe mandated to do things with AI now and are trying to kind of find their way?

Sure. Well, one of the first things that many teams and many organizations I think are grappling with is along the lines of, is this AI transformation we're going to go through, is this something that's an emergency? And because we're going to be disrupted by other AI players and we feel that sort of level of pressure that the risk of AI is less than the risk of being disrupted by AI. Right.

All right. There's that versus another aspect would be another lens that you folks put on it is they say AI is just another technology. Before this, it was cloud. Before that, it was data strategy. This is that and the other thing. This is just yet another technology. And we have to look at the business case for using this technology. We have to do a cost benefit analysis of every single rollout. Right. And things should be completely controlled that way. And that, of course, will have a tendency to stifle some level of innovation.

Right. Versus other folks are like, we better innovate. We've got to be there fast. We need to be at the forefront. So weighing those two aspects and understanding if we're going to be innovating grass from the bottom up and let everybody on the everybody everywhere to have some level of freedom to innovate and innovate quickly.

Or is it going to be much more top-down driven? We're going to invest in a few big bets and we're going to keep things controlled and maybe make a smaller footprint to reduce risk. Maybe make a smaller footprint to reduce costs in some way. Make sure that we're not doing duplication. If you do duplication, you'll get more innovation and more things will pop faster. There'll be some level of competition even in the company. Best things will come up.

But, you know, there's some duplication of cost as well, right? Sure, sure. Duplication of risk as well. These are things that go through the organization's head in order to decide, you know, where they're going. And I think there's a continuum and people probably start someplace in the continuum and move towards the center. You know, usually in my experience,

anecdotal experience. I find some companies are starting out in the very conservative or the very aggressive, and then they move towards the middle over time.

There's definitely like, you know, around the hype cycle around this too, where it's like, okay, every, all the executives are coming in. Oh, let's get to AI stuff everywhere. And everyone's like, well, where's the fit? And so trying to kind of find that fit, right? Like, and I think that's a really, really, really good, good approach. You've been super gracious with your time today. And is there anything we didn't cover that you want to let our audience know about or want to put some light on while we got you? The next big thing is quantum. So let's look out for that.

Yeah, yeah. Okay, cool. Awesome. Awesome. Where can people find your work online or reach out to say hi? Hey, you can find me on LinkedIn. So that would be really great. James Massa. Excellent. Well, James, I really appreciate you coming on the show and sharing your insight with us. I know our audience probably learned a lot. And I'd love to have you back to sometime to kind of check back in on things and see how things are going. Oh, that'd be great, Luca. It's such a wonderful time. Thanks for having me. Awesome. Thanks.

Thanks for listening to the Brave Technologist podcast. To never miss an episode, make sure you hit follow in your podcast app. If you haven't already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately. Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.