
Beth White - The Evolution of AI in the Workplace

2025/3/20

HR Data Labs podcast

People
Beth White
Topics
Beth White: I have been focused on AI since 2017 and have seen firsthand its evolution from simple if-then logic to sophisticated natural language processing and machine learning. Early on, I noticed so-called conversational chatbots appearing on many consumer-facing websites, but their accuracy and usefulness were very limited. That pushed me to think about how to improve AI so it becomes more accurate and reliable and ultimately earns user trust. I found that the human role in AI training is critical: only with human involvement and oversight can AI's accuracy and effectiveness improve enough to create solutions that people genuinely want to use and keep coming back to.

In applying AI, accuracy is key. Early conversational chatbots often failed to understand and respond to users' needs, which made for a poor experience. As natural language processing and machine learning advanced, AI became better at understanding and handling human language and could deliver more accurate, more effective service. Even so, today's popular assistants such as Siri and Alexa still have limited conversational ability and often cannot sustain deep, multi-turn conversations.

Looking ahead, advances in generative AI will dramatically improve conversational AI. By integrating generative AI into conversational flows, AI can better understand user intent and provide more personalized and effective help. That will drive broad adoption of conversational AI across many domains and change how people interact with it.

I am hopeful about AI's future, but I am also concerned about its energy consumption and environmental impact. Training large language models currently consumes enormous amounts of energy, which has become a sustainability problem. We need more energy-efficient AI solutions to reduce AI's environmental footprint.

AI regulation also deserves attention. The U.S. regulatory framework for AI is relatively loose and lacks a well-developed structure, in contrast with regions such as the EU. That leaves real risks in how AI is applied, and stronger oversight is needed to ensure AI is safe, reliable, and fair.

To meet the challenges AI brings, we need to strengthen AI education and training and raise the public's awareness of AI and ability to use it. Only then can we make sure AI truly serves people and avoid its negative effects.

I hope AI becomes a tool that HR naturally reaches for and that is woven into everyday work. Through education and training, I also hope the public's understanding of AI and ability to apply it will grow, so that AI becomes an indispensable partner in people's work and lives.

David Turetsky: In my conversation with Beth White, I learned about the rapid development and application of AI in the workplace and the opportunities and challenges that come with it. Beth shared her years of experience working in AI and pointed out problems around AI's energy consumption, regulation, and ethics.

Beth emphasized the importance of humans in training and applying AI, noting that only with human involvement and oversight can we ensure AI's accuracy and reliability and prevent bias and discrimination. She also called for more AI education and training to raise public awareness and competence so that AI serves people well.

Beth also shared her hopes for AI's future: that it becomes a tool HR naturally uses, woven into daily work, and an indispensable partner in people's work and lives.

Overall, this conversation gave me a deeper understanding of AI in the workplace and left me both excited and concerned about its future. We need to take full advantage of AI while actively addressing the challenges it brings, so that AI can serve people safely, reliably, and fairly.

Chapters
Beth White, CEO of MeBeBot, shares her experience working with AI since 2017. She describes the evolution of AI from basic if-then statements to natural language processing and machine learning, highlighting the crucial role of human involvement in training AI for accuracy and trust. She also discusses the challenges of creating truly conversational AI and the potential of generative AI to improve future iterations.
  • AI evolved from if-then statements to natural language processing and machine learning.
  • Human involvement is crucial for training accurate and trustworthy AI.
  • Generative AI has the potential to significantly improve conversational AI.

Transcript


The world of business is more complex than ever. The world of human resources and compensation is also getting more complex. Welcome to the HR Data Labs podcast, your direct source for the latest trends from experts inside and outside the world of human resources.

Listen as we explore the impact that compensation strategy, data, and people analytics can have on your organization. This podcast is sponsored by Salary.com, your source for data, technology, and consulting for compensation and beyond. Now, here are your hosts, David Turetsky and Dwight Brown.

Hello and welcome to the HR Data Labs podcast. I'm your host, David Turetsky. And like always, we try and find the greatest minds inside and outside the world of human resources to bring you the latest on what's happening. Today we have with us Beth White from MeBeBot.

Beth, how are you? Hey, David, I'm doing well. It's great to see you again. I know viewers can't see us, but we had a chance to meet at HR Tech this year. We did, and it was an extremely busy HR Tech, if you remember correctly. Oh, yes. You were nonstop doing your podcasts. Yes, yes. And the good news is, is that they've ended now. Finally, we've produced and we've published the last of them. So...

Now we're looking forward to the next HR Technology Conference in 2025. Wonderful. Beth, why don't you tell everybody a little bit about you and MeBeBot? It's great to be here today. And I'm Beth White, founder and CEO of MeBeBot. At times I called myself the chief bot person.

because it was a lot of fun in the world of HR. But you're very much a real person. I am a real person, but there are people that operate AI, so that's one thing to always keep in mind, and that's part of what I'm sure we'll discuss today, but...

I spent my early career in HR working in all different facets of HR. Frankly, I left the profession for years being a little bit frustrated and came back to bring solutions. And really, it was the advent of seeing a lot of different types of AI technology, you know, kind of popping up literally on consumer-facing websites in the form of chatbots

that truly inspired me to say, hey, this is a time where we can bring solutions to HR, for HR, that are really designed to help improve operational efficiencies and free up some of that valuable time that HR needs to, frankly, be more strategic to the business and provide that overall value. And now more than ever, with the world of AI shaping our businesses and our daily lives,

You know, we're out there evangelizing and educating often about, you know, meeting people where they are on their journeys to learn about AI. Perfect. That's wonderful. So, Beth, we ask everybody this. What's one fun thing that no one knows about you?

You know, David, everyone has these great career paths. Mine has taken a lot of different turns over time. But one fun fact is when I left college or after I graduated from college, I moved out to the Pacific Northwest and I

frankly, was a little bit lost as to what to do next with thinking about law school, thinking about other things, and had a chance to work on a fishing boat in Alaska. Wow. And help pay off my student loans. There you go. So it's one of those fun facts that, you know, I'm still looking back thinking that was pretty crazy.

It's not like, you know, the Deadliest Catch TV show. I was actually on one of those larger fishing boats that frankly operate as, you know, moving factories. And talk about cold. That was a very cold experience, you know, in the middle of the Bering Sea off of the coast of Alaska. Yeah. Wow. That sounds really cool. Well, and actually really cold as well. Yeah, exactly. It is.

The reason why Beth's mentioning this is because it's December here in Massachusetts, and it is freezing cold in my office right now. So, yeah, yeah. Well, anyways, I actually feel even colder thinking about the Bering Strait as well as being in Alaska on a fishing boat. I'm sure you've got some really cool stories, which...

I do. I mean, you meet a lot of interesting people in those experiences and you learn some different things about, you know, professions and ways of life and even food production. And let's just say you were not the HR person on the boat. I was not. Because that would have been a very difficult job. That's right. That would have been a different job. Yes. Yes. There you go.

Well, today we've got a very interesting, I would say not very different topic than we've been talking about over the last six months. And it's a very important one. In fact, I think this harkens back to one of the first podcasts on HR Data Labs, which is ensuring ethical AI for HR by ensuring that humans are in the loop to provide supervised training to AI.

whether it's people analytics or in the data sets. So Beth, our first question is, how long have you been working in artificial intelligence and what have you learned over the years that can help us with this? Oh,

Well, there's so much that I'm continually learning. And I started the process of really digging into AI back in 2017. And as I mentioned, I was starting to see, you know, what was called at the time conversational chatbots, you know, popping up on different types of consumer facing websites. And I thought, how do these really work? And trying to unravel it a little bit or don't work, which is...

which was the case at that point in time. There was so many use cases for AI on, you know, for example, your bank's website or, you know, a cellular carrier's website, and you thought it would be helpful, but yet you were spun into a circle. And so in digging into the technology more, it's really a matter of, you know,

There was a movement from if-then statements, meaning decision trees that were essentially the first iteration of conversational chat, to natural language processing and machine learning and the basis of the technology that was brought into tools that we use daily like Siri or Alexa or Google Home. Right.
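To make that evolution concrete, here is a purely illustrative sketch, not anything from MeBeBot, contrasting a first-generation if-then decision tree with a bot that matches free-text questions against human-curated intents, which is roughly the kind of NLP-plus-human-training step described here. The intents, answers, and threshold are invented for illustration.

```python
# Hypothetical sketch: a first-generation if-then bot vs. an intent-matching
# bot "trained" on example phrasings supplied by humans.
from collections import Counter
import math

def decision_tree_bot(menu_choice: str) -> str:
    # Generation 1: the user must pick an exact branch; anything else falls through.
    if menu_choice == "1":
        return "Our PTO policy allows 15 days per year."
    elif menu_choice == "2":
        return "Open enrollment runs November 1-15."
    return "Sorry, I didn't understand. Please choose 1 or 2."

# Generation 2: each intent is defined by example phrasings curated by humans.
INTENTS = {
    "pto_policy": ["how much vacation do I get", "what is the PTO policy"],
    "benefits_enrollment": ["when is open enrollment", "how do I enroll in benefits"],
}
ANSWERS = {
    "pto_policy": "Our PTO policy allows 15 days per year.",
    "benefits_enrollment": "Open enrollment runs November 1-15.",
}

def _bag_of_words(text: str) -> Counter:
    return Counter(text.lower().replace("?", "").split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def intent_bot(question: str) -> str:
    # Score the free-text question against every example and pick the best intent.
    best_intent, best_score = None, 0.0
    for intent, examples in INTENTS.items():
        for example in examples:
            score = _cosine(_bag_of_words(question), _bag_of_words(example))
            if score > best_score:
                best_intent, best_score = intent, score
    # A confidence threshold keeps the bot from guessing; low scores escalate to a human.
    return ANSWERS[best_intent] if best_intent and best_score > 0.3 else "Let me route you to HR."

print(intent_bot("How many vacation days do we get?"))
```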

And the more, you know, companies or entities started to develop in natural language processing, there is this component of machine learning and machines only learn with the humans involved. And so that was the point in time when I was really able to see how do you train AI to respond more accurately?

Because in what I foresaw as how I thought MeBeBot could come to life, the only reason it would succeed is if there was a greater level of accuracy than what I was seeing today or at the time, seven years ago now, in the consumer-facing website chatbots, right? Because they weren't accurate. They did not build trust

or loyalty. And so that was a big kind of aha moment is to really start to uncover, you know, how do you train AI? Where does the human in the loop come into the process so that you can have a solution that people want to use and want to come back to using time and time again?

But I think if we look at even the current iterations of Siri and Alexa and some of the others, even to the extent of which if you're using chatbots today inside of web applications, they're still not conversational. In fact, they're barely reactive. So having second or third challenge questions, it's not even, frankly, if thens, it's,

asking a question and then trying to follow up, the thread is lost because the first question doesn't get followed up at all. And so I guess the question is, how have we evolved at all, even in the chatbot technologies, to be able to answer questions better by being able to get that secondary or tertiary, at least explanatory, or revision

that the technology can understand so it makes it easier for the consumer? Yeah, that's a great question, David. And I think right now we're in a really cool space. We're at a tipping point where we're going to see a whole next iteration of conversational chat that actually works. And

And really what's going to make that possible is the evolution of generative AI has been able to produce ways that you can actually inject generative AI into conversational flows so that you can continue down the process with a

user or an employee that's interacting with the chatbot to guide them through a process, to guide them to getting more answers to their questions. So you're right. It's been challenging because of the ways that AI learns. A lot of times conversations kind of stop flat and

When that was happening in the early days of MeBeBot, what we would do is we'd use AI to surface related topics. And then even in companies that were using AI chatbots on their websites, they were doing escalation paths. So that's how it was kind of being handled. But now in the era of AI, with what's called Semantic Kernel and other types of

interjecting code or algorithms into the process, we're going to see a huge surge of new activity that's going to drive the adoption and usage of conversational AI in the coming year. I think it's definitely going to happen in 2025.
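As a rough illustration of the "inject generative AI into the conversational flow" idea, here is a hedged sketch. The knowledge base, the keyword routing, and the llm_complete placeholder are all assumptions made for illustration; Semantic Kernel is Microsoft's orchestration SDK, but any LLM client could sit behind the placeholder.

```python
# Illustrative sketch of a generative fallback inside a scripted flow.
# Scripted, human-verified answers stay authoritative; the generative model
# is only called when the script runs out, grounded on approved content.
APPROVED_KNOWLEDGE = {
    "pto": "Employees accrue 15 PTO days per year, available after 90 days.",
    "enrollment": "Open enrollment runs November 1-15 in the benefits portal.",
}

def scripted_answer(question: str) -> str | None:
    """First try the curated, human-verified flow."""
    q = question.lower()
    if "pto" in q or "vacation" in q:
        return APPROVED_KNOWLEDGE["pto"]
    if "enrollment" in q or "benefits" in q:
        return APPROVED_KNOWLEDGE["enrollment"]
    return None  # the script has run out -- where conversations used to go flat

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("placeholder: call your model provider here")

def answer(question: str, history: list[str]) -> str:
    hit = scripted_answer(question)
    if hit is not None:
        return hit
    # Generative fallback: hand the model the conversation so far plus only
    # the approved facts, so second and third follow-up questions keep their
    # thread instead of dead-ending.
    prompt = (
        "Answer using ONLY the facts below. If they don't cover it, say so.\n"
        f"Facts: {APPROVED_KNOWLEDGE}\n"
        f"Conversation so far: {history}\n"
        f"New question: {question}"
    )
    return llm_complete(prompt)
```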

Like what you hear so far? Make sure you never miss a show by clicking subscribe. This podcast is made possible by salary.com. Now back to the show.

So what are your biggest concerns currently about artificial intelligence? Oh, David, there's a number of them. I mean, being in this space for a number of years, everything from, frankly, I posted something today about the environmental impact of AI and the energy usage of AI. It's a sustainability issue, right? We see Microsoft, Google, and Amazon all within the last six months have

purchased nuclear power to be able to, you know, get and supercharge AI to the level of capacity that's needed to process AI in the generative sense. It's really kind of sucking up more and more of the energy needed to do so. It's fascinating, isn't it, that we've gotten to a place with Moore's Law where our processing power is so amazing that we're literally able to create

almost neural networks for our computing systems. The...

watches we have on our wrists are more powerful than any computer that existed in the entire world before, you know, the 1980s. But what we're doing now is creating these things that are requiring us to create or to innovate or even to go backwards in our energy production to be able to withstand the energy needs

that this processing power is going to require. It sounds crazy, but it's actually very true. It's just mind-boggling. It is. I mean, you're bringing up sustainability. Yeah, well, and who would have thought that it's not even the internet, but it's AI that's going to draw all of these resources for just processing and reprocessing data.

Yeah, I read a statistic recently that said for every 100 training AI tests, it's like having left your, you know, hair dryer on for a couple hours. You know, it's just the kind of usage of energy that is just required for some simple, you know,

algorithmic calls, you know, is again consuming the energy. So what do we do about it? I know that it is not being lost on even large, you know, the entities purchasing these additional sources of power from whatever source they may choose. Right.

But I do think that there's ways that people who are developing within this space their own technology solutions can come up with methodologies where it's not actively calling the AI for everything possible. And that's really where we are at the era of large language models and what we'll likely see in 2025 is more of small language models existing. Mm-hmm.

that do not require as much energy consumption, but yet can do the same types of tasks for the specific business use case.
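One way to read the "don't actively call the AI for everything possible" point is as a routing and caching layer in front of the models. The sketch below is an assumption-laden illustration, not a description of any particular product; small_model and large_model are placeholders, not real APIs.

```python
# Assumption-laden sketch: cache repeat questions, send routine requests to a
# small language model, and reserve the energy-hungry large model for the
# cases that genuinely need it.
import hashlib

_cache: dict[str, str] = {}

def small_model(prompt: str) -> str:
    raise NotImplementedError("placeholder for a small/local language model")

def large_model(prompt: str) -> str:
    raise NotImplementedError("placeholder for a hosted large language model")

def route(prompt: str, needs_deep_reasoning: bool = False) -> str:
    # 1. Serve identical questions from a cache with no model call at all.
    key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
    if key in _cache:
        return _cache[key]
    # 2. Route narrow, routine tasks to the small model; only escalate when
    #    the request genuinely needs the large model.
    result = large_model(prompt) if needs_deep_reasoning else small_model(prompt)
    _cache[key] = result
    return result
```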

And so that's very exciting to see that we're already going from, I almost think of it as a funnel, like this huge funnel, like just everybody consuming everything, to going, let's just consume what we need. And so we're starting to see that happening within some of the newer methodologies for architecting AI solutions. But even though the scale, you're talking about some of the largest companies in the world that are looking at this, right?

The scale hasn't gotten to a consumer activity base where everybody is pouring AI into their phones.

and thus they need their phones to get recharged more often, or their iPads, or their computers. And that energy suck as well, because if you're doing, well, it's probably not large language models, but you're probably doing, to your point before, something smaller, like the image playground in Apple's new iOS. That's still going to suck down resources. So your point about, you know, one of your first concerns being sustainability, right?

It's not going to get better. It's going to get worse, right? You know, I try to study the futurists that are looking at these types of big topics.

And yes, it's probably going to get worse before it gets better. So let's just all hope that we have people who are developing within these technologies that are being more mindful to it, and then we'll get to a better place sooner. But yes, as hardware gets, you know, sophisticated to leveraging AI as well, it is definitely going to be another draw on it all because you're right.

There isn't the consumer adoption just yet, but it's coming. Not yet. It is. I mean, the newest versions of iOS still don't have, for everybody, the AI functionality that they had been promising in iOS 18, I think it is, which is getting released now and people are signing up for it now. Yeah.

So I guess the next question within that is, what are your other concerns? Because I still haven't heard you talk about the machines taking over, the bots taking over yet. Well, you know, and that obviously is the concept of the sentient computer and when computers can think. And, you know, Sam Altman maybe a month ago said we're a thousand days away from a sentient computing power being released upon the world.

And, you know, that is very nerve wracking, you know, in the sense that do we really know what we're doing with that playground that we're releasing onto the world? Yeah.

I see regulations in AI that were starting to be more forthcoming now starting to be pulled back. I mean, the EU has been out there putting legislation in place. Many of the European countries are even adding their own addendums or different types of policies on top of the EU AI Act that's in play right now. But in the U.S.,

You know, I live in Texas. The whole country is the Wild West. You know, in Texas, we usually say it's the Wild West. But AI in the U.S. is very much unguarded besides a few of the states that have some regulations. I was just going to say most of the states are either reviewing bills in conference or they're drafting legislation

to either put limitations on or at least give privacy limitations, if not copyright limitations, which is another gigantic problem right now in the world of AI. But they're trying to give those kinds of at least

initial pieces of legislation. And I was doing research for a conversation or a presentation I gave in Hawaii. And I think at that time there were at least 40 states either introducing bills or, again, in conference starting to draft bills around it. So it's going to happen at some point in the U.S.

For sure. I've been following all the legislative acts that have come from different states, again, like you mentioned, and many that are still being drafted and have years to come. So what is the middle ground, or what do we do now, when we have access to the technology without a lot of, you know, constraints?

And it really requires us as individual people, you know, to be smart about leveraging the technology, whether it's for our own usage as consumers or within the workplace. And that's where, you know, frankly, the role of HR has such an important part to play within these conversations as companies are looking at creating AI governance policies. Some have them, some do not. Right.

It is complex to go through, and probably what you have today may not even be what you need six months from now. So this is kind of leaving some organizations a little hung up from a business standpoint because there's not as many types of rules to follow. But the problem is, in the absence of rules, you're going to have a lot of people who have downloaded ChatGPT or at least gotten a login. Right.

downloaded it to their phones possibly, and have started drafting requests or prompts

in ChatGPT, which potentially might actually have confidential information, not just within the context of the things they're saying, but in the context of where they're doing it from and what they signed up with. So, you know, if you're from xyzcompany.com and then you put in a request in ChatGPT that you want to develop a new compensation philosophy, for example, it knows it was from you and it knows what you're asking. So...

It's, you know, in that case, it's a little bit daunting, though. You know, at least if you're IT trying to police all that. For sure, because I do think that's exactly what's happening right now. If companies have said, hey, we don't know enough about AI or generative AI to give everybody carte blanche to use these large language model tools, they're saying, no, well, people are just going to be doing it on their own. Right.
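For teams worried about exactly this kind of shadow usage, one common stopgap is to scrub obviously sensitive strings before a prompt ever leaves the company boundary. The sketch below is illustrative only, with made-up patterns and placeholder tags rather than a complete data-loss-prevention solution.

```python
# Illustrative only: redact sensitive strings before sending a prompt to an
# external chatbot or LLM API.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\$\s?\d[\d,]*(\.\d{2})?"), "[AMOUNT]"),
]

def scrub(prompt: str) -> str:
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = "Draft a new compensation philosophy. Jane Doe (jane@xyzcompany.com) earns $182,500."
print(scrub(raw))
# -> Draft a new compensation philosophy. Jane Doe ([EMAIL]) earns [AMOUNT].
```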

Right.

So the data and the prompts can be at least protected if, of course, you have your own type of subscription with the provider, etc. So I'm hoping companies are moving toward that because people are curious. They want to test it out. It's fun. You know, I mean, once you get started, you start to see it as an instrumental tool that you want to use daily in your work. But with all the caveats that

You have to be careful about things. And to the extent in which some companies are developing walled gardens where people can play in a safe way without thinking that that's going to get outside those walls, yeah, it might be safe. But again, on the consumer side, I think people are just...

I don't want to say they're ignorant of it because I don't think they know enough to be ignorant, if that makes any sense at all. It absolutely makes a ton of sense. And that's just it: it's the more we can do to educate people on, like, how does this work?

because right now, any one of these large language model user interfaces you see is just this little prompt box. And you have no idea how the magic happens behind the scenes. But if there's ways that people can start to understand, it would be very cool if, some of the time, this type of technology would say, I've produced this answer to this question because here's what happened behind the scenes to get you this information. Right.

giving people a little line of sight to what the technology is doing, how it may be sourcing and scraping publicly available websites, using the natural language cues you gave it to narrow things down, and how much of a difference your verbiage and prompt engineering are going to make in your results.

The AI almost needs to teach people how to use the AI and surface what's going on behind the scenes so that we can all understand the risks and see the rewards as well. You know, it reminds me, Beth. Maybe you don't remember this, but do you remember fifth and sixth grades, when we were taught ibid. and op. cit. when we were creating our book reports? Right, right. And, you know, what were your sources? Yeah.

Sources. Oh, you mean where did I copy this from? And literally, that's what you're saying. You're basically saying, where did you reference this information? Because what I found out when I was at HR Tech was sometimes, sometimes the AI makes stuff up based on things that it's been trained on. And it basically kind of reads between the lines, which it really shouldn't do.

That is so true. I mean, I love your example because, you know, I was a history major in college and I used to write a ton of reports where, you know, you're having to cite every little source you ever used. And it's amazing because you just magically get this information yet you don't know the source. Sometimes I have received citations in the responses I've received back or I ask for it as a prompt.

Show me where you receive this information from. I need to be able to cite it.
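As a minimal sketch of what "answer plus citations" can look like under the hood, the example below assumes a retrieval step that keeps track of its sources before anything is generated; the documents, scoring, and content are invented for illustration.

```python
# Invented example: retrieval that tracks its sources, so every answer can
# come back with citations the reader can verify.
from dataclasses import dataclass

@dataclass
class Passage:
    source: str   # e.g. "Employee Handbook, section 4.2"
    text: str

CORPUS = [
    Passage("Employee Handbook, section 4.2", "PTO accrues at 1.25 days per month."),
    Passage("Benefits FAQ, question 7", "Open enrollment runs November 1-15."),
]

def retrieve(question: str, k: int = 1) -> list[Passage]:
    # Toy keyword-overlap scoring; a real system would use embeddings.
    words = set(question.lower().split())
    ranked = sorted(CORPUS, key=lambda p: -len(words & set(p.text.lower().split())))
    return ranked[:k]

def answer_with_citations(question: str) -> dict:
    passages = retrieve(question)
    # A production system would feed these passages to an LLM; returning them
    # directly keeps the citation trail visible end to end.
    return {
        "answer": " ".join(p.text for p in passages),
        "citations": [p.source for p in passages],
    }

print(answer_with_citations("How does PTO accrue each month?"))
```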

because I do think people should dig more. It's just like, you know, anything from the media or reading a news article. You want to know where did this all come from? And getting that validation of the information is something we all should ask for as consumers of AI. And I think one of the things that we were going to talk about during this podcast is actually having people who are

kind of looking at the answers or looking at the requests and real people being able to provide input on whether or not the models are trained appropriately and are the responses accurate. So how do we get to a place where people are monitoring the AI? Yeah, I mean, today, even at work,

you know, the companies that are hosting and managing and creating the large language models. I mean, there are people behind the scenes, right, that are seeing, you know, the results of prompts that have been sent to the engine. Right. And they're making, you know, human-assisted guidance on some of the outliers or things that require more training. And, you know,

So with that said, the more that you can have diversity, frankly, in the people that you hire for these particular roles, the better. That's why I'm part of a group called Women Defining AI, because only 25% of the people involved in AI are women. And if you have more of a gender, ethnic, racial, economic, and societal balance in the people who are behind the scenes,

training these models, they'll start to avoid what happened in one of the more famous cases from years ago, where Amazon had a process of scanning resumes in the early era of AI: if your resume had words like "code ninja warrior" in there, a developer term, you got surfaced to the top of the pack.

And that gives people a little bit more grounding. So that whole term, grounding the AI, means that human beings have to help ground the AI, and diversity in the people working to ground it will help create, you know, better solutions that we can be more comfortable with, solutions that do not have the biases and the discrepancies that we would rather not surface.
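As a hypothetical illustration of the human-in-the-loop monitoring described above, the sketch below assumes responses carry a confidence score and that reviewer verdicts are collected as labeled examples for the next training round; the thresholds, field names, and sensitive-term list are assumptions, not anything from MeBeBot or the episode.

```python
# Hypothetical human-in-the-loop sketch: low-confidence or sensitive exchanges
# go to a review queue, and human verdicts become labeled training examples.
from dataclasses import dataclass, field

@dataclass
class Exchange:
    prompt: str
    response: str
    confidence: float
    flagged_terms: list[str] = field(default_factory=list)

REVIEW_QUEUE: list[Exchange] = []
TRAINING_EXAMPLES: list[tuple[str, str]] = []

SENSITIVE_TERMS = {"salary", "disability", "pregnancy", "age", "gender"}

def triage(exchange: Exchange, threshold: float = 0.8) -> str:
    """Decide whether a human needs to review this exchange before it is trusted."""
    exchange.flagged_terms = [w for w in exchange.prompt.lower().split() if w in SENSITIVE_TERMS]
    if exchange.confidence < threshold or exchange.flagged_terms:
        REVIEW_QUEUE.append(exchange)
        return "needs_human_review"
    return "auto_approved"

def record_review(exchange: Exchange, approved: bool, corrected_response: str | None = None) -> None:
    """A human verdict (approve or correct) becomes a labeled example for retraining."""
    final = exchange.response if approved else corrected_response
    if final is not None:
        TRAINING_EXAMPLES.append((exchange.prompt, final))
```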

Hey, are you listening to this and thinking to yourself, man, I wish I could talk to David about this? Well, you're in luck. We have a special offer for listeners of the HR Data Labs podcast, a free half hour call with me about any of the topics we cover on the podcast or whatever is on your mind. Go to salary.com forward slash HRDL consulting to schedule your free 30 minute call today.

So I guess that leads to the next question, which is what are your hopes for AI for the future? Well, what I love to see is more and more people, you know, getting involved and being part of what I call almost like the front lines from either people who are seeing some of the

technological risks and they're out there if needed to prevent some of the issues that could be forthcoming in a legislative sense. So that's great seeing that legislative kind of arm starting to happen a bit. Being in Austin, we have a community called the AI Alliance, which is a group of

of individuals who have all come together from both business and the education sector, as well as even public service. And so what we're attempting to do is say, hey, AI is not going away. It's going to be part of

Right.

training centers to help educate and train people on AI today, so we'll be able to have a more employable workforce for the future. Which I think is pretty powerful. Well, it's not even just an employable workforce. It's also a more educated consumer.

Because as we both know, the AI market isn't just about getting work done, it's about buying stuff. You know, like Alexa has forever been that platform

that you could ask, you could generate a prompt and ask it to put stuff in your basket. And I can't use those words together because it will do that. That's right. Because you have one sitting right by. The other room, definitely not here. Yeah, yeah. But the ability for us to actually have it be a part of our lives,

will enable us to be able to utilize, whether it's the Apple Watch I have or, you know, the phone that's sitting next to me, and be able to be better consumers of the things we already have living around us.

And we've been using AI for a number of years now. I mean, if you think about it, when Amazon first started as a company, you know, they were a bookseller, and then they started to recommend books based on other books you read. Well, that was an algorithm, right? That's the basis. And using mapping technology, who travels anywhere without launching a type of a, you know,

Google Maps or Apple Maps or what have you. I mean, we just don't. And Waze is a great example of human-assisted training of AI because when Waze came about, it's like there's an accident and you would report it. And that was data going into the algorithms to help you with your transition. But a lot of times it was so naturally happening around us, we didn't really know what was behind the scenes again. So...

Again, that's, you know, other, you know, areas of concern. But I do see hope in the, you know, the participation level from, you know, all different genders and ages of individuals embracing AI. And again, I just can't stop harping on the we have to have diversity. We have to have, you know, gender. We have to have the age gap.

Absolutely.

And actually, the language models that exist today are actually not terrible. They're actually pretty good at being able to hear someone's voice, whatever dialect they have, whatever intonation they have, and be able to pretty accurately transcribe

what they're saying in a relatively short period of time. And that's just amazing because I used Dragon NaturallySpeaking, I think that was the Dragon technology's name, decades ago to try to write a book and it was awful. I spent more time correcting than I did actually talking. So now I can do it easily. I can actually talk into my computer or my phone or my watch and it does a really good job.

It is amazingly, it has improved so much, but it improved because you kept using it, right? And the more you used it, the more it got to know your voice and your intonation and your pronunciation. So it's just a matter of, you know, don't give up people.

If you're trying AI and it doesn't quite give you the right results the first time, you just have to try and try again. And we're not always used to experimenting. And that's also been a challenge in the HR profession. Starting my career in HR early days, it was you released a payroll system. You better be exact to the penny on everyone's payroll.

And so we were never taught that you could try technology and fail because the failures were pretty risky.

And what AI really... Oh, they have compliance capability. They have compliance issues related to it as well. And people don't want to... You know, you don't want a million phone calls about someone's paycheck being off, right? And so you worked very hard to use technology to be exact and to be detailed and to use it as prescribed. And now we have this technology that is much more open and loose and...

And that's where it's a little harder sometimes for people to make that transition because of the trainings that they've had in the past on how to adapt and use other types of systems that they may have used with inside the workplace or even personal use.

Yeah, if I can add on one hope that I have for AI. Oh, yeah, what is your hope? It's that AI becomes another tool that HR naturally gravitates to. And also that, like you mentioned, we're encouraging those schools, we're encouraging the community centers, the centers where the elderly are, everybody, to try and adopt AI

education and training courses to be able to raise the level of acumen of the populace so that, you know, it's because AI is surrounding them that

Forget about the Will Smith movies. Forget about the things you watch on TV. Let's cut through the noise and educate people on what it is and why we're living with it today and how we can utilize it so that it becomes a partner with us at work. It becomes a partner with us at home. And we realize the benefits of it instead of it either forcing itself on us or...

Or the legislation, which is really probably meaningless, gets passed, which is, you know, we're going to limit how it works in your world. That's never going to happen now because it's in our technology. But that's my hope is that people become more educated and they realize what it is before it becomes too late. Yeah.

Yep, I wholeheartedly agree. That's where we're at is a lot of need for education. And at least I've seen a number of great courses. There's opportunities to learn. And if you want to learn, there's so many opportunities to learn for free, too. Hopefully people pick up on it and actually take it because it's one thing to offer it.

It's another thing to get the impetus to go do it.

And so there's just a lot of different natural ways to bring that into workplaces. And as we know, it was done within our phones before we even knew it was happening, right? Yeah, exactly. Well, at some point soon, that AI overlord is going to employ us to help the bots. Yeah, it is. Yeah.

It's forthcoming, so... It's happening. Beth, thank you so much for joining us. We really appreciate it. Your insights are invaluable in this. And because you're swimming in it on a daily basis, maybe we'll reach out and we'll have you back on the program. Sounds great, David. Thanks for the opportunity. And it's good to see you again. Good to see you too. And thank you all for listening. Take care and stay safe.

That was the HR Data Labs podcast. If you liked the episode, please subscribe. And if you know anyone that might like to hear it, please send it their way. Thank you for joining us this week and stay tuned for our next episode. Stay safe.