
Martha Curioni - How to Responsibly Integrate AI into HR

2025/3/6

HR Data Labs podcast

People
Martha Curioni
Topics
Martha Curioni: Implementing AI responsibly means designing, developing, and deploying AI in a way that minimizes risk and negative outcomes, especially in high-stakes HR scenarios such as hiring, promotion, and layoffs. That applies not only to model selection and development, but also to extra steps at deployment to make sure people use the model as intended. Responsible AI implementation requires transparency, minimizing the influence of bias, supporting fairness, and empowering employees and managers to make better decisions. Before implementing AI, the data used to train the model must be carefully examined to ensure it is clean, reliable, and trustworthy. Substantial data cleaning and preparation are needed before using an AI model, along with descriptive analysis to understand bias in past decisions. Integrating AI into HR processes requires redesigning the whole process rather than simply bolting AI onto an existing one. The process design should include checkpoints that keep users from over-relying on the AI and overlooking potential problems. Because HR data is imperfect, AI models can produce inaccurate recommendations; the goal should therefore be for AI to help humans make better decisions than humans would make alone. Bias and bad-data problems in AI models can be addressed through methods such as data cleaning and bias detection. Ensuring AI is used appropriately requires process redesign, training, and ongoing monitoring of how the AI is used. A continuous feedback loop is essential to improving AI models, which requires users to provide feedback on the data and the models.

David Turetsky: (Mainly asks questions and builds on Martha Curioni's points, e.g., data security and privacy and where the model is hosted, and raises practical concerns such as over-reliance on AI and cost.)

Dwight Brown: (Mainly asks questions about data governance, data quality, and AI model transparency, and voices concern about the potential risks of AI models and users' over-trust, e.g., the accuracy of ChatGPT output and people's tendency to trust AI output too readily.)


Chapters
Responsible AI implementation in HR involves minimizing risks and negative outcomes associated with AI use in HR processes, such as hiring, promotion, and compensation decisions. It requires transparency, bias mitigation, fairness, and empowering employees and managers to make better decisions. Data quality, including data cleaning and preparation, is crucial.
  • Responsible AI minimizes risks and negative outcomes.
  • Transparency, bias mitigation, and fairness are key.
  • Data quality and preparation are essential.
  • Data security and privacy are important considerations.

Transcript

Welcome to the HR Data Labs podcast, your direct source for the latest trends from experts inside and outside the world of human resources.

Listen as we explore the impact that compensation strategy, data, and people analytics can have on your organization. This podcast is sponsored by Salary.com, your source for data, technology, and consulting for compensation and beyond. Now, here are your hosts, David Turetsky and Dwight Brown. Hello and welcome to the HR Data Labs podcast. I'm your host, David Turetsky, alongside my best friend, co-host...

And partner at Salary.com, Dwight Brown. Dwight Brown, how are you? I am wonderful. How are you doing, David? I'm okay. Well, we just got over some health scares, which is good because today we're talking to one of the most brilliant people we've actually had on the HR Data Labs podcast, Martha Curioni. Martha, how are you? Hi, thank you for having me back. And I am good. It's been sunny these days, so I'm enjoying the sun while it lasts.

Yes, yes, we're getting into winter. Well, we're actually getting into fall, which for a lot of us turns directly into winter with very little lag. But for those of you who don't remember Martha, Martha and Dr. Adam McKinnon were on many moons ago, and they were talking to us about how we can use machine learning to fix data problems in HR.

And it was one of the most popular episodes. And we're going to actually have a link back to that episode in the show notes. But we're also going to have a link to the code that Martha had built. And it's on GitHub. So easily accessible and extendable. And we're going to probably speak a little bit about that today. But more so, we're going to get into another topic. But Martha, why don't you explain to some of our newer guests...

who you are. - Hi, yeah, so I, let's see, where do I start? I have an extensive background within the HR space, having started in recruiting and worked my way through kind of talent, I guess, workforce strategies space. And recently, or not recently anymore, time flies, a few years back, I decided to train myself as a data scientist. So that's when I learned how to code and build AI and machine learning models and so forth.

And now I am working as a people analytics consultant. I do advanced analyses. I support implementation of people analytics tools, looking at processes around AI as HR organizations are looking to implement that and so forth. So that's kind of where I am today. And one of the more interesting things about Martha, Martha, where are you located?

I am based in Italy, which you cannot tell by my accent because I'm originally from California, but I moved to Italy about four years ago. Hashtag jealous. One of my favorite places in the world.

So Martha, what's one fun thing that no one knows about you? I don't know if I would say no one knows because of the whole class of people that know, but being in Italy and being an expat and working remotely, there are days where the only other adult I speak to is my spouse, which I love him, but sometimes you need to speak to other adults. And so I decided to sign up for a theater class, which is all in Italian. Wow.

And it happens once a week. And, you know, it definitely brings me out of my comfort zone, even if it were in English. And so then it being in Italian takes it to a whole new level. But at least the extrovert in me gets a little bit of social interaction once a week. So I'm enjoying it. That's wonderful. That's cool. That is really cool. Now, are you fluent in Italian, Martha? In a social setting, yes. When it comes to work, yes.

I would say very good level. I wouldn't say fluent, but in a social setting, yeah, I can have a conversation. Well, now you're going to test that boundary. Oh, in the class. Yes. I mean, I thought you were going to ask me some questions. No, gosh, no. I was waiting for that too. I'm like, yeah. And? No, that's about my limitation on Italian. No, we're good. We're good.

So that's really cool. So we're going to see you win a Tony Award at some point soon? I don't know. Maybe. We'll see. Or whatever the equivalent is in Italy. I don't know what kind of awards they have. Actually, it would be the Tony Award because that's Italian, right? Yeah. The Anthony Award. Hey, Anthony. How's Martha doing? She's great. She's really great. If you guys could see Dwight right now, you would see... Oh, my God.

Oh, who let you out of your cage today? Sorry. Hashtag dad humor. So let's transition to topic now, because this is the reason why we love doing what we do. We're going to talk about a really cool, very, very important topic for today.

And that's the responsible implementation of AI in HR.

So, Martha, let's talk about it. What does it actually mean to implement AI in HR in a responsible way? Yeah, so to start, let's just define what responsible AI is for anybody that doesn't know or is not familiar with the term. It's essentially, it involves kind of the design, the development, and deployment or implementation, if we want to use that word interchangeably.

of AI in a way that's going to help you to minimize risks that could happen with using AI and other negative outcomes. So if we translate that then into an HR setting, there are some HR use cases that are lower risk, right? Maybe automating tickets and some of that kind of stuff. But there are many HR use cases, at least all the ones you hear about if you go to any HR technology conference, right?

Are things like, you know, who do we hire? Who do we promote? In some cases, who do we fire if people are looking to, your companies are looking to lay off employees?

And, you know, or how much of a salary increase to give. I've seen people use it to inform salary increase recommendations. So, you know, minimizing risk and other negative outcomes, I think we'd all agree, are extra important given these use cases. And this is why I think companies really need to take the appropriate steps to ensure that the AI that they're going to be using is implemented in a way that is transparent,

that minimizes the influence of bias, supports fairness, and really empowers employees and managers to make better decisions, right? So that to me is what responsible AI means.

And that doesn't only mean picking a model or ensuring the model or developing a model that offers these things, right? That's only the design and development side. The deployment side is then taking the extra steps or the additional steps to

to make sure that people are using the model in the way that it's intended to be able to ensure that these things are happening, right? You can't just put the tool in people's hands and trust that they're going to use it the way that they're supposed to. That never happens. Is there another aspect to it, which also goes to the data that you're going to use to train the model on?

You know, what data are we using to the point we made before? Has it been cleaned? Do we have faith in it? Do we trust it? Have the decisions that were made using the data, are those things we want to actually be basing our forward-going decisions on? Does that come into it?

For sure. You know, that becomes, you know, one of the key points, right? In selecting the model, or picking a model, and the data that's going to be used. So some vendors out there, maybe they trained it on their own data and then they want to unleash it on your future decisions. Okay, well, I don't know if that's going to work. Many organizations don't have their data in a place where they can do it with their own data.

So there ends up needing to be a lot of data cleaning, a lot of data preparation and so forth. And really understanding, you know, even doing a descriptive analysis before you get to that point. If we use the example of promotions, looking at past promotion decisions: do we see that there are groups that are maybe getting promoted less or more, or what have you, in proportion, obviously, to their share of the overall headcount, right? The overall population. Right. And really understanding your data first is important, I would say, for sure.
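
To picture what that descriptive check might look like in code, here is a minimal pandas sketch, assuming a simple table with one row per employee; the group labels and column names are hypothetical, not from any dataset discussed in the episode.

```python
# Minimal sketch of the descriptive check Martha describes: are any
# groups promoted out of proportion to their share of headcount?
import pandas as pd

employees = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "C", "C", "C"],
    "promoted": [1,    0,   1,   0,   0,   1,   0,   0],
})

# Each group's share of total headcount...
headcount_share = employees["group"].value_counts(normalize=True)

# ...versus its share of the promotions that were granted.
promotion_share = (
    employees.loc[employees["promoted"] == 1, "group"]
    .value_counts(normalize=True)
    .reindex(headcount_share.index, fill_value=0.0)
)

summary = pd.DataFrame({
    "headcount_share": headcount_share,
    "promotion_share": promotion_share,
})
# A large gap between the two columns is a signal to dig into how
# past decisions were made before training anything on this data.
print(summary)
```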

One of the other considerations that I'd ask is: is there also a potential issue with where the model is located? Meaning, is it on our premises, or is it in the cloud, or is it on the premises of the application provider or the model provider? And the reason I ask that is because

The wild, right? Having our data and having our model and having our decisions in the wild and who would have access to the data, the decisions, the outcomes. Is that something that comes into this conversation as well? Or is that really just kind of a, you know, don't worry about that, David. That's down the road. That's not an issue for right this second.

No, I think it definitely is. I would think it's separate from responsible AI in the way that I'm defining it. But when it comes to AI in general, you know, it's definitely important, right? For example, I don't recommend somebody saying, oh, let me use my personal account with ChatGPT or Claude or what have you, take all this employee data, upload it, and ask it to analyze the data for me, right?

There are a lot of risks, but that's more of a data security privacy side as opposed to, you know, making sure that, to your point, the data is appropriate, the model does not have biases, and then it's being used as intended. It would seem that the, yeah, part of that data quality aspect of things is just understanding where your data is coming from, where it's pulling from.

What are the data sources you can control? What are the data sources you can't control? For sure. And I would think the other thing I would add, and I've gotten on a high horse about this lately. It's something that I bring up anytime I can in a conversation, are the processes that are capturing data. And I think so many times there are processes that are designed or sometimes just haphazardly come together, right?

And then there's data. And a lot of times the people that are designing the processes don't think about the data implications. Or, you know, it's kind of, here's the process, here's what we're doing, and the data is an afterthought. And so what that means is, for example, you know, if I want to look at mobility for my organization, for whatever reason,

but mobility moves within the company are not captured consistently in a way that allows me to then map those, then it makes it almost impossible for me to do that kind of analysis. And you can take that to promotions, you know. If promotions are not captured correctly, were they promoted, or did they apply for another job, and with that job came a promotion, right? And then if you're going to use that to inform future promotion decisions, how are you going to do that if you're not capturing the data consistently?
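
As an illustration of the kind of consistent capture Martha is arguing for, here is a minimal sketch of a typed job-move record; the field names and categories are hypothetical, not drawn from any particular HRIS.

```python
# Minimal sketch: capturing job moves as typed events so promotions
# and lateral moves can be told apart later, instead of guessed at.
from dataclasses import dataclass
from datetime import date
from enum import Enum

class MoveType(Enum):
    PROMOTION = "promotion"
    LATERAL = "lateral"
    DEMOTION = "demotion"

@dataclass(frozen=True)
class JobMove:
    employee_id: str
    effective_date: date
    from_job: str
    to_job: str
    move_type: MoveType  # recorded at the moment the move happens

moves = [
    JobMove("e123", date(2024, 4, 1), "Analyst", "Senior Analyst",
            MoveType.PROMOTION),
    JobMove("e456", date(2024, 5, 15), "Recruiter", "HR Business Partner",
            MoveType.LATERAL),
]

# Because move_type is captured at the source, a mobility analysis is
# a simple filter instead of reverse-engineering title strings.
promotions = [m for m in moves if m.move_type is MoveType.PROMOTION]
print(len(promotions), "promotion(s) recorded")
```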

Well, that gets to Dwight's favorite topic of data governance, right? And making sure that HR has a good data governance model. And that's exactly it, because it really gets to that data trust factor. I think that's one of the pieces with AI that's a little bit scary is the fact that

You know, there's a big aspect of this that is just sort of a black box. You don't know how the data is being put together. Sometimes you don't even know all the data sources that you're dealing with. So, you know, it really gets to that data trust factor. And how do you get that? I think that's a key question. Yeah.

For me, one of the ways to address the trust factor is when you have explainable AI as part of the interface or the model output.

So, you know, some models inherently have it, right? Regression models, you can look at a driver analysis. Or in other cases, you might have to put additional tools on top, right? So there's SHAP, there's LIME, and probably others that are coming out, to be able to offer that transparency.

So that you say, okay, we're recommending David for a promotion. Here is why. Here are the reasons that we are recommending him. That way the user can then look at those and either agree or disagree, right? Oh, no, that's not true about him. Or, yes, that's true, but...

That's not a factor that we want to consider in this case, whatever it might be. But that's how you, A, address some of the trust issues. And then B, again, it goes back to: the AI shouldn't be making the decision. The human should be making the decision. And by empowering them with that information, that's how you ensure that that happens, so that, again, they're using the AI as intended.
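
To make the SHAP idea concrete, here is a minimal sketch assuming a scikit-learn classifier trained on illustrative promotion features; the data, feature names, and model are hypothetical, and the version check covers the two return formats SHAP has used for tree classifiers.

```python
# Minimal sketch: per-recommendation explanations with SHAP.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

X = pd.DataFrame({
    "tenure_years":        [2, 7, 4, 10, 3, 8],
    "performance_rating":  [3, 5, 4, 4, 2, 5],
    "trainings_completed": [1, 6, 3, 2, 0, 5],
})
y = [0, 1, 1, 0, 0, 1]  # illustrative past promotion decisions

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
candidate = X.iloc[[1]]  # the employee being recommended
sv = explainer.shap_values(candidate)

# Older SHAP versions return a list of arrays (one per class); newer
# versions return a single array with a trailing class dimension.
promote = sv[1][0] if isinstance(sv, list) else sv[0, :, 1]

# Show each feature's push toward (or away from) "promote", largest
# first -- the "here is why" that goes in front of the manager.
for feature, value in sorted(zip(X.columns, promote),
                             key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{feature}: {value:+.3f}")
```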

Like what you hear so far? Make sure you never miss a show by clicking subscribe. This podcast is made possible by salary.com. Now back to the show. Well, why don't we talk about that as part of the second question now, which is for HR organizations that are planning to actually implement some kind of artificial intelligence, what are the most important steps that they have to take to ensure that it's actually going to be implemented responsibly?

So the first step is something we've already covered a little bit, which is: first, check your model, right? Don't just trust the vendor or the data scientists that you hired to take all the steps necessary to make sure that it's a good model, right? Right. Make sure it's transparent. Make sure the end users can understand the output. Ideally, it will have explainable AI, so it's not that black box that Dwight mentioned. And test the model yourself, right?

run through it, see what recommendations come out. And, you know, do you notice bias? Are you seeing bias come through? Do the recommendations make sense? You know, that's how you want to test it before you implement anything. Once you've done that and you say, okay, the model's good. You know, I'm good. I like the recommendations. Then you want to be clear about your goals and objectives, right?

How are we going to be using this model? What are the outcomes that we expect to have? Is it, you know, more fair decisions? Is it saving time for managers? Whatever it may be, define those ahead of time so that over time you can track those measures

and decide, is it working? Is it doing what we wanted it to do? And if not, why? And should we keep using it, right? Because otherwise you're just going to keep using something that maybe is making things worse. The next part is, and this one I can't emphasize enough, is you need to redesign your process

around the AI. Don't just bolt it on top of an existing process because if you do that, there's a really big risk that it's not going to be used as intended or it's going to not get used at all, which is also a shame if it is something that you're hoping can help make better decisions. So work through from beginning to end, what should the new process be?

incorporating AI, incorporating checks and balances, making sure that there are points where the users are being prompted, so that they're not just auto-clicking through things and so forth. It actually reminds me of an example I heard. I was listening to a podcast. Oh gosh, I can't remember who it was. But anyway,

NASA, because they've been using automated systems for years, right? They build in these kind of, I guess, faults

that everybody knows are there, so that you don't go on autopilot, because you know that there are going to be random bad things coming up, or things that you shouldn't trust, so that people don't just go on autopilot and do things. Right. So can you build a process incorporating something like that? I don't know, it's an idea that came to mind. But, you know, you design that process, and with the process then comes proper training, right?

Right. You don't give somebody a car without teaching them how to drive it. I don't know about that. You haven't driven in the U.S. maybe for a few years. Ideally, you would teach them how to drive it. You know, the risks. Ideally, yes. Yeah. The risks and everything. Right. What to do if this happens. Right.

And then you run a pilot. One recommendation I have would be to have, you know, one group do it with the model, another group do it without, and then compare the outcomes, right? And understand, again, are we achieving the objective that we want to achieve? And then over the long term, continue to monitor not only the outcome, but also how are people using it, right? As much as you can. Yeah.
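
As a sketch of how that pilot comparison might be scored, here is a minimal example testing whether the group using the model differs from the control group on one outcome metric; the metric and the numbers are hypothetical.

```python
# Minimal sketch: comparing a pilot group (with AI assist) to a
# control group (without) on a single outcome metric.
import numpy as np
from scipy import stats

# Hypothetical outcome per decision, e.g. days to fill a role.
with_model = np.array([21, 18, 25, 19, 22, 20])
without_model = np.array([27, 24, 30, 26, 23, 29])

# Two-sample t-test: is the difference likely to be real?
t_stat, p_value = stats.ttest_ind(with_model, without_model)

print(f"mean with model:    {with_model.mean():.1f}")
print(f"mean without model: {without_model.mean():.1f}")
print(f"p-value: {p_value:.3f}")  # a small p-value supports a real effect
```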

One of the things that I think about with this is that it seems like the possibility of overtrusting the output is something you probably see more with AI. Because if you think about AI output, for instance, if I go to ChatGPT and put in a query, what it outputs really

sounds really good. And a lot of times it seems on point, but if it's a topic that I don't know much about,

I could be kind of starstruck with all the output and the way that it words things. And it's easy to forget just exactly what the potential pitfalls are with that. And I think that gets to what you're talking about, where there's got to be some education around it. There's got to be some understanding around it that's there up front. Otherwise, we end up

just sort of blindly trusting it. And there are a lot of studies that show, there's one in particular that comes to mind. I couldn't tell you where to read up on it, but you could probably Google it, where there was a building and there was an alarm, like a fire alarm or something. And they had a robot who was clearly taking everybody in the wrong direction. People knew what the correct direction was, but they still followed the robot.

And so people definitely get overconfident in the output of AI because they think, oh, well, this is technology. It's been trained and it should know better than me, even when it is faulty or what have you. So for sure, you definitely...

have a lot of that. And let me, let me expand on that a little bit. We know and have seen that some of the answers that have been coming out of ChatGPT are actually lies, and that ChatGPT actually doesn't know the answer. They're making guesses, which are wrong. And there are kids in school who have been using, verbatim, the stuff that comes out of ChatGPT. And it's just wrong because

whatever it's pulling from is just not true, or it doesn't have enough answers. So it makes shit up. Pardon my French. And the one thing I want to talk about in the description that you just gave, in the six steps, is that you're basing your career, you're putting your company at risk, on developing a model.

And you need to make sure that the thing it's doing is actually doing it correctly. Now, you mentioned before, Martha, sorry, I'm going a little all over the place here, but you mentioned before that some of the ways in which AI has been implemented are the bots doing a specific task. Like,

Is this form filled out? No. Send it to the right person, get it filled out, and then it sends on either to another bot or to a person when it has the correct information. And in that way, you can actually check it. You know what steps it's trying to follow, and you can make sure that it's accurate, and you can QA it. Some of these more interpretive models, some of the more sophisticated models,

The steps you mentioned, they're going to have to be pretty complex, aren't they? You're going to have to do a lot of QA work to make sure that the model is actually generating what it's supposed to. Yeah, I mean, that's where explainable AI comes in, which obviously is not available with all models, right? With a large language model, explainable AI becomes a lot more difficult. Those tend to be a lot more black box, right? So let's

first address the models that can have an explainable AI component. With those ones, you make sure that when people get the recommendation, they're also getting the

you know, kind of the reasons behind the recommendation. And then maybe in the process there's some kind of step to make sure that they're reading that, where they agree or disagree, or they have to add in some comments, or whatever it may be. I'm also a big fan of human-centered design, right? So you're going to work with

the practitioners, the employees, the managers, whoever, to understand from them what is going to be the best way to design it so that it's not annoying to them. Because then you end up getting practices of people just putting in a space in that text box just to bypass it or what have you, while also making sure that you're achieving your objective. So those types of models, it becomes a lot easier to go through those steps of designing

hey, let's build in some of these checks along the way to make sure that people understand the recommendation, agree, and know that they have every power to disagree with the recommendation. When you get into some of these more black box models where the explainable AI is not as accessible, then it becomes, to Dwight's point, a lot more about the education side, right? Helping them understand that

you know, it could make mistakes or it can make things up or whatever it may be. And maybe that's where the NASA example comes in, right? Where you say, look, we are going to randomly give you fake answers so that you can, to keep you on your toes, right? And make sure you're checking your sources or what have you. You know, I don't have all the answers for that. Again, you have to work through the specific use case and your organization, the culture and so forth. But education is key.

It seems like what you've outlined is, and I'm not trying to demean it, it seems a very expensive process. And yes, it should be because we're building in a new technology. But the six steps you mentioned, there's going to be a real cost involved with not only implementing this,

the training, the education, the pilot, even just the technology and the data itself, that's a lot of investment. Or are you thinking that this could be relatively small things, small samples, and it doesn't need to be that expensive? I would say that for the technology, the cost of that and so forth, you know, that depends on the technology. Or maybe you have a team, an in-house team that's building something. But when it comes to, you know,

Redesigning a process, that's obviously a lot of work, as you mentioned, training and a pilot. But a pilot by nature should be small scale, right? So if you're able to do it small scale, test your assumptions, make sure it works, tweak the process because once you put it into place,

inevitably there's always something that doesn't end up working quite as you imagined it would. And then you tweak it and so forth before rolling it out to the broader organization or a business unit within the organization. You can still scale it out slowly, but

But by doing it in a pilot setting, you definitely minimize the cost. So then maybe you have, you know, one person within your team who is responsible for, you know, kind of this whole, those, these six steps, right? And the workshops around the process, the workshops around the training and everything else. But with the end goal, or shall I say,

the why, you know, what your goals and your objectives are, and measuring that, is so much more important, because

You want to make sure that it's worth the investment and you want to make sure that you can accurately gauge whether the pilot was successful or not before you roll it out and do start to spend more money or in some cases, put your organization at more risk depending on the use case. Hey, are you listening to this and thinking to yourself, man, I wish I could talk to David about this? Well, you're in luck. We have a special offer for listeners of the HR Data Labs podcast.

a free half hour call with me about any of the topics we cover on the podcast or whatever is on your mind. Go to salary.com forward slash HRDL consulting to schedule your free 30 minute call today. Let's get to question three, which to me is one of the things that we've been kind of talking about most of the episode, which is:

We all know, if you've ever listened to this podcast, that HR data is far from perfect. And if the AI is trained on that bad data, there are real risks that the AI will generate bad recommendations. How do those steps that we just outlined in question two help with that challenge?

Listen, I think it would be great to have AI that has perfect recommendations, but we all know that that's unlikely, right? Because we don't have perfect data, like you said. So my question to the both of you is, or even to the listeners, is can the goal just be to make better decisions than humans are going to make alone, right? I've done a lot of research in previous roles on DEI, and I just...

I really don't trust people to make good decisions if you leave them to kind of their own devices, a.k.a. biases, because that's why a lot of times the data is so bad, right? Because in the past, they've made decisions with these biases and so forth.

you know, the good news is that there is a lot that can be done to address bias or bad data in models, right? You can clean the data, you can test it for biases in many, many different ways. And let me be clear: it's not just, let me take gender and race and age out of the model. There are other data points that can act as proxies for those. So it's about taking the appropriate steps to address some of those things.
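
One simple version of such a bias test is a selection-rate comparison. Here is a minimal sketch using the four-fifths rule of thumb on hypothetical model recommendations; the groups and data are invented for illustration, and comparing the lowest rate to the highest is a simplification of the formal rule.

```python
# Minimal sketch: checking model recommendations for adverse impact.
import pandas as pd

recs = pd.DataFrame({
    "group":       ["A", "A", "A", "A", "B", "B", "B", "B"],
    "recommended": [1,    1,   0,   1,   0,   1,   0,   0],
})

# Selection rate per group: the share of each group the model recommends.
rates = recs.groupby("group")["recommended"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"disparate impact ratio: {ratio:.2f}")
# A ratio below 0.8 (the four-fifths rule of thumb) flags the model
# for a closer look before it gets near real decisions.
```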

Then you add on top the explainable AI factor or the transparency factor, and then you start to have kind of a model that hopefully can make recommendations that are going to be better than a person making it themselves. But

Again, as I mentioned before, you need to think beyond the model. You also need to think about, are people using it as intended? And so that's where the redesigning the process and the training really comes in to make sure that people are using the model. The human in the loop is not something that is just a term, but people are actually doing it. Because let's be honest, people...

are busy and in many cases just lazy. And so if they can just, you know, take the recommendation, how many people managers are managing way too many people because so many companies have tried to increase span of control and all of these other things to save costs. And then you put this tool and then that makes recommendations and they're like, hey, now I don't even have to think about this. I can just, you know, the model says to promote these people. Exactly. So, you know,

That's where, again, the process, the training and so forth come in and monitoring its usage to make sure that it's being used appropriately. And that continuous feedback loop that when things are discovered about the data, having a way to have that feedback so that people who are using the data, using the

AI to pull the data, kind of get a more refined lens the longer that they're doing this. Because, you know, I think that helps to bring up the blind spots that might otherwise be missed. And, yeah,

kind of an overall process that just keeps going as opposed to being a defined starting point and a defined endpoint. There really isn't a defined endpoint. It's just a loop, much like data analysis with using Excel and everything else.

Oh, for sure. I mean, that could be part of your process, right? You position it not as, hey, we're putting this step in here to make sure you don't go on autopilot. You position it as, hey, this is how you give us feedback. We recommended you promote David. Yeah.

You look at the reasons. If you don't agree, you need to tell us why, so that in the future we can make the model better. Right. Right. Yeah. Or you do agree, and so forth. And that feedback loop. And honestly, I don't know that there are too many tools out there now that offer it.

ChatGPT obviously has the little thumbs up and thumbs down at the bottom and stuff like that. But within the HR space, just thinking of tools that I've used, aside from, like, do you like this job recommendation or not, I don't know that I've seen too many opportunities to give that feedback. So it's definitely something for any HR tech vendors that are listening, something to think about. Yeah.
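
For a sense of what that feedback capture could look like, here is a minimal sketch of a feedback record that requires a reason whenever the reviewer disagrees; the schema and field names are hypothetical, not from any existing HR tool.

```python
# Minimal sketch: a feedback record for each AI recommendation, so
# disagreements (and why) flow back to the model owners.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RecommendationFeedback:
    recommendation_id: str
    reviewer: str
    agreed: bool
    reason: str  # required when the reviewer disagrees
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def record_feedback(store: list, fb: RecommendationFeedback) -> None:
    """Reject empty disagreements so 'just type a space' can't bypass it."""
    if not fb.agreed and not fb.reason.strip():
        raise ValueError("A disagreement must include a reason.")
    store.append(fb)

feedback_log: list = []
record_feedback(feedback_log, RecommendationFeedback(
    recommendation_id="promo-2024-001",
    reviewer="hrbp.jones",
    agreed=False,
    reason="Candidate lacks the experience required for the target role.",
))
print(len(feedback_log), "feedback record(s) captured")
```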

Well, we've been trained on this a little bit, Martha, in the consumer side, because if you look at Netflix or other streaming services and they do the thumbs up, thumbs down, and is this recommendation, did you like this series? Did you like this movie? Thumbs up. Yay. Okay, well, I'm going to recommend more movies like this. So we're definitely getting that in the consumer side. And I think that does inform how the feedback loop can help with these recommendations and

Because at least you're getting that immediate feedback of, did this help you? Did you use this additional information to make your recommendation?

And in that way, then we can actually get at least some understanding about whether it was good or not. But it would be better if we actually had a little bit more like, are you kidding me? You're promoting David? He's a terrible performer. Why would you do that? Or, yeah, I think David's a bad recommendation because he doesn't have the skills or experience necessary for this. So it would be better if it would be more verbose, but at least the thumbs up, thumbs down is something...

I think at least we've gotten a little bit more used to. No, for sure. And I think to your example, you know, another thing to consider is not just giving it to managers, but making sure the HR business partner has the same model output so that they can also, as part of the process, hold the managers to account to make sure, you know, hey, you know, Dwight wasn't on the recommended for promotion list. So

Talk to me about why, you know, not in a way that's challenging them, right? Because you don't want them to feel like, oh, I have to work from the list. But, you know, talk to me about Dwight. Like, well, you know, what's going on there? Or to your example, you know, David, why are you suggesting that you promote him? He's a terrible employee based on other things you've said about him before, which I don't think that's true, David. But to extend your example. And so when you empower the HR team,

With kind of the same information, it positions them to be able to have those conversations, to challenge where appropriate, and to, you know, again, help make sure that you achieve your objectives. In many cases, these might be self-service tools. And if the manager requests kind of a slate of successors, let's just say for a job, then that HR business partner should probably get a console that tells them that the manager

made a request for a slate of successors. So at least they have that good understanding to be able to check, because otherwise they'd have to find out kind of after the fact, instead of knowing, you know, here are the alerts of things that my managers have requested, and here's what the results were. So I can at least be informed as well as be a good

business partner to them making those decisions and inserting myself to be able to provide context for that decision, if that makes sense. Yeah. And, you know, you might say, okay, well, where's the HR business partner going to get time to do that? But in their case, instead of them having to dig through the data and try to make that list, all they're doing is validating the list and having some conversations around it, right? They skip that first step. Yeah.

So it might be part of the loop. Yeah. Part of the workflow that we defined in the six steps.

We could talk about this all day. Do you have a couple more hours so we can continue? Unfortunately, no. I have a little bit more minutes than hours. I have to go feed some hungry kids soon. No, I'm just kidding. Yeah, I myself am getting hungry for lunch. So I think what we're going to have to do, Martha, if you don't mind, we'll have to come back to this because there's going to continue to be an evolution

of AI in the world of HR. We've been talking about it for years, but this year especially, and most especially if you hear some of the episodes that we have from the HR Technology Show in 2024, pretty much AI was everywhere. And so I think we're going to have to bring you back again, if we can get you back, to talk a little bit more about it. Not just the ethical nature of AI and the implementation

of responsible artificial intelligence, but also then what happens when it goes bad, or some other outcomes, and kind of the lessons learned from that, if that's okay. Yeah, I'd love that. Well,

Well, Dwight, thank you very much. Thank you. Thank you for being with us, Martha. Thank you for having me. Martha, thank you very much. You're awesome. It's always a pleasure to talk to you. I always learn a ton from you. And that's the reason why we love having you on the HR Data Labs podcast. Thank you. Thank you all for listening. Take care and stay safe.

That was the HR Data Labs podcast. If you liked the episode, please subscribe. And if you know anyone that might like to hear it, please send it their way. Thank you for joining us this week and stay tuned for our next episode. Stay safe.