This is Dan Turchin from AI and the Future of Work. I've digitized my voice with the help of ElevenLabs. For today's special episode, you'll hear commentary from Digital Me. The Real Me approved the content and, of course, approved the digital twin. Let me know what you think in the comments.
Today's episode is in honor of International Human Resources Day, celebrated every May 20th. It's a day to recognize the contributions of HR professionals and the positive impact they have on organizations and employees. It's also a time to reflect on the future of the HR profession. And today, that future is being shaped by AI. One of the biggest challenges facing HR leaders today is integrating AI into hiring:
How do we use it to accelerate hiring while ensuring fairness and compliance? AI is impacting every aspect of people operations, accelerating candidate selection, automating onboarding, and even predicting job performance. But it also raises serious questions. How do we prevent bias? Who's accountable when AI makes bad decisions? And what happens when regulation struggles to keep up with automation?
To answer these questions, we've brought together experts at the forefront of AI-driven HR. You'll hear from a startup CEO disrupting hiring automation, a top government commissioner shaping AI regulations, an HR leader from SHRM, and a future of work visionary helping organizations create ethical AI cultures. But first, let's start with the fundamentals:
Why is AI in hiring both an opportunity and a risk? And what happens when automation goes too far? Our first guest is Sean Behr, CEO of Fountain, a platform that has helped more than 80 million job applicants in 75 countries. Sean's career spans industries from fleet management to ad tech, always at the intersection of software and scale.
Listen as Sean explains why automation must be balanced with human oversight and how AI can unintentionally reinforce biases in hiring if not designed ethically. What I can tell you is that most of our customers insist on, and I think wisely insist on, a human touch point to ensure that that person is the right kind of person for their organization, knowing full well that humans are also not perfect evaluators or perfectly fair people. But that is where we are. Where we are seeing rapid AI adoption and rapid AI impact is in everything prior to the hiring decision. So, you know, collecting things like people's availability and understanding how people describe their ideal schedule and availability, that's an AI problem.
Figuring out when a recruiter is available to interview, understanding when someone needs to reschedule, and providing new availabilities, those are all things that AI can have a deep impact on. Asking questions, evaluating responses, all kinds of things that go into that, you can use AI for. But when it comes to the ethical side, making the final decision rests with the human being. You alluded to the fact that
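Sean's scheduling examples (collecting availability, matching it against a recruiter's calendar) reduce, once the free-text preferences are parsed, to plain interval intersection. Here is a minimal sketch of the matching step such a tool might automate; the time slots are hypothetical, and a real assistant would first use AI to turn "I'm free weekday mornings" into structured intervals like these:

```python
# Minimal sketch of availability matching, the kind of pre-hiring task Sean
# describes AI scheduling assistants automating. Slots are (start, end) in
# minutes from midnight and are hypothetical examples.

def overlap(slots_a, slots_b, min_minutes=30):
    """Return intervals where both parties are free for at least min_minutes."""
    matches = []
    for a_start, a_end in slots_a:
        for b_start, b_end in slots_b:
            start, end = max(a_start, b_start), min(a_end, b_end)
            if end - start >= min_minutes:
                matches.append((start, end))
    return matches

candidate = [(9 * 60, 11 * 60), (14 * 60, 17 * 60)]   # 9-11am, 2-5pm
recruiter = [(10 * 60, 12 * 60), (16 * 60, 18 * 60)]  # 10am-12pm, 4-6pm

print(overlap(candidate, recruiter))  # [(600, 660), (960, 1020)] -> 10-11am, 4-5pm
```

The deterministic intersection is trivial; the value of the AI layer Sean points to is upstream, in understanding how people describe their ideal schedule in the first place.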
While AI can seem scary, certainly when it is used to automate the hiring process, we always need to remind the community that all AI is a data problem,
and it's perfectly designed to replicate human bias. So every time we decide that a task is not appropriate for AI-driven automation, we need to confront an underlying question: where did the AI learn to be biased? Is that a conversation that you have with the team? There's some amount of responsibility that you have to take for the underlying data.
Yeah, it is. Absolutely. I think there are two interesting things here that you're talking about. One is the underlying data set, and does it have implicit or explicit bias built into it? The other one is the alternative, right? And I always found it interesting in the autonomous vehicles category. An autonomous vehicle would make a mistake, there'd be some interaction with a human being, and people would be up in arms about how this autonomous vehicle could make such a bad mistake driving on the roads. And then on the flip side, you have human beings who are also not wonderful at driving. I don't know what the statistic is, but something like 75 or 80% of people consider themselves to be an above-average driver.
It's hard to have 80% of people believe that and actually have it be true. So I think you have two things. You have the underlying data set. And then, and this is maybe a prediction for your listeners, we'll see how I turn out on this one, I believe the conversation around ethics in AI, specifically vis-a-vis hiring,
one of the byproducts of that discussion may be a real introspection into what human biases and processes are already impacting the hiring experience. So even if you're not going to implement any AI hiring, the very fact that you're talking about bias in the hiring process may lead some great HR professionals to say, wait a second, while you guys are worried about whether the computer is biased,
I also want to look at our current process. Like, what are we doing today that is biasing our employees and biasing our process of hiring? And I'll give you a great one, right? Like, you could argue requiring a resume for all jobs, even with the best of intentions, is leading to some bias, right?
And so I think it's a healthy conversation. I think it's going to be a real healthy conversation for everyone, regardless of how deep into this kind of AI ethical question you get. Sean made it clear:
AI can make hiring faster, but not necessarily fairer, which raises a crucial question. How do regulatory bodies ensure AI hiring remains fair and compliant? To answer that, we turn to Keith Sonderling, who at the time of this interview was serving as one of five federal EEOC commissioners and now serves as United States Deputy Secretary of Labor. He continues shaping federal policies on AI and employment discrimination.
Commissioner Sonderling has published extensively on the risks of AI bias in hiring and has worked on legislation to ensure AI-driven employment decisions align with civil rights laws. In this segment, he breaks down what companies must know about AI compliance, how to audit automated hiring tools, and why ignorance of the law is no excuse.
So as we've learned on this podcast from mutual friends like Art and Juliet and Thomas Otter and Josh Bersin and John Boudreau and other legends in the field, increasingly, AI is becoming pervasive throughout the HR process. Essentially, every domain from selection through to performance evaluation, talent management, learning journeys, and yet, as I referenced in the AI Fun Fact, that does not absolve
employers from complying with the underlying laws. What do you see as the risks of these technologies automating tasks that have always been the responsibility of humans? Well, you so eloquently talked about how AI is all over the place in HR, and even more: for every function of HR, A to Z of the employee-employer relationship, there is some AI out there,
as you know, promising not only to make those employment decisions more efficient and more economical, based upon data, taking a skills-based approach to hiring, all these things that we've heard constantly. But what has been so important for me since I've been an EEOC commissioner is to make AI in HR one of the top priorities of the agency. Before I answer your question, I just really want to
shed light on why I spent my time as commissioner focusing on HR technology over everything else we have to deal with. Because in HR and in compliance, just like at the EEOC, you are constantly putting out fires. You are constantly going in different directions depending on what's going on out of your control in the news. So just a really quick, brief history, looking at it
from a compliance perspective. After the recession in 2008, between 2010 and 2012, there was a huge wave of workforce reductions, and older workers were disproportionately impacted. So then there was a spotlight on age discrimination and how we make sure older workers can stay in and come back into the workforce. Then the #MeToo movement happens.
And all resources have to go to sexual harassment prevention. Then the US Women's Soccer Team and pay equity, and everyone's talking about pay. Of course, all these things have longstanding laws from the 1960s; none of it's new, but the focus changes. Then COVID, and it was about accommodations and vaccine mandates. Then George Floyd, and it was about racial injustice in the workplace. So there's always going to be that significant distraction, not only for you as HR professionals, but for us here at the EEOC, of where we have to spend our resources.
Knowing that, I really want to say: how are we going to get ahead of the next #MeToo movement? How are we going to get ahead of the next catastrophe that HR professionals are going to have to deal with? That's where I started diving into this, when I realized how prolific it was and how many different options there are for HR professionals, whether on the talent side, the management side, the accommodation side, you name it, to have AI make those decisions not just more efficiently and economically, as I just said,
but also to remove bias, thinking that these artificial intelligence tools can
be designed to eliminate the biggest problem in HR, which is the humans, who have caused the bias, which is the reason my agency exists and continues to exist. In the last two years, forget about robot discrimination, which we'll talk about in a moment. Let's just talk about the state of everything. There are 80,000 cases every single year, and that's increasing. In the last two years, we've collected $1.2 billion from employers for violating these laws, right?
And that's before we even know about these AI cases. So there's already an issue here. So you have a lot of very smart people, much smarter than us, who can design this AI with that in mind. And I've taken the approach that if AI is carefully designed and properly used, it may be able to help us eliminate some of that human bias. But at the same time, you could just flip what I said:
if it's not properly designed or if it's not carefully used, which are two separate and distinct concepts we can talk about, because some is on the vendor and some is on the corporation using it, then it could potentially scale discrimination larger
than any one individual could. So after I dove into it and realized the pressures that everyone in HR departments, everyone who's designing, developing, and deploying these products, faces, it's like, okay, well, how do we set the guardrails around this? How do we now talk about, as an agency, knowing that this is where HR is going, what our role is? How do we discuss what the law is? And what I found there was significant, massive confusion
about the law. And that's why I've been so aggressive in the HR tech space. That's why I've been so aggressive in the HR world to make this boring, to simplify this. And let me tell you why. Yes, there are a lot of new legal proposals, and they're causing confusion about what you can use and what you cannot use. And I'm happy to talk about some of those. But at the end of the day, I've simplified it by saying there's only a finite number of employment decisions: hiring, firing, wages, training, benefits, promotions, right?
And your company is making those decisions whether you're using AI or not. But that's what the law regulates, and it has regulated since the 1960s. At the end of the day, whether you're using these tools to completely make that decision, whether you're using the tools to augment that decision, or any of these other buzzwords we're hearing, right? Assist, human in the loop, all that.
That doesn't matter to us. What matters to us is the employment decision and whether or not there's bias in that employment decision. That is what we're going to look at, and that's what our laws apply to: the employment decision. And AI hasn't come up with a new employment decision yet, right? So how do we simplify this as a law enforcement agency, a regulatory body that is responsible for now diving into this? We have to look at it the same way we've looked at everything else. Is the
decision the employer made based upon business need, based upon merit, all these lawful factors, or did bias come into play? And I've argued that AI can help make it better and can potentially make it worse. It all goes back to the design and the use.
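One concrete way to audit the employment decision itself, regardless of whether a human or an AI tool made it, is the four-fifths rule of thumb from the EEOC's Uniform Guidelines, which compares each group's selection rate to the most-selected group's. Here is a minimal sketch; the applicant and hire counts are hypothetical:

```python
# Sketch of the four-fifths (80%) rule of thumb from the EEOC's Uniform
# Guidelines on Employee Selection Procedures: a group whose selection rate
# is below 80% of the highest group's rate is a conventional red flag for
# adverse impact. The counts below are hypothetical.

def adverse_impact(outcomes):
    """outcomes: {group: (applicants, hired)} -> {group: impact ratio}."""
    rates = {g: hired / applicants for g, (applicants, hired) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

ratios = adverse_impact({"group_a": (200, 60), "group_b": (150, 30)})
for group, ratio in ratios.items():
    flag = "below 4/5ths threshold" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

The check is deliberately tool-agnostic, which mirrors the commissioner's point: the law looks at the decision and its outcomes, not at whether software or a person produced them.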
Commissioner Sonderling highlighted an important point. Compliance is not optional. AI hiring tools must be transparent, explainable, and unbiased, but ensuring responsible AI adoption doesn't stop at compliance. HR leaders must take an active role in shaping ethical hiring practices.
To understand how HR professionals can balance innovation with responsibility, we turn to Guillermo Corea, former managing director of SHRM's Workplace Innovation Lab. The Society for Human Resource Management influences HR policies worldwide, with over 300,000 members in 165 countries.
Guillermo helps HR teams adopt AI in a compliant and ethical way, ensuring AI-powered hiring solutions serve both employers and candidates fairly. Listen as he shares how AI is already transforming hiring and the steps HR leaders must take to keep AI fair and beneficial. Shifting from fax machines to AI, I'm not sure there's a good segue, but increasingly, AI is being
used to automate a lot of functions in HR. And it's an interesting use of technology, but I would posit there are some potential downsides of having AI screen resumes or make decisions about who gets hired, promoted, fired, etc. How do you think about the ethics and some of the opportunities
to introduce automation into the workplace, but also some of the challenges? Yeah, that's a really tough issue to think about. One of the things that we're going to start looking at is the building of the AI, right? Because
people are the ones who are creating the AI. So what biases could software developers, for example, be putting into the AI? I'm sure you've heard about algorithms not being too DEI friendly, that is, diversity, equity, and inclusion friendly. So I think that that's really
the first step of looking at that. I think that once people start getting comfortable with using that technology, and as long as they're able to trust the technology, then I don't think it's going to be a problem using it. As a matter of fact, I think it's going to make a lot of stuff more strategic, or more efficient I should say. I could totally foresee a lot of
HR manual processes right now going away because of AI. So first you have to really gain that trust in order for people to start feeling like they can use the technology. And I'm going to give you an example, actually. So I'm a big tennis fan. And one of the things that I began to notice last year was that at the tennis tournaments, they no longer have line judges.
right? But it took a few years for the players to get comfortable trusting that all these cameras and systems they've now implemented at stadiums are accurate in the line calls they're making, right? So I think it's a similar situation in the workplace. I think people just need to get there.
Once they start trusting the technology, then I think you're going to see a lot of great things happening. Every sport, not just tennis, right? There are the purists. I'm thinking of baseball.
Yeah, or soccer, actually. In last year's World Cup, I think it was the first time that they implemented VAR for the offside calls, as well as VAR for goals. And as a matter of fact,
during the final game, Argentina was given a goal after VAR confirmed that it had been a goal. Amazing, amazing technology. I call that progress. What do you say to all the purists, whether they're HR purists or tennis fans or soccer fans, who say,
that's not the way the game was meant to be played? Honestly, they need to jump on the bandwagon or else they're going to be left behind. And I actually have a perfect example for that, and that is the HR blockchain. Right. I think that that's one of those things
that is truly a disruptive technology for the workplace in my mind. No offense to background-check companies out there, but I view them as dinosaurs now. Once this blockchain gets really, really going, there's not going to be a need to contract out to a background-check company to verify a new employee joining your organization, right? You know, if
you know, if you have somebody who's already been verified by other companies as having worked there, as having the skillset that they say that they have, as having the degrees that they say that they have, right? Why do you need to go back and recheck or re-verify all that information, right? There's no need for it. How would that user's record on the blockchain get updated as their profile changes?
Well, it's really interesting that you're mentioning that because I did an interview earlier on today where the same question came up. And so you're going to have the employee or the person
having the ability to update their own credentials, right? But at the same exact time, there's also going to be verification happening, right? So, for example, I have an MBA from Cornell. If I put down somewhere that, hey, I got an MBA from Cornell,
right now, the way that it works is that you have to go out to Cornell and verify that information. But if I have a digital credential that has already been verified, then it's just a matter of the other organization plugging into the same network and seeing that it's been verified by Cornell. Or if it hasn't been verified yet, then think of it this way:
I'm plugged in, the organization's plugged in, Cornell is plugged in, right? And so then it all happens instantly: I'm going in, I'm saying, hey, I have an MBA from Cornell. There's an automatic ping out to Cornell. Cornell says, yes, he has an MBA from us. And then that ping goes back to the organization, telling them that it's been verified. So that's an improvement.
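The three-party handshake Guillermo describes (candidate asserts a credential, the employer's system pings the issuer, the issuer attests) can be sketched without any blockchain machinery. The names and the in-memory registry below are hypothetical stand-ins; a real deployment would replace the dict with signed records on a shared ledger:

```python
# Minimal sketch of the three-party credential check described above:
# candidate asserts, employer queries, issuer attests. The issuer records
# here are a hypothetical in-memory stand-in for a verifiable-credential
# ledger; names and institutions are illustrative only.

ISSUER_RECORDS = {
    "Cornell": {("G. Corea", "MBA")},  # credentials the issuer has attested to
}

def verify(candidate, credential, issuer):
    """Employer-side check: 'ping' the issuer's records for the claimed credential."""
    return (candidate, credential) in ISSUER_RECORDS.get(issuer, set())

print(verify("G. Corea", "MBA", "Cornell"))  # True: issuer attests to the claim
print(verify("G. Corea", "PhD", "Cornell"))  # False: no such attestation
```

The point of the design is that the attestation happens once, at the issuer, and every subsequent employer reuses it instead of re-running the same background check.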
Absolutely. I often say you never want to be on the wrong side of innovation. Absolutely. Throughout history, even going back to the Industrial Revolution, the ones who decided to break machines, the Luddites, they were on the wrong side of innovation. Same philosophy holds today. Absolutely. Yep. Yep. No doubt about it.
We've heard how businesses are integrating AI into hiring while staying compliant. But what's next? Will AI-driven decisions earn employee trust? Or will ethical concerns slow adoption? To close this discussion, we turn to Josh Drean, a workplace futurist and co-author of Employment is Dead. As the co-founder of the Work3 Institute, Josh has dedicated his career to helping organizations build a human-first AI culture.
He believes AI isn't here to replace jobs, but to help humans enjoy them more and make informed decisions. In this final segment, Josh explores why automation must be balanced with human oversight and how AI can unintentionally reinforce biases in hiring if not carefully designed. AI is very complicated for this generation that we're talking about. On one hand, it's a tech-first generation. They embrace technology in their personal lives and their work lives. And yet, when we talk about it in the context of work, it can also be a threat. How do you reconcile that complicated relationship, at work and in life, that this next generation is bringing with them into the workplace? Yeah.
I don't want to downplay the complexity and importance of AI in the transformation of the future of work. But I also just want to simply state where I stand and what I observe is going to happen here, right? Broadly speaking, we have had so many technological disruptions, even in our lifetime, that have changed the game. And every time that happens, you have
the laggards, those who don't want to learn the technology, and you have the adopters, those who learn the technology, bring it into their daily lives, and they do just fine. That's the simple answer. It's like, AI is here to stay, learn how to use it in your daily life. And when you do that, you'll start to realize its limitations very quickly and be like, oh, this ain't that bad. Or you might start to realize there's a lot more learning that you have to do
But if you're not learning it, someone who knows it is going to take your job. And so I think of the calculator, right? I guarantee there was a time when accountants were counting by hand, and they had a bunch of people who just counted. That's all they did, and that was job security, and that's what they liked to do. And then the calculator came out and one person could do that job.
And you look at the emerging generation, who's like, okay, we really had people who were just counting? I don't understand, right? That's how the technology evolved, and AI is the same way. If you have a job
that can be done by AI, consider this. I spent a lot of time in the marketing world. You can get really great copy for a website from AI. Granted, you're never going to get a hundred percent there with AI; there's still that human touch, and it's going to be even more important in the future to keep that human element there. But dude, if you had a job where a lot of the copywriting that you were doing can be outsourced to AI, and you can do it a lot faster and get closer to it,
And evolve. What is the skill that you need that is going to be able to 10x your productivity? Or how can you use AI to do even more and do better, right? Those are the questions that we should be asking. And so for any of you out there who are like, oh gosh, AI is coming to take my job, I'm willing to bet you haven't used it enough or you're in a role that can evolve and should evolve. And let's figure out what that looks like and feels like.
We're all pushed to be our best all the time, whether it's by other humans or by technology, and we have a vested interest in doing our best work. If that means supplementing what we're naturally great at, whether it's through a calculator or through a large language model, I just believe in the resilience of the human spirit, and
we get better by learning from the best. And to me, that's just part of the human journey. And I like the way you said it. Yeah. But tell me, Dan, you obviously spend a lot of time thinking about this. Do you feel like I'm oversimplifying it? Is there an ethos that you have adopted that would enlighten me? I'm curious what your thoughts are. I bristle when I hear us
call AI by names like neural networks, or talk about the quote-unquote digital brain, because it feeds into the
innate fears that we have of the other: the technology, the bot, the bot apocalypse popularized in science fiction movies and things like that. When in fact, I think that really misses the point. These technologies are developed by well-intentioned, amazing humans to help us be more productive, cure cancer, eradicate famine, and really solve global problems.
We're in a special place in time that we can do amazing things with these technologies. And so I don't want anyone to feel threatened by them. We are smart enough to develop these technologies; we're also smart enough to know how and when to use them responsibly. Yes, I'm an AI optimist, but I do want everyone listening to get the message that this is a time to think about the
nuanced relationship that humans have with machines. And go back all the way to the first industrial revolution,
the early 19th century, in Great Britain, there were Luddites who were, let's just say, on the wrong side of technology. It didn't work out so well for them. We're on the cusp of another transformation, often called the fourth industrial revolution. But again, we need to confront this complicated relationship that we have with machines and not run away from them.
Not fear innovation, but embrace it. Throughout history, embracing innovation is what the ones who end up the most successful, the happiest, and the most productive have always done; that's the decision they consciously make. So that's me on my soapbox, Josh. But yeah, that's the message I want more people to embrace.
Yeah, and it's hard as well because of how quickly things are moving and adjusting. There's a lot of talk about how AI is going to very quickly put us in our silos, very quickly have us just pushing the button and doing the thing that we need to do as humans in the relationship. There are so many things that we could be worried about that worrying for worrying's sake is not going to fix or solve. And so I would just say, you're right. This is an absolute opportunity for those people,
those entrepreneurs and those free-thinking individuals, to be a part of it, to help shape it. And yes, there are things out of our control that we need to figure out how we're going to think about. But at the end of the day, I think it's this age-old problem that humans have been facing, which is: how can I provide value in a way that makes sense to my community?
From the risks of AI bias to the need for compliance, from HR's role to the future of hiring itself, we've covered the full spectrum of ethical AI in HR. So what's next? The challenge isn't just building better AI tools. It's ensuring they work for everyone. Companies that prioritize transparency, fairness, and compliance will be the ones that truly harness AI's potential.
Thanks to our great guests on this special compilation episode of AI and the Future of Work. If you enjoyed it, keep the conversation going. What do you think about AI and HR? Share your thoughts in the comments and help shape the future of responsible AI.