Artificial intelligence is already being used in a number of decision-making systems, and California is trying to set up safeguards. From American Public Media, this is Marketplace Tech. I'm Nova Safo.
AI is deployed in what are known as automated decision systems, or ADS for short. Their actions can impact people's everyday lives. The systems might play a role in deciding whether someone qualifies for a bank loan or whether an unemployment insurance claim is legitimate. In California, the state Senate has voted in favor of a so-called AI Bill of Rights. It would establish new guardrails around these systems. To learn more about them, we turn to Kate Brennan, Associate Director of the think tank AI Now Institute.
First, before launching, a developer must assess their technology to understand its social and technical outcomes before it can affect people's lives. So in other words, the developer is doing its due diligence to ensure the product works as intended and has effective safeguards against discriminating against people. Which of course raises the question: why would a developer not do this in the first place?
Second, the bill institutes disclosure requirements. For example, if a bank used a high-risk ADS to reject someone for a loan, that person must be notified that an ADS was used and be given the opportunity to appeal that decision for review by a person. Another disclosure provision is that companies or government agencies using an ADS must make it publicly known on their websites.
And then there's a third bucket, what the law calls a governance program, but we might think of as ongoing maintenance. So these provisions ensure that the ADS is regularly reviewed, updated, and continues to align with existing standards.
Now, that's a pretty broad set of requirements. And often, how California goes in setting regulations and safeguards for industry tends to affect the nation. Could we see the same thing happen here, where as goes California, so goes the rest of the nation when it comes to AI?
Yeah, sure, we could. As you said, California has a long history of ushering forward consumer protection laws that have positive ripple effects across the states. And if we take a step back, this is all the more important right now, given the broader legislative context we're in. Currently, Congress is trying to push through a budget reconciliation bill that has a provision banning states from passing any laws regulating AI, and it specifically calls out state laws regulating automated decision systems. So bills like SB 420 would be on the chopping block. And even if that moratorium fails, Congress has failed to pass any legislation protecting citizens from the harms of ADS. So if a bill like SB 420 raises the bar for common-sense provisions across the states, that's good. At the same time, weak bills can be just as harmful as nothing at all, and SB 420 is not perfect; it leaves a lot to be desired. So we should always be fighting to make sure the strongest consumer protection bills are making their way through the states.

We'll be right back.
You're listening to Marketplace Tech. I'm Nova Safo. We're discussing California's proposed AI Bill of Rights with Kate Brennan, Associate Director at the think tank AI Now Institute.
California Governor Gavin Newsom late last year vetoed a comprehensive AI safety bill. The arguments for that veto were things like: it goes too far, it crimps innovation from AI companies, which is still a very nascent industry, and potentially, if you regulate in just one state, people simply move to another state. So could those same arguments apply to this bill? Does it make any sense to have a state regulation for something as expansive as AI, which is really a global phenomenon?
Yeah, you know, Governor Newsom has used this argument, and companies in the AI industry certainly come out and use it too. It's a variation of the claim that a patchwork of state legislation will make things harmful and hard for AI innovation. But I think we really need to see these arguments for what they are: an attempt to undermine consumer protection efforts and avoid accountability. And we can rebut them specifically when it comes to SB 420. Most literally, bills tend to specify a threshold for when the laws kick in. So this "garage innovators will be harmed" argument often comes straight out of a lobbyist playbook for bigger players who want to use it in cynical ways. But more fundamentally, if you're in the business of making a decision that affects people's lives materially, like rejecting someone from renting an apartment, it shouldn't matter if you're big or small. Everyone should be afforded these protections, Californians and everyone across the country alike.

What do we know specifically about the places where these automated decision systems are being used now?
Absolutely. These systems have been used for over a decade. This is not new, and we have over a decade of evidence of people being subject to these decisions. I'll give you some tangible examples: Medi-Cal, that's Medicaid in California, and Medicare using automated decision systems in eligibility processes for health insurance claims; eligibility determinations for Social Security benefits; unemployment insurance through EDD in California; setting bail; even police departments using technology to predict crime. And that's all from the government. We also have private actors deploying ADS: banks issuing people loans, housing rental companies, as I said, approving an application. I could go on and on, because people are really subject to ADS in many, many areas of their lives. And as the AI industry pushes AI systems into every corner of our lives, increasingly these systems are being used on us rather than by us.

And a lot of times the use of these systems is not disclosed, right? People don't know that it's being used to make decisions about them.

That is exactly right. Part of what makes ADS so dangerous is that we tend, A, not to know when they're used on us, and B, to know very little about how these tools work. Private companies and government agencies tend to hide behind trade secrecy laws to protect their technology from public scrutiny. So disclosure provisions like those in SB 420, and in other bills being debated right now, even in California, have mechanisms that give the public, like journalists and advocates and civil society, the ability to see what data goes into training these tools. And that's a very meaningful accountability measure.

That was Kate Brennan at AI Now Institute. The National Conference of State Legislatures says in recent months, nearly 30 states have passed some kind of AI-related legislation. Jesus Alvarado produced this episode. I'm Nova Safo, and that's Marketplace Tech. This is APM.