Welcome to a special edition of AI and the Future of Work with Dan Turchin from PeopleRain. We've published more than 300 episodes, and we get asked all the time to highlight a few of our favorites. We listened. Today, we're bringing back short clips from a few of the best conversations with AI leaders who are defining the future of AI ethics. We'll publish more clip episodes in the future to make it easier to meet amazing past guests. ♪
Let's kick things off with a compilation episode to celebrate Data Privacy Day. Observed every January 28th, Data Privacy Day is a global initiative to emphasize the importance of protecting privacy, safeguarding data, and fostering trust.
For AI enthusiasts like you, it's also an opportunity to reflect on the ethical implications of the technologies we're building and using every day. Today, we'll revisit four exceptional conversations with leaders who are shaping how AI intersects with data privacy, cybersecurity, and ethics. You'll hear insights from Vijay Balasubramaniyan, Ray Wang, Dr. Zohar Bronfman, and Holger Mueller: visionaries who are not only experts in their fields, but also passionate about the responsible use of technology. Let's dive in and learn how we can prioritize privacy in the age of AI.
Our first guest is Vijay Balasubramaniyan, co-founder and CEO of Pindrop, a leader in voice security technology. Vijay is a pioneer in combating voice-based fraud and deepfake threats, holding multiple patents in VoIP security. He's spoken at major conferences like RSA and Black Hat, and under his leadership, Pindrop has raised over $200 million to fight fraud and enhance trust in voice technologies.
Here, Vijay describes the alarming rise of deepfake attacks and the innovative ways we can use AI to restore trust in communication and commerce. I remember when we started Pindrop and I was talking about voice and voice security, everybody was like, oh, you're securing phone calls? Oh, the phone call is going to go away.
And I had to keep coming back to people and saying, yeah, the phone call might go away as it exists right now. But we as humans communicate with voice, and that's not going anywhere. And that's what we want to secure, because there are so many ways in which people can mimic your voice, spoof your voice, spoof your device, and things like that.
But back then, it was really hard for people to understand why voice would be important.
And then you had Alexa, which completely changed the way we thought about interfaces between humans and machines. And then with ChatGPT, and if you look at GPT-4o right now, the way you can actually have a conversation with an assistant, it's completely different. So now it's a whole lot easier. But back then, you know, everyone thought we were stark raving mad.
Not so much anymore. So, half the world is going to the polls this year, and I believe deepfake voice calls had a watershed moment. You're the expert, you know better than me, but it became very high profile earlier in the year with the primaries and Biden's voice being spoofed to try to prevent potential voters from going to the polls. That was a wake-up call. It's prevalent, it's easy to do, and the attacks are only going to increase, to say nothing of the business side of deepfake detection. What would you say the impact of that has been on raising awareness about the potential threat?
Oh, it's been really significant, because everybody talked about the fact that there could be election interference due to deepfakes. And at the beginning of an election year, in January, you had a deepfake.
And you had a deepfake not of just anyone; you had a deepfake of the president of the free world. It couldn't get more high profile than that. What that has allowed a lot of people to do is understand all the nuances of deepfakes, right? We've been doing deepfake research for eight years. When we started, there was just one tool, a tool called Lyrebird.
At the end of last year, there were 120 voice cloning apps. By March of this year, there were 350 voice cloning apps. So that explosion is one thing. The second, and I don't know if you remember this, is that there was a point in time when John Legend became the voice of Google Home.
He had to go into Google's studios and record some 20-odd hours of himself speaking so that when you asked Google Home, "What's the weather in San Francisco?" you could hear the answer in John Legend's lilting voice: "Oh, it's a balmy 26 degrees," or whatever the answer was.
So it took 20 hours of recordings to get Google Home to reproduce that voice. Right now, these voice cloning tools can reproduce your voice from five seconds of audio, and to get a high-quality version that can fool your spouse or your kid, you need 15 seconds. So, to go from 20 hours to five seconds, to go from one tool to 350 tools,
just being able to showcase all of that is what the Biden robocall allowed folks like us to do. And then to say that you can actually have good AI to beat this bad AI. What does that mean? That was the third piece of the puzzle.
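To make "good AI to beat bad AI" a little more concrete, here is a minimal sketch of what a synthetic-voice classifier could look like. It assumes librosa for feature extraction and scikit-learn for the model; these are illustrative choices, not Pindrop's actual stack, and the audio file paths are placeholders.

```python
# Minimal sketch of a synthetic-voice ("liveness") classifier.
# Illustrative only: spectral statistics plus a generic binary model.
import numpy as np
import librosa
from sklearn.ensemble import GradientBoostingClassifier

def spectral_features(path: str) -> np.ndarray:
    """Summarize a clip as the mean and std of its MFCCs; cloned speech
    often leaves subtle artifacts in these spectral statistics."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Training data: labeled clips, 0 = genuine human, 1 = cloned/synthetic.
# Paths are placeholders; a real system trains on large labeled corpora.
paths, labels = ["real_1.wav", "fake_1.wav"], [0, 1]
X = np.array([spectral_features(p) for p in paths])
y = np.array(labels)
clf = GradientBoostingClassifier().fit(X, y)

# Score a live call segment before sensitive data changes hands.
segment = spectral_features("incoming_call.wav").reshape(1, -1)
print(f"Synthetic-voice probability: {clf.predict_proba(segment)[0, 1]:.2f}")
```

Production systems layer in many more signals (device, network, and behavioral), but the shape is the same: extract features from the audio, score the probability that the voice is machine-generated, and act before the caller hands over anything sensitive.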
Okay, so true story; this feels like a therapy session. Last week, my mom calls me, and of course, I'm her IT guy. She says, "I just got a call from Microsoft saying my Office subscription is about to expire, and they need my credit card number." Thankfully, my mom is smart enough to have been a little skeptical and asked me what she should do. But there are a lot of people who may not question that phishing attempt. So my question for you, and maybe this is naive: at what point does Pindrop insert itself? At the moment of truth, when my innocent mom is about to give up her credit card number, at what point can Pindrop intercept the call, detect that it's a deepfake, and do something about it?
Yeah, so technically we're actually there, right? If you think about deepfakes, they're breaking trust in commerce. On a remote call, an enterprise doesn't know that it's really you on the other end, or whether it's even a human on the other end. And we're seeing those attacks. Just to give you a sense: last year, we were seeing one attack every single month that used generative AI.
This year, in the first four months, we're seeing one attack every single day per customer. That's a 450% increase in deepfake attacks in commerce, and the year is just getting started.
Deepfakes break trust in commerce. They break trust in media, in the news media and social media, like we talked about. We talked about the Biden robocall, but there have also been instances of Tom Hanks "selling" dental plan ads and things like that. And then communication, right? Your grandmom getting a call: that's already happening. The grandparent scam is one of the most nefarious scams, because the attackers are using simple TikTok videos of your grandkids, just three to five seconds. And not only are they targeting your grandparents, they're doing it in very specific ways. They're going after very specific counties.
They hit all the senior citizens of a county from Friday to Sunday, and on Monday morning all of them show up at local law enforcement saying, "I lost $20,000," "I lost $200,000, because I thought I was speaking to my grandkid. I thought he was in trouble. I thought I was giving him $3,000 to get him out of trouble."
And so, looking at that entire picture, I don't know the last time a single technology has broken trust in commerce, in media, and in communication all at once.
So we've started on the commerce side of things, but we absolutely want to be present when your grandmom is getting a call, to say: this is not your grandkid; this is actually a machine on the other end; this is AI-generated. But it's going to take a while, because getting into the core telephony infrastructure requires participation from the handset manufacturers or the carriers, right?
As these attacks become more and more prevalent, the consumer is going to demand that. More importantly, the consumer is probably going to switch to a carrier or handset manufacturer that protects them. Or you, as a son or grandson, are going to say, I'm going to buy you a service that protects you against this, that has these kinds of capabilities.
But that takes time. And unfortunately, I wish we could do this faster; I'm super impatient, and I'd love to get this technology into all of these places. But the awful truth we live with is that until the attack actually becomes bad enough, until people get hurt, people don't take action.
Next, we turn to Ray Wang, founder and CEO of Constellation Research. Ray is one of the most respected voices in enterprise technology, with decades of experience analyzing how AI is transforming industries. He's also the host of DisrupTV, which reaches over 130 million impressions monthly. In this segment, Ray explains why decentralization is critical for building trust in AI and for ensuring secure data sharing across industries.
I actually think the best example of AI is human intelligence. It's super decentralized. People are learning at different rates; they have different skills, different powers, different abilities, right? That variability is actually what makes collective human intelligence so powerful. To recreate that in a centralized model defeats the point. It's got to be decentralized going forward, because there are different tasks and different probabilities. If you really wanted to model it, you'd say every human is a source of energy with their own probabilistic model, superimposed on the world; there's a superposition on top of everything.
Then you'd see that it's the randomness of all this that actually creates that collective intelligence. I know I went super deep there very, very fast. But my point is that just saying this is a superintelligence and it's going to be all-knowing is not enough. We actually have to factor in that there's going to be so much variability and so much choice. There's this human desire to make everything conform and be centralized, and we have to fight that, right? The centralized notion of scarcity versus the decentralized notion of abundance is really going to be the battle over how we see intelligence.
So you and your team are paraded into every vendor briefing, every big tech vendor's briefing. They want you in the room to talk about their vision for AI and, typically, finger quotes, "what could go right" with the technology.
The other half of your role is talking to enterprise leaders. I'd be curious to know, from the catbird seat you sit in: is there a difference between what you're hearing from the vendors about their vision for AI and what the enterprise leaders are expecting from those vendors?
You know, that gap is really the difference between billions of dollars of revenue, and I think that's an important point, Dan. The vendors that know what's going on have laid out an amazing vision saying, here's the future. And they've also figured out the on-ramps for their customers to participate in that process. The ones that haven't are the ones that will fail in the marketplace. So there's a certain amount of momentum required for a vendor to be successful, but there's also the end-user consumption that actually pays the bills. That's always been product-market fit, and that's the gap.
I'll give you an example. I am going to venture a claim, and I may regret this later since it's recorded and broadcast live, but that's what makes this fun. I would venture that billions of dollars are about to be wasted in AI, mostly because organizations don't know how much data they need to get to a level of precision that their stakeholders will trust. Take 85% accuracy in customer experience: I'm okay with that, right? A call might go bad, you might be upset at someone, but you're getting better than 50-50, which is better than what I have now.
85% accuracy in supply chain? Ooh, that's bad. That's millions of dollars per hour that could be lost, or tens of millions of dollars per hour, because your supply chain needs to be up and running at about 98, 99%.
85% accuracy in finance? Somebody is going to jail. That's not gonna happen. 85% accuracy in healthcare? Okay, you've already hit the limit. And we have this notion that the data you have, plus all the publicly available data being scraped off the Internet, is all there is; there's nothing new after that, right? Most of that data has already been scraped. So the future of these large language models powering generative AI, which has everybody so excited, is that we've got to get to the smaller language models and the very small language models. The next 10% of precision is just as valuable as that first 85%, and the last 5% is going to be as valuable as the first 95% in terms of getting to a level of precision.
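To put rough numbers on Ray's point, here is a back-of-envelope sketch. The dollar figure is a hypothetical placeholder, not one from the conversation; it only illustrates how quickly the gap between 85% and 99% accuracy compounds in a domain with low error tolerance.

```python
# Back-of-envelope expected-loss comparison across accuracy levels.
# The cost figure is a hypothetical placeholder for illustration.
def expected_daily_loss(accuracy: float, cost_per_error_hour: float) -> float:
    """Expected daily loss if each hour carries an error chance of (1 - accuracy)."""
    return (1.0 - accuracy) * cost_per_error_hour * 24

SUPPLY_CHAIN_COST = 2_000_000  # assumed dollars per hour of disruption

for acc in (0.85, 0.98, 0.99):
    print(f"{acc:.0%} accuracy -> ${expected_daily_loss(acc, SUPPLY_CHAIN_COST):,.0f}/day at risk")
# 85% accuracy -> $7,200,000/day at risk
# 98% accuracy -> $960,000/day at risk
# 99% accuracy -> $480,000/day at risk
```

The same arithmetic explains why 85% can be fine in customer experience and disqualifying in finance or healthcare: the threshold is set by the cost of each error, not by the model.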
And so that means we're going to enter a world of data collectives, where we share data and broker data, not within an industry, but across value chains. So retail, manufacturing, and distribution are going to share information on supply, inventory, stock, elasticity of pricing, demand, and interest.
That's going to be one. Comms, media, entertainment, tech, and telco are going to do the same thing. They're all going to be figuring out which customers and which personas are interested in a digital good distributed on a technology platform that has a digital monetization backbone on the other end. That's how we're going to share data and share information: a give-get model is going to drive it.
And then the other interesting piece is that most organizations are going to put something out there and not realize that the most important question is: who do you sue when something goes wrong? Have you thought through the liability aspects on the back end, right? All of that is going to come into place. And of course, bringing it back to the future of work, the smart organizations are going to figure out when and where to insert a human.
Our third guest is Dr. Zohar Bronfman, co-founder and CEO of Pecan AI, a company that's transforming predictive analytics with conversational AI. With two PhDs in computational neuroscience and philosophy of science, Dr. Bronfman is a thought leader in both the technical and ethical dimensions of AI.
In this excerpt, he reflects on the ethical challenges of predictive analytics and why AI must focus on complementing human intelligence, not replacing it. I would go even further and say that AI is not necessarily really artificial intelligence. You know, as a society we term things, we give things names and terminologies, but the fact that we call something a neural network, or call it artificial intelligence, doesn't necessarily mean that's what it is. It's a catchy name. It's a great name. But philosophically, it's not artificial intelligence.
And the same goes computationally: it's not a neural network. The analogy breaks very, very fast and very early. I'll give a couple of examples. Just to appreciate how complex the brain is and how complex human intelligence is: I would argue that even the biggest, most complex neural network, with recurrence and convolutional windows and RAG and whatnot, has far less complexity and far fewer degrees of freedom in the way it is shaped than the brain's neural networks. Not to mention the computational affordances, the embodiment, and many other elements that brain networks have and the algorithms do not. And like I said, we can take it even further, to "artificial intelligence" itself.
People have been deliberating and debating what intelligence means for probably as long as humanity has been around. We won't settle on a final definition of intelligence right now, but I can definitely say, and I'm sure the vast majority of the community will agree, that intelligence is far more than being able to reproduce an answer to a question, summarize a piece of content, or write a poem.
Intelligence is first and foremost about problem solving. Now, that's a very wide term, and again, we can argue and debate about what problem solving means. But from that perspective, I personally think this is not really full-blown artificial intelligence. It's actually far from it. That doesn't mean it can't deliver a ton of value:
if it's used in the right way, it can, and it will, and it is. More interestingly, to my mind (and this is how I think about machine learning and AI in general), the interesting part is not reproducing human capabilities. That might be interesting from a research perspective, because there are interesting implications, even therapeutic implications, if you're able to emulate human thinking.
But in reality, the interesting part, especially when you think about it functionally and from a productivity perspective, is to implement things that humans can't do well. So I don't think we should chase the Turing test. I think it's actually the wrong test for guiding progress in artificial intelligence. We should progress along the vector of an anti-Turing test: what are the things we are worst at that machines can excel at,
so that they complement our capabilities and really create a dynamic of better together. I can give you one example to make it intuitive. As humans, we are terrible at extracting patterns from numerical series, from plain series of numbers. If you look at a series of stock prices, as a human you won't be able to see anything in it. But if a series of photons hits your eye, you are immediately able to recognize even the most nuanced face.
We're very good at translating light into a figure; we are better than computer vision in 99% of cases. But when it comes to looking at a series of stock numbers, we can't grasp even the simplest of rules. We don't see it.
That's because, evolutionarily, our brain didn't develop that way. There was little pressure to identify patterns in numerical time series, and there was a lot of value in identifying nuances in facial expressions. The machine, however, is amazing at identifying patterns in numerical series.
For the machine, that's even easier than identifying minor facial expressions. So I'd say, if we're looking to benefit human activities and make people more productive, to basically increase capital and productivity as the classic theory implies, then working mostly along the vector of improving the things humans are poor at is going to be more lucrative than focusing on, quote-unquote, substituting for capabilities we already possess.
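Dr. Bronfman's stock-series example is easy to demonstrate. Here is a minimal sketch, assuming NumPy; the price series is invented purely for illustration, with a weekly cycle buried in noise that a human scanning the raw numbers would almost certainly miss.

```python
# A hidden weekly cycle in a synthetic "price" series: hard to eyeball,
# trivial for the machine to recover from the frequency spectrum.
import numpy as np

rng = np.random.default_rng(42)
days = np.arange(365)
# Hypothetical prices: a 7-day cycle of amplitude 1 swamped by noise of similar size.
prices = 100 + np.sin(2 * np.pi * days / 7) + rng.normal(0, 1.0, size=days.size)

# Subtract the mean and look at the spectrum; the hidden period dominates.
spectrum = np.abs(np.fft.rfft(prices - prices.mean()))
freqs = np.fft.rfftfreq(days.size, d=1.0)  # cycles per day
peak = freqs[spectrum.argmax()]
print(f"Dominant period: {1 / peak:.1f} days")  # recovers the ~7-day cycle
```

The complementary point holds in reverse: the face recognition a human does instantly from a stream of photons took computer vision decades to approximate.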
Our final guest is Holger Mueller, Vice President and Principal Analyst at Constellation Research, specializing in the future of work and human capital management. With over three decades in enterprise software, Holger is a trusted voice on how AI can enhance enterprise operations responsibly. Here, Holger explores how ethical AI can build trust in the enterprise world and why transparency is key to adoption. Are enterprise leaders concerned about whether or not to make the investment in AI and AI-adjacent technologies because of concerns about IP, safety dangers, data leakage, and things like that? Or is it really a matter of, we've decided to make the investments, and we're looking for your guidance about which investments we should be making?
Both, of course; both are real situations. The interesting thing I've found, and I've seen this across the industry, is that the fear around regulation, ethical AI, safe AI, and so on has largely been promoted by the ones who didn't have AI yet, right? The example at hand (talking about the large players, though others are doing this as well)
was Microsoft, which deployed not one but two lobbying teams for responsible AI in Washington, and managed to scare Google so much that Google held back what it then had to catch up on, right? It's one of the great moves if you look at history: Machiavelli, The Prince, right? How to get your aristocratic leader in place, the tips and tricks of the trade. This is the latest high-tech version of it. When I retire, I want to write the book about the Machiavellian moves in high tech.
So that's one of the most impressive ones. And just for the people who don't know: one of those responsible AI teams was let go as soon as the partnership with OpenAI was unveiled. So on the one side there's a lot of fear-mongering and concern-mongering from the people who don't have it yet, saying we need to get it right. But what they don't realize is that
by saying that, they make it even harder for themselves to get in the game. Because, like every new technology, and AI is no different, there will be good things and there will be bad things. And as long as we're doing what we're doing right now, where we augment the human rather than replace the human, we'll be fine. Let's forget about the enterprise space for a moment: we as humans are extremely good at figuring out whether something works or not. You get a new AI, let's say a voice assistant. You talk to it one time, it understands you, and you're baffled.
Or it doesn't understand you, and you say, not good enough, I need to edit, and editing on a smartphone's glass slab is hard. Or you say, this is a piece of crap, pardon my French; maybe I'll try it again in half a year when they say they have a new version. So we as humans are really, really good at figuring out where things stand at the current level of AI. Now, we can get into deepfakes and so on, where we're not so good anymore, because we're trained the other way: our default assumption is that what we see is real.
In most cases that's right. When we watch a movie, we know it's a movie, so we put another filter on. But if we're in the real world and we see a deepfake, and it appears in a trusted newspaper and so on, we might start believing it, right? So there are other aspects to that. But for the automation we see in the enterprise, helping me write a performance review, writing a job requisition, helping with my benefits enrollment, we find out really, really quickly where it is good and where it's not. So I'm not worried about that part.
So, back to your original question, and sorry for the long answer here, which I normally try not to give; I try to be short, sweet, and succinct. There are the ones who have realized this and are doing it, and there are the ones who are still waiting and asking whether it's real. The short answer, for everyone listening: it is real. It's delivering benefits,
with some caution, at different levels at different vendors, of course, but everybody's working on it. And there are benefits you should not miss. I'll even say, provocatively: nobody actually writes a job description from scratch. The best practice before was,
let me see if there's something general I can copy and paste. But that copy-and-paste process is significantly longer than having gen AI write the new job requisition, then reviewing it and doing a little copy and paste. So we see this going down to 20-30% of the time of the copy-and-paste best practice, which itself was 20-30% of the time of writing it from scratch because "I'm the best person to write this."
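Holger's draft-then-review workflow maps to a very simple loop in code. Here is a minimal sketch, assuming the OpenAI Python client; the model name, prompt, and helper function are illustrative placeholders, and any LLM provider would fit the same pattern.

```python
# Draft-then-review for a job requisition: gen AI writes the first draft,
# a human reviews and edits. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_job_requisition(title: str, must_haves: list[str]) -> str:
    """Ask the model for a first draft a human will then review and edit."""
    prompt = (
        f"Draft a job requisition for a {title}. "
        f"Required qualifications: {', '.join(must_haves)}. "
        "Keep it under 300 words."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

draft = draft_job_requisition("Solutions Architect", ["5+ years cloud", "Kubernetes"])
print(draft)  # the human review step is, per Holger, ~20-30% of the old effort
```

The caution that follows applies here directly: the draft will read as impeccable English whether or not its contents are accurate, which is exactly why the human review step stays in the loop.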
Which leads me to one of the big cautions, right? It's where we got fooled so much when OpenAI came out. All our lives, we learn to write, and writing is incredibly hard, right? So we assume that something written in impeccable, perfect Oxford English must be a trusted source; it must be working, it must be correct. That is, of course, not the case when a machine writes it. It's not even the case when a human writes it, because they may have evil thoughts and might want to influence us in a certain way, right? So this automatic competence gap, which we run into when something is well formulated, is something we humans have to learn about in the era of AI. Something written in beautiful, competent, footnoted English that looks perfect still does not have to be right. That has been true all along, but 80% of us, including myself sometimes, get fooled by it and say, well, this is written so beautifully, it must be Shakespeare, it must be right. ♪
As we've heard today, Data Privacy Day isn't just a moment to reflect. It's a call to action for all of us in the AI community, whether it's combating deepfakes, decentralizing data, or prioritizing ethical analytics. The future of AI depends on the trust we build today. If you enjoyed this episode, don't forget to check the show notes for the full versions of each conversation.
And if someone you know would appreciate these insights, go ahead and share this episode with them. It might spark a great discussion. Thank you for joining this special episode of AI and the Future of Work. Until next time, stay curious and stay secure.