Human intelligence and artificial neural networks are fundamentally different, both organically and computationally. Human brains are far more complex, with greater degrees of freedom and computational affordances. AI models, even the most advanced, lack the creativity and problem-solving capabilities of human intelligence. AI excels at tasks like pattern recognition in numerical data, which humans struggle with, but it cannot replicate the nuanced, creative problem-solving that defines human intelligence.
Dr. Bronfman argues that AI should focus on tasks humans are poor at, such as analyzing numerical data, rather than trying to replicate human intelligence. This approach increases productivity and complements human capabilities, creating a 'better together' dynamic. Chasing artificial general intelligence (AGI) is misguided because it lacks a clear definition and measurable progress, whereas productivity-focused AI delivers tangible value.
Dr. Bronfman believes there should be a clear separation between AI developers and regulators. Technologists should not decide how AI is used ethically; that responsibility lies with governments. He emphasizes that every technology has the potential for both good and harm, and AI is no different. Regulation is essential to prevent misuse, and AI developers should provide tools for regulators but not dictate ethical guidelines.
The primary challenge is the talent gap. Data scientists, who are essential for building predictive models, are scarce and expensive, with many working for large tech companies like Google and Amazon. This makes it difficult for smaller companies to hire and retain skilled data science teams. Pecan AI addresses this by enabling business and data teams without data science expertise to use predictive analytics effectively.
Dr. Bronfman is skeptical of the pursuit of AGI, arguing that it lacks a clear definition and measurable progress. He believes the focus should be on enhancing productivity by addressing tasks humans struggle with, rather than trying to replicate human intelligence. He also warns that AGI research in a business context without regulation is dangerous and could lead to misuse.
Current AI models lack creativity, which is a hallmark of human intelligence. They excel at tasks like pattern recognition but cannot generate truly innovative solutions or assemble knowledge from different domains to create something new. Dr. Bronfman argues that AI's inability to solve problems creatively means it is far from achieving anything resembling human intelligence.
I'm sure the vast majority of the community will agree. Intelligence is far more than just being able to reproduce an answer to a question or to summarize a piece of content or write a poem. Intelligence is first and foremost about problem solving.
Now, it's a very wide term, but in reality, the AI that we have today is very limited in its ability to flexibly solve real problems.
Good morning, good afternoon, or good evening, depending on where you're listening. Welcome to AI and the Future of Work, episode 310. I'm your host, Dan Turchin, CEO of PeopleRain, the AI platform for IT and HR employee service. Our community is growing. I get asked all the time how you can meet other listeners. To make that happen, we just launched a weekly newsletter on Beehive.
Check it out. We will not spam you. We will send a link in the show notes to how you can subscribe. It's a great way to meet other listeners. And gosh, we even share some interesting tidbits that don't quite make it into the shows. Join us offline in that newsletter. If you enjoy what we do, please tell a friend and give us a like and a rating on Apple Podcasts, Spotify, or wherever you listen.
If you leave a comment, I just may share it in an upcoming episode like this one from Marianne in Sebastopol, California, just up the road from me, who's an artist and listens while painting. Marianne's favorite episode is the one with Professor Meredith Broussard, author of More Than a Glitch about algorithmic accountability. That's also one of my favorites. Go check it out. Marianne, thank you for listening.
We learn from AI thought leaders weekly on this show. And the added bonus, you get one AI fun fact each week. Today's fun fact: Maxwell Zeff writes for TechCrunch about why many in the tech community hope California Senate Bill 1047, which proposes broad regulations on big tech LLM vendors, isn't signed by Governor Newsom, our governor in the state of California. SB 1047
tries to prevent large AI models from being used to cause "critical harms" against humanity. The bill gives examples of critical harms as a bad actor using an AI model to create a weapon that results in mass casualties, or instructing one to orchestrate a cyber attack causing more than $500 million in damages.
The bill makes developers who produce models that cost at least $100 million to train liable for implementing safety protocols. My commentary: it would be preferable to have AI vendors self-regulate to avoid AI risks, but that's unlikely to happen. As a result, SB 1047 is a step in the right direction.
It will catalyze AI ethics conversations in other states, in the US federal government, and hopefully in governments outside the United States. Governor Newsom, I'm told you listen to the podcast. I urge you to sign SB 1047. And of course, this is an important topic. We'll continue to unpack it in subsequent episodes. Of course, we'll share the link to that article in the show notes.
Now shifting to today's conversation. Next time you think you've accomplished a lot in life, read the bio of today's guest, Dr. Zohar Bronfman. He's the co-founder and CEO of Pecan AI, which uses conversational AI to do predictive analytics in minutes from custom datasets.
Pecan has raised about $120 million from a murderers' row of investors, including Insight Partners, GV, the investment arm of Google, and Dell Capital. Dr. Bronfman holds not one, but two PhD degrees from Tel Aviv University, one in computational neuroscience and the other in philosophy of science.
He also obtained his master's degrees in computational cognitive neuroscience and theoretical biology. He did his undergrad in econ and business at the Open University. He's published 18 scientific papers in peer-reviewed journals and taught the history and philosophy of brain science at Tel Aviv University. Dr. Bronfman served in the IDF's 8200 unit where he worked in the signal intelligence division from 2005 to 2007.
And without further ado, Dr. Bronfman, it's my pleasure to welcome you to AI and the Future of Work. Let's get started by having you share a bit more about that impressive background and how you got into this space. Hey, Dan, it's great to be here today. Thanks for having me. I'm delighted to share a little bit about us.
The beginning of my story probably goes back to the beginning of grad school, as you mentioned. Personally, I was extremely interested in understanding how the mind works. I think this is still, in my mind, one of the biggest questions humanity can ask.
And I thought there was no better place for understanding how the mind works, or even what the mind is, than philosophy. And that's how I started studying philosophy with a focus on understanding the mind, which is called philosophy of mind. Going through the philosophy of mind studies, I realized that they tell a very interesting story that has very profound implications, personally, for how I think about those topics. But I also felt that there was an element missing from the discussion, and that element was the more scientific, computational element. And that's why I decided in parallel to go and study computational neuroscience, which is the field of building machine learning, AI, and statistical models to emulate
and potentially explain brain processes, which is the empirical or computational or scientific, whatever you want to call it, aspect of the same questions that philosophy is asking from a conceptual framework. So I did these two PhDs, and I was very fortunate in the computational degree to also meet Noam, who is today our co-founder and CTO here at Pecan.
We basically fell in love with the field that is often called machine learning or predictive analytics or data science. These are synonyms in many cases. They all refer to the same concept: you take historical data, you let the machine run different types of statistical models, and you basically identify hidden patterns in the historical data
in a way that allows you to generate a prediction about the likelihood of a future event based on the same data.
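(To make that workflow concrete, here is a minimal sketch in Python of the idea described here: fit a model on historical data, then score the likelihood of a future event. The file names, column names, and the scikit-learn model choice are hypothetical illustrations for this episode page, not Pecan's product or code.)

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Historical data: one row per customer, with the future event observed in hindsight.
history = pd.read_csv("historical_customers.csv")                 # hypothetical file
features = ["tenure_months", "monthly_spend", "support_tickets"]  # hypothetical columns
X_train, X_test, y_train, y_test = train_test_split(
    history[features], history["churned_next_quarter"], test_size=0.2, random_state=42
)

# Let the machine look for hidden patterns in the historical data.
model = GradientBoostingClassifier().fit(X_train, y_train)
print("Holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Score current customers: the likelihood of the future event, based on the same kind of data.
current = pd.read_csv("current_customers.csv")                    # hypothetical file
current["churn_likelihood"] = model.predict_proba(current[features])[:, 1]
```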
The first time we did it, I remember, was probably more than 10 years ago now, and I really had goosebumps. We looked at some data and we were able to predict a certain mundane behavior of an individual. But the fact that you can rely on brain signals or on human behavior and make a fairly accurate prediction as to what they're going to do in the future,
which you could not have done using your unaided brain, just felt like, you know, almost like blasphemy. It's quite an eye-opening experience. When it happened, I remember Noam and I sitting and asking ourselves, why aren't all businesses using machine learning to create predictions about everything in their business, so they can optimize themselves to become far better at everything they do?
We said, let's try and understand what keeps companies from really leveraging AI in a meaningful way, because it's not as abundant as you would have expected. That's basically where Pecan came into play. We graduated from our PhDs. We started this research of understanding the hurdles.
And we came to the conclusion that the number one challenge, if you were to look at a random company and ask yourself why they aren't doing something predictive, would probably begin and end with a talent gap. If we're thinking about the individuals who can actually build these types of predictive models and relate them to the specific business context, we are referring to the so-called data scientists. They are a very scarce resource. They're hard to find, and they're quite expensive talent.
And there's a funny anecdote I can share that, I don't know, 70% of them are usually working for the huge companies like Google and Facebook and Amazon and stuff like that. So if you just take a company, a random company, it would be very challenging for them to hire, retain and grow a data science team.
And that's where we decided that we are going to put this remarkable power of business predictions in the hands of business and data teams who are not data science experts. And that's how the company came to be.
So going from a very academic environment where you were studying the philosophy of brain science to something super applied, like commercializing the technology and selling it as an entrepreneur, must have required many leaps of faith. What did you discover about yourself when you switched from academia, and even the IDF, to being a CEO and an entrepreneur?
I expected this change to be more dramatic than it actually was, in all honesty. I think the common thread, at least personally, between academia and the startup ecosystem is that curiosity and a desire to learn and constantly improve
are the underlying principles, personally at least. So let's start with academia. In academia, obviously, learning is a huge part of the day-to-day, but I would argue that curiosity is not always very evident in academic practice.
I think many times you'd see academics who are entrenched in their own landscape, in their own small peer group. They become very much submerged in that small peer group and are not always looking beyond it. I saw it firsthand in that multidisciplinary attempt I had at learning both philosophy and computational neuroscience, and seeing that there isn't enough curiosity between the various individuals, again, obviously as a generalization. And what worked very well for me was that I let my curiosity run free.
And I always asked, why is that the case? What's being done? What has been done? What can be done? How can we creatively think about it differently? Can we infuse the existing body of knowledge within a certain domain with actions, with knowledge, with literature coming from another domain? This cross-fertilization is extremely important in my mind. And switching to the realm of business and running a startup, I think it's very similar. By definition, almost, a startup is dealing with things that are new all the time, things of high uncertainty. You always have to make decisions with a very limited amount of information. And when doing so,
again, I argue that being very curious, going to different areas, gathering pieces of learning, and trying to consolidate them will just help you make those decisions in the best way possible. So to me, relying on curiosity, and relying on what I would like to believe is an ability to constantly learn and improve, has been the guiding principle that helped me navigate between those disciplines. I've never had an opportunity to ask this question, but given your area of expertise, I've got to ask it.
So, in the world of technology, I think we very casually toss around the brain-neural net analogy. And it seems like there are certain ways where it holds up very well, going back to maybe Geoff Hinton and the earliest neural nets. But what little neuroscience I know says that there are pretty obvious ways that the analogy breaks down. We'd love to know from your perspective,
how, in fact, is the field of predictive analytics, or machine learning, or machine intelligence, like, and then not like, how the human brain operates? So I'll start maybe with the bottom line up front. They're not the same. They're far from being the same. You call it a neural net both for the brain and for the algorithms,
but in reality, they are very different, both obviously organically, physically, but most importantly, computationally. They're not the same, not the same at all. I would go even further and say that AI is not necessarily really artificial intelligence. You know, as a society, we term things, we give things names and terminologies, but the fact that we call something a neural network, or we call it artificial intelligence, doesn't necessarily mean that's what it is. It's a catchy name. It's a great name. Philosophically, it's not artificial intelligence.
And the same computationally: it's not a neural network. The analogy breaks very, very fast and very early. I'll just give a couple of examples. The complexity, just for us to understand how complex the brain is and how complex human intelligence is: I would argue that even the most complex, biggest neural network, with recurrence and convolutional windows and RAG and whatnot, has far less complexity and far fewer degrees of freedom in the way it is shaped than the brain's neural networks. Not to mention the computational affordances, the embodiment, and many other elements that brain networks have and the algorithms do not. And like I said, we can take it further, even to the term artificial intelligence.
People have been talking and deliberating and debating about what intelligence means for probably as long as humanity has been around. We won't settle the final definition of intelligence today, but I can definitely say, and I'm sure the vast majority of the community will agree, intelligence is far more than just being able to reproduce an answer to a question, or to summarize a piece of content, or write a poem.
Intelligence is first and foremost about problem solving. Now, it's a very wide term, and again, we can argue and debate about what problem solving means, but in reality, from that perspective, I personally think it's not really full-blown artificial intelligence. It's actually far from it. That doesn't mean it can't give a ton of value
if it's being used in the right way; it can, and it will, and it is. What's more interesting in my mind, and that's how I think about machine learning and AI in general, is that the interesting part is not to reproduce human capabilities. That might be interesting from a research perspective, because there are interesting implications, even therapeutic implications, if you're able to emulate human thinking.
But in reality, the interesting part, especially when you think about it functionally and from a productivity perspective, is to actually implement things that humans can't do well. So I don't think we should chase the Turing test; in my mind, it's actually the wrong test for guiding progress when it comes to artificial intelligence. I think we should progress along the vector of an anti-Turing test: what are the things we are the worst at, that machines can actually excel at,
so that they complement our capabilities and really generate a dynamic of better together. And I can give you just one example to make it intuitive. As humans, we are terrible at extracting patterns from numerical series, like series of numbers. If you look at a series of stock price numbers, you won't be able, as a human, to see anything in it. But if you see a series of photons that hit your eye, you are immediately able to recognize even the most nuanced face. We're very good at translating light into a figure. We are better than computer vision in 99% of cases. But when it comes to looking at a series of stock numbers, we can't even grasp the simplest of rules. We don't see it.
Because, evolutionarily, that wasn't how our brains evolved. There was little pressure to identify patterns in numerical time series, and there was a lot of value in identifying nuances in facial expressions. The machine, however, is amazing at identifying patterns in numerical series.
That's, for the machine, even easier than identifying minor facial expressions. So I'd say, if we are looking at benefiting human activities, making people more productive, and, you know, basically increasing capital and productivity like classic theory implies, then running mostly along the vector of improving things humans are poor at is going to be more lucrative than focusing on, quote unquote, substituting for capabilities we already possess. That's my basic theory here.
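(As a toy illustration of that asymmetry, here is a short Python sketch with made-up numbers, not anything discussed in the episode: a noisy daily series that looks random to the eye, but whose hidden seven-step cycle a simple autocorrelation pass recovers immediately.)

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(365)
# A hidden weekly cycle buried in noise; printed out, the raw numbers look random to a human.
series = 100 + 5 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 3, size=t.size)

# Autocorrelation: correlate the series with lagged copies of itself.
centered = series - series.mean()
acf = [np.corrcoef(centered[:-lag], centered[lag:])[0, 1] for lag in range(1, 11)]

best_lag = int(np.argmax(acf)) + 1
print(f"Strongest repeating pattern at a lag of {best_lag} steps")  # typically 7
```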
So, building on that, a few weeks back we had a guest on named Peter Voss, and Peter Voss was one of the people who coined the term AGI, artificial general intelligence. And like you, I believe that term has become bastardized to the point of being meaningless.
And beyond that, I think it's dangerous, because billions of dollars get spent by companies like OpenAI to, quote, achieve AGI. And when I asked Peter Voss why the objective of technologists should be to achieve this thing no one can really define, called artificial general intelligence, he gave a very dissatisfying answer. I'm going to paraphrase, and because I have an opinion here, my opinion will come through, but it was something like: because we can, we should. In my estimation, a lot of the technology companies are accelerating the misuse of AI and contributing to these conversations about
AI ethics and responsible AI because they're, quote, praying to false gods. So, with respect to how it sounds like you and I both feel about AGI being the wrong objective, talk us through your pitch to the technology community with respect to what you just said, which is so satisfying, about using AI or machine learning to do things humans aren't good at. And what would your pitch be to those who feel that chasing AGI is something we should do because we can do it? So, you know, everyone is entitled to their own opinion, obviously. But I can tell you that, from my point of view, we don't even know what AGI means. I don't think there's an agreed-upon definition. I'm not even sure there's a working hypothesis on how you measure AGI.
In all honesty, what I'm going to say is more of an intuition at this point, but my intuition is that we are light years, light years from AGI, and LLMs didn't take us even a centimeter closer. How will we know when we get there? What is it?
So that's exactly the point. The easy way to go about it is to embrace something along the lines of the Turing test, maybe a little more complex than just having a conversation, because we know having a conversation is no longer a good indication of intelligence.
Because we basically built a huge dictionary, and I can double-click on that in a second when we talk about LLMs and why LLMs are not getting us any closer to AGI. But to your specific question, if we take the Turing vector, it would be solving problems that humans are trying to solve at the highest degree of cognitive effort, and the machine would be able to solve them in a way that is indistinguishable. When I say solve problems, I don't mean solve a chess game or a puzzle game. I mean really solve a problem from a creative point of view, and that's a crucial point: the creativity.
The ability to create something that wasn't there a minute ago is the hallmark of high-end, problem-solving intelligence: doing something that, either through pure innovation or through the assembly of different components from different domains, generates something that is brand new. That would be the holy grail of human intelligence in my mind. By the way, not only human; you have some of that in other animals as well, in an obviously diminished way. The lack of creativity in the types of neural networks and other AI models we are currently seeing, to me, kills the whole deal. And I don't think they will be able to produce something like that anytime soon. I just want to double-click on what I said earlier, because I think it will be of interest to our listeners here. When I say we built a huge dictionary,
I'm referring to a very interesting thought experiment from the '80s by a philosopher named John Searle. He had this thought experiment called the Chinese Room. He basically said: imagine you're sitting in a huge, locked room, and you have exact instructions, so that when you get a slip of paper written in Chinese, you know exactly how to go through a deterministic set of rules and come back with an answer in Chinese, without doing anything yourself other than pulling different symbols from the different counters. And if you provide back a perfect answer, does it mean that you, the person who has been doing it, know Chinese? And obviously the answer is you don't know Chinese. You're just operating on the symbols, and you have this infinite dictionary that you work from. And this is exactly what the algorithms are doing today. They don't understand Chinese, or
anything in the deep sense of understanding, they just reproduce words based on their statistical distribution in a very, very impressive manner because of their size and ability to digest pretty much all the text we have on the planet. So
I don't think LLMs are taking us closer to artificial general intelligence. I'm not even sure what artificial general intelligence is when people refer to it; there's a set of definitions, but it can be argued over in many, many different ways. We should remember that investing in technology should go hand in hand with some assumption about value. Right. That's the key concept in capital investment. We invest in something because we think it will basically increase the pie for all of us. And the best way to increase the pie, historically, was always by increasing productivity. It doesn't matter how you measure productivity, but if you increase productivity, you're increasing the pie, and then you can invest more. And that's the nice cycle of the economy.
I don't see how artificial general intelligence increases the pie in itself. I definitely see how supplementing human limitations helps you increase the pie. So my view of investment in AI is that it should be focused on sound assumptions about productivity enhancement, versus, call it, "we should do it because we can." First of all, I'm not sure we can. I haven't seen any evidence whatsoever that we can, or that we've succeeded in any way in doing it. But on top of that, I think the argument that we should do it out of curiosity, and because we can, is a legitimate argument in academia.
And this is an extremely important distinction, and I'll tell you why I think it's important. It actually refers to what you said at the beginning of the episode. I think it's very dangerous if the people who are researching AI and artificial general intelligence for the sake of curiosity, and "let's do it because we potentially can," do it in a business framework with no regulation for these types of research efforts. They are very relevant; it's a completely legitimate research question: can I build a machine that solves problems in a creative manner, the way humans do at the highest cognitive effort? A super legitimate experimental question, but it should be pursued in a regulated environment.
Like you mentioned at the beginning of the episode, letting privately owned businesses also own the regulation around AI research and development is, to me, a recipe for disaster at some point. And I don't see what the justification for it is. That's such a satisfying answer that I've got to ask you an important follow-up question. So we're aligned on
just the fundamental difference between human and machine intelligence. But now I believe that as technologists, we are also, whether we like it or not, becoming ethicists. So I want to ask you just a thought question. So as the CEO of Pecan,
What is your obligation to make sure that, let's say, customers who write you a check for Pecan and plan to use the technology the way it was intended don't instead choose to use it in a way it wasn't intended, and try to, let's say, crack the nuclear codes or hack into bank accounts? Very realistically, a nefarious actor could use predictive intelligence tools to do things like that.
How do you protect against that kind of thing? And what is your obligation as a technologist to think about the potential unintended uses of your technology? I hold a very extreme opinion on this specific topic. I would say that I think there should be a complete separation between the people who work on these technologies, or any other technology for that matter, and the people who decide how to use it and how to regulate it. When someone is working on dynamite or a gun, it doesn't stand to reason that they will also be the ones responsible for deciding when using a gun makes sense and when it doesn't.
A gun can save people, and a gun can do a lot of harm. I see no difference whatsoever when it comes to AI. And actually, technologists are not in a good position to advise on regulatory actions.
In an ideal society, or even a semi-ideal one, because unfortunately we don't live in an ideal society, that's literally why governments exist. And we tend to forget it.
But people who are very good at developing AI do not necessarily understand what society needs or doesn't need. We shouldn't expect them to be experts on it. In all honesty, I would say they're not experts on it. And the fact that we are very good at developing AI, myself included, doesn't make us certified to decide how it should be used.
There's an interesting saying in Hebrew; I don't know if it translates well to English, but it's: you don't let the cat guard the cream. We can't. We can't decide if a specific use is ethical or not. And why would AI be different from any other technology out there? Any and every technology that is invented has the potential to improve our lives and the potential to destroy them.
And we have to be transparent with regard to the technology. We have to provide the tools that the regulator requires, but we cannot, as an AI community, take on the responsibility for the ethical side. That would just be a completely abnormal situation in my mind. Each one of these answers is really the subject of a whole other conversation.
And first off, I appreciate you for letting us go completely off script. None of this was anything we had prepared, but I think it's a more interesting conversation as a result. If you wouldn't mind, we're about out of time, but maybe I could invite you to come back another time and we can continue some of these threads, particularly the one on ethics. I think you have a very nuanced opinion, and it's worth a longer discussion. Sure, with pleasure. I really enjoyed it.
Likewise. I really value your time. Thanks for doing this. I know it's the evening where you are as well. Before I let you go, where can our listeners learn more about you and the great work your team's doing? I'd say pecan.ai is probably the best place. Just send them to our website; it's quite self-explanatory.
Good. Well, we're just getting started. We'll pick up there when we reconnect, right? Thanks so much for coming and hanging out. Sounds good, Dan. Appreciate it. Guys, that's all the time we have for this week on AI and the future of work. As always, I'm your host, Dan Turchin from PeopleRain. And of course, we're back next week with another fascinating guest.