
309: Dr. Seth Dobrin, CEO of Qantm AI, on the AI Revolution: Job Creation, Cultural Bias, and Preparing for Rapid Workforce Changes

2024/11/4

AI and the Future of Work

People
Dan Turchin
Seth Dobrin
Topics
Dan Turchin: AI will eliminate some jobs but also create new ones; that is what an industrial revolution does. We need to pay attention to AI's impact on the current workforce and help people who lose their jobs to AI find new ones. Is today's concern about AI premature, or is something fundamentally different driving the current chaos? Ten years from now, what technology will be commonplace at work that today seems like science fiction?

Seth Dobrin: Genomics and astrophysics were the first fields where machine learning and AI made the transition from computer science and mathematics into real-world applications. Generative AI models are rarely accurate when providing references, failing to cite correctly most of the time. Combining Web3 and AI could create economies around intellectual property, addressing the problem of AI models mis-citing or failing to cite academic literature. Enterprise leaders are most focused on AI regulation, particularly compliance with the EU AI Act. Companies face many compliance challenges, including getting teams to collect the necessary data, assessing supply-chain risk, and securing adequate legal indemnification. When buying generative AI models, companies should demand full indemnification from vendors against potential litigation. Generative AI will affect employment: some jobs may disappear in the short term, but in the long run it will create more new ones. We need to help those displaced by AI find new work. Saudi Arabia's "Saudization" policy requires companies to reskill and upskill employees on AI, an effective way to address AI's impact on employment. Without reskilling and upskilling, the economic growth AI promises will not materialize, because mass unemployment suppresses economic activity. The key to successful corporate transformation is trust and honesty: communicate candidly with employees and help them adapt to new roles. Today's concern about AI is not premature; AI is advancing at an unprecedented pace, its human impact has not been taken seriously enough, and the technology itself has inherent problems. The main future applications of AI will be in fields that can materially affect human society, such as medicine, the military, and industry. AI can eliminate biases inherent in algorithms, but care is needed to avoid imposing one culture's biases on another. AI model development is concentrated in China and the US, which creates the risk of specific cultural perspectives being imposed globally, i.e., "technological colonialism." Ten years from now, the way people interact with technology will change markedly, toward AI-driven conversational interfaces and VR/AR rather than traditional computer interfaces.

Deep Dive

Key Insights

What is the potential impact of AI on the workforce according to Dr. Seth Dobrin?

AI will eliminate some jobs while creating new ones, similar to past industrial revolutions. The key difference is that this transformation will occur within a 10-year period, much faster than previous revolutions, which spanned multiple generations. This rapid change necessitates proactive measures to reskill and upskill workers to transition them into new roles.

How does Dr. Seth Dobrin propose addressing job displacement caused by AI?

Dr. Dobrin emphasizes the importance of transparency and trust in workforce transformation. He suggests that companies should openly communicate with employees about job changes, provide time and resources for upskilling, and offer support for transitioning into new roles. For example, during his tenure at Monsanto, he allocated 20% of employees' time to reskilling and provided certifications, resulting in 60% of employees retaining their jobs and others transitioning smoothly.

What are the ethical concerns surrounding AI attribution and academic research?

Generative AI models often fail to provide accurate attribution for content, with 46% of references being completely wrong and 47% being partially incorrect. This undermines academic integrity and denies authors proper recognition and compensation. Dr. Dobrin suggests leveraging Web3 technologies, such as blockchain, to create immutable records of ownership and attribution, enabling a token-based economy to compensate authors and ensure proper referencing.
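
To make the blockchain-based attribution idea concrete, here is a minimal Python sketch of an immutable ownership record; the `register_work` helper, its field names, and the in-memory `registry` are hypothetical illustrations of the concept, not any production chain's API.

```python
import hashlib
import json
import time

def register_work(registry: dict, author: str, content: str) -> str:
    # Hash the full text so any later copy can be matched back to its author.
    content_hash = hashlib.sha256(content.encode()).hexdigest()
    registry[content_hash] = {
        "author": author,
        "content_hash": content_hash,
        "registered_at": time.time(),
    }  # on an actual chain, this write would be immutable and ordered by consensus
    return content_hash

registry: dict = {}
record_id = register_work(registry, "J. Author", "Full text of the paper ...")
print(json.dumps(registry[record_id], indent=2))
```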

What challenges do companies face with the EU AI Act?

The EU AI Act introduces stringent regulations, particularly for high-risk AI use cases. Companies are concerned about compliance, as the definition of high-risk use cases remains unclear, and there is a lack of certified auditors for many standards. Additionally, companies must navigate supply chain risks, ensure proper data collection, and secure indemnification against potential litigation related to AI models.

How does Dr. Dobrin view the cultural biases embedded in AI models?

AI models often reflect the cultural biases of their developers, primarily from the US and China. This can lead to 'technological colonialism,' where these cultural perspectives are imposed globally. Dr. Dobrin warns against embedding cultural constructs into AI systems, as bias is a human construct, and algorithms must be designed to account for and mitigate these biases to ensure fairness across diverse cultural contexts.

What is Dr. Dobrin's perspective on the rapid pace of AI innovation?

Dr. Dobrin believes the current pace of AI innovation is unprecedented and concerning. Unlike previous industrial revolutions, which unfolded over decades, AI's impact is occurring within a 10-year window. This rapid change leaves little time to address human impacts, technological limitations, and ethical concerns, such as hallucinations in AI models and the environmental impact of energy-intensive AI systems.

What does Dr. Dobrin predict about the future of human-technology interaction a decade from now?

Within a decade, Dr. Dobrin envisions a shift away from traditional computer interfaces toward conversational and wearable technologies. He predicts that VR, AR, and AI-driven wearables will dominate how people interact with technology, making devices like laptops obsolete. This transformation will be driven by advances in AI and the integration of immersive technologies into everyday life.

What is the concept of 'technological colonialism' as discussed by Dr. Dobrin?

Technological colonialism refers to the imposition of cultural biases and perspectives embedded in AI models developed primarily in the US and China onto the rest of the world. This can lead to the global dissemination of specific cultural norms and biases, marginalizing other perspectives. Dr. Dobrin highlights the need for diverse representation in AI development to avoid perpetuating this form of cultural dominance.

Chapters
The conversation starts by acknowledging that AI will eliminate some jobs while creating others, similar to previous industrial revolutions. The focus is on how to support those who lose their jobs due to AI.
  • AI will cause job losses and creation of new jobs.
  • Industrial revolutions historically lead to job displacement and creation.

Transcript


And so how do we take care of the people that are in the workforce today that are going to lose their jobs? Because there are going to be people that are going to lose their jobs because of AI. AI is going to get rid of some jobs and create new jobs. Full stop. Jobs are going to go away and new jobs are going to be created. That's what an industrial revolution does. Good morning, good afternoon, or good evening, depending on where you're listening.

Welcome to AI and the Future of Work, episode 309. I'm your host, Dan Turchin, CEO of PeopleReign, the AI platform for IT and HR employee service. Our community is growing. I get asked all the time how you can meet other listeners. To make that happen, we launched a newsletter on Beehiiv, where we share weekly insights and tips that don't make it into the podcast, as well as opportunities to get to know the community.

We will share a link to that newsletter in the show notes. And of course, it's not spammy. We don't do anything with your contact information, but join the community. It's out there for you. If you like what we do, please tell a friend and give us a like and a rating on Apple Podcasts, Spotify, or wherever you listen.

If you leave a comment, I may share it in an upcoming episode like this one from Vitaly in Queens, New York, who listens on the subway while commuting to work and says his favorite episode is the one with Dr. John Boudreau from Cornell and USC about the future of organizations and the science behind improving team performance. That's a wonderful conversation. Thank you, Vitaly.

We learn from AI thought leaders weekly on this show. Of course, the added bonus: you get one AI fun fact, and here it is. Amanda Height writes in Nature Online about the hidden risks of AI.

She says, "While generative AI tools have been widely adopted across academia, users might not be aware of their inherent risks. Chatbots don't often cite the original content in their outputs. As a result, authors can be stripped of the ability to understand how their work is used and also to check the credibility of the AI statements."

Academics today have little recourse in directing how their data is used. Oftentimes, research is published open access, and it's more challenging to litigate the misuse of published papers or books than it is, say, pieces of music or art.

In fields such as academia, where research output is linked to professional success, losing out on attribution not only denies people compensation but can also prevent authors from being able to defend misuses of their work. My commentary: the only path forward is what I'll call a new social contract.

We need to cultivate an awareness of who owns the content AI generates and always question the sources and chains of custody for all of the content that we claim. Having easy access to half-truths and misinformation doesn't absolve us from accepting responsibility for our actions and acting with integrity. Obviously a passion topic for me; we'll continue to unpack it.

And of course, we'll link to that full article in today's show notes. Now, shifting to today's conversation: Dr. Seth Dobrin was IBM's first-ever chief global AI officer. He's one of the most respected authorities in the AI business community. He recently published "AI IQ for a Human-Focused Future: Strategy, Talent, Culture," about the right way to incorporate AI into corporate culture.

He focuses on many of the topics we discuss weekly: responsible AI, AI for good, and how to reconcile AI enthusiasm with AI risks.

Dr. Seth is also the CEO of Qantm AI, a strategic advisory firm, and an investor in founders building responsible AI companies at One Infinity Ventures. He previously served as president of the Responsible AI Institute and received his PhD in molecular and cellular biology from Arizona State University. Go Sun Devils. I met Dr. Seth recently at a CIO event in Philadelphia. Thanks to Hunter Moeller for hosting both of us on stage.

Dr. Seth's talk lit up the room and I just knew that you had to meet him. Without further ado, Seth, it's my pleasure to welcome you to AI and the Future of Work. Let's get started by having you share a bit more about that illustrious background and how you got into the space.

All right. Sounds good. Thanks again for having me, Dan. Really excited to be here. And then I'm going to take us off track after I introduce myself; I want to come back and talk about that article you referenced, because I've got a whole lot to say about it. But I think you did a lot of my introduction. So I've been in the AI space since I did my PhD, and people may scratch their heads and say, what the hell do molecular and cellular genetics have to do with AI? But back in the mid-to-late '90s,

towards the tail end of the Human Genome Project, genetics was one of the first two fields where machine learning and AI made the transition out of computer science and mathematics into real-world applications. The first two fields were really genetics slash genomics and astrophysics. And so I was involved in

developing some of the algorithms that were used for large-scale analysis of genetic data, the kind underlying services like 23andMe, as well as for identifying diseases using that kind of technology. I applied that same kind of approach in startups, academia, research institutions, and Fortune 500 companies, including when I landed at Monsanto in 2006,

where for the first five years I was at Monsanto, I led their product development pipeline, what's called molecular breeding, developing an approach and moving it from a research project into an industrial-scale program that is now how large-scale agricultural breeding works. And this is outside of even GMO, so whatever you think about GMO, this is how

breeding works in general, even small-scale breeding. And then after generating billions of dollars of value for the company doing that, I moved over and started applying it to general business problems. So that's when I made the leap from applying these tools

to scientifically focused problems to general business problems. And over the course of the next five years, my team and I generated an additional $20 billion or so of value, in cost savings and new revenue, for Monsanto. And so this is really the genesis of the book you referenced, which I didn't realize, until people started talking about it, was such a tongue twister. But you're not the first person to have the tongue-twister

issue with it. And in 2016, I left Monsanto to join IBM as their first-ever global chief AI officer and took this method that's in the book, this human-focused approach, and started applying it in transforming IBM, but also working with many of the world's largest customers at IBM, helping them

do the same thing that we did at Monsanto to transform using data and AI. I was also responsible for AI across the whole of the company at IBM, including the talent aspect of it. So how we hire people, what things we look for,

benchmarking, things like that, engagement overall. So, kind of leveling the playing field across what was at that point a 450,000-person organization. At one point, I think there were 75,000 data scientists at IBM. There weren't really that many, but that's how many people had that in their job title.

And so how do we get that down to how many there really are, so that when we benchmark them, the numbers are real. And so in 2022, I left IBM and started my own consulting company. And then earlier this year, I started the venture fund that you mentioned, One Infinity, which focuses on investing in responsible AI. It's a half-a-billion-dollar fund for early- to growth-stage companies.

You teased at the start of that answer that you had some thoughts on AI and academia. So it's AI and attribution; the reference was about AI and attribution. If you look at generative AI, probably about a year ago now, there was a paper talking about attribution in generative AI models in general. So if you were to ask one of these generative AI systems to provide references for you, 46% of the time,

the references are completely wrong, have nothing to do with the response. So even to the author's point, even if it could give you the reference, 46% of the time, the attribution would be completely irrelevant to what it was. 47% of the time, it might be the right topic, but it would be the wrong reference. So that means that only 7% of the time would that attribution even be correct.

And so even when they do give attribution, it's almost always wrong. Now, it's gotten a little better since then. No one's done a follow-up paper, to the best of my knowledge.

But so there's a bigger problem. Even if you were getting attribution, these models can't do the attribution. However, there are technologies out there that could enable us to do what you suggested, Dan, which is: how do we allow people to own their attributions? And this is around something that's coming up that I think a lot of VCs are starting to invest in, our fund included, which is this intersection of Web3 and AI,

which is where you start creating economies around things of value. In this case, it would be knowledge, right? Academic knowledge. And it would be how companies like OpenAI or Anthropic or other model builders would engage with these papers. How would they reference them, and how would they compensate the authors? And it doesn't have to be monetary; there are other ways that they could compensate these authors.

And how would they reference them and provide some kind of benefit back to the authors? And these are stored on a blockchain, so it's immutable. And so there's lots of opportunity to create communities driven by a token economy, much like Bitcoin, but it's not Bitcoin; it's not crypto. It's a different kind of economy, driven by an independent community. And so there are technologies that you could actually build

to drive an economy to help start to address this problem. I don't know that it would solve it, but at least on the ownership-of-the-information side, it could prevent these companies from just taking the information. And still have it be public, right? So that you can satisfy the granting agencies. So NSF and NIH in the US, their UK equivalents, a lot of granting organizations require that you make your information public.

And so now these companies can just take it and not provide proper attribution. So how do you balance those two? Something like this intersection of a Web3 and AI economy could help drive that. So storing the attribution on chain would solve the problem of the LLM hallucinating the attribution, but you'd still have a problem. Yeah, so not necessarily. Okay, because what I envisioned was the author, the human author, claiming through some kind of a Web3 marketplace

the ownership of that content and then that gets put on chain and the LLM would have to reference the blockchain to understand the owner. The question I was going to ask, which I'm curious to get your take on is, then how do you verify the authenticity of the person claiming ownership? Yeah, so the author would put it on the chain themselves.

What if it's a non-author claiming ownership of that content? Or you could have NCBI put it on the chain, right? So you could have someone like that put it on the chain. A trusted authority, a third party. A trusted authority put it on the chain, something like that. So the model hallucinating is inherent in the architecture of the model; just having something on a chain is not going to prevent it from hallucinating. Having a RAG architecture, retrieval-augmented generation, helps. But at least for the problem of attribution?

Right. For anything. Nothing eliminates hallucinations, right? Hallucinations are inherent in the architecture. And so until we move to a new architecture, hallucinations are here. You can reduce them close to zero, but you're never going to get rid of them. But it will help with ownership of the attributions, and it can help with validation of the attributions. So this is just one slice of a much broader responsible AI conversation.
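
As a rough illustration of the retrieval-augmented generation (RAG) pattern mentioned here, the following minimal Python sketch answers only from retrieved documents, so every claim carries a checkable reference; the toy corpus, the token-overlap retriever, and the `answer_with_citations` helper are hypothetical stand-ins, not any vendor's API.

```python
import re

# Toy corpus keyed by (made-up) document identifiers.
CORPUS = {
    "doi:10.0000/paper-a": "Machine learning methods for large-scale genomics analysis.",
    "doi:10.0000/paper-b": "Retrieval augmented generation reduces hallucination rates.",
    "doi:10.0000/paper-c": "Blockchain records provide immutable attribution of authorship.",
}

def tokenize(text: str) -> set[str]:
    # Lowercase word tokens; a stand-in for a real embedding model.
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    # Rank documents by token overlap with the query.
    q = tokenize(query)
    ranked = sorted(CORPUS.items(), key=lambda kv: len(q & tokenize(kv[1])), reverse=True)
    return ranked[:k]

def answer_with_citations(query: str) -> str:
    # Answer only from retrieved passages, so every claim has a traceable source.
    hits = retrieve(query)
    context = " ".join(text for _, text in hits)
    refs = "; ".join(doc_id for doc_id, _ in hits)
    # A real system would hand `context` to an LLM; here we simply echo it.
    return f"{context} [sources: {refs}]"

print(answer_with_citations("How does retrieval augmented generation reduce hallucination?"))
```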

In your travels, and I know you travel extensively, what are enterprise leaders talking about when it comes to the topic of responsible AI? Yeah, so I mean, there's a few things. Top of mind today is regulation, right? The EU AI Act was just released

a couple of months ago, and it's law now. There's a timeline for it. It's still a bit murky what it means to comply with it. It's even still a bit murky what a high-risk use case is. And so there's concern about that. I think there are some clear cases of what high-risk use cases are. But what does it mean to comply?

We still really need to figure it out, but I think for sure you need to get audited. You need to do ISO audits or whatever. I think three of the four required audits are going to be done against ISO standards; for the one remaining required audit, the EU is still determining what you're going to be audited against.

The challenge is there's no certified auditor for anything except automated employment and automated lending right now. So there are lots of use cases where there's no certified auditor. You've got a few years to worry about it, but how do you do your audit? And those are only for one standard; that's only for ISO 42001. What about the other standards? How do you get audited? So there's still a lot of confusion about how do I even start to comply

with these, because third-party audits are a big part of the regulation. There's a lot of concern about, okay, if I'm going to do these, how am I going to get my team on board? Because there's a lot of information I need from my teams when you think about how we're building these models.

how do I get my team to start collecting all this information? Are there tools out there that can help me collect all this information? Because as my team is building models, they need to do this. As I'm buying models, how do I get all this information? How susceptible is my supply chain? Do I need to worry about my supply chain, my AI supply chain? Am I responsible for it? What types of indemnification do I need to have as part of this?

And so things like that around regulation are really, really important. And I think my response, especially on indemnification, is:

We don't know the liability that comes with generative AI today. We know most models violate, at the very least, every data protection regulation that's out there. We don't know to what extent, because it hasn't been litigated yet. And so I wouldn't buy any of these generative AI models without full indemnification. Most of them come with it, at least from that perspective. But I would be careful about engaging with any vendor that doesn't indemnify you

against litigation and things like that. But most of them come with it anyway. Yeah, so I think those are the big things. Regulation is always top of mind today with most of the big companies. And then a lot of companies are also asking, how do I use generative AI? But that's absolutely the wrong question to be asking right now. Because you travel internationally a lot, I'm curious to get your perspective beyond kind of the EU AI Act:

cultural differences in terms of attitudes, and then also just comments on the geopolitics of AI. Yeah. It's interesting, you fly around the world and you talk to different companies, and everyone wants to know who's ahead in AI, what part of the world.

You know, it's really not a part of the world that's ahead. And it's not even an industry that's ahead, especially when you start looking at generative AI. It's hit or miss, right? It's a company here, a company there, because it's very cultural from a company perspective. It's not even cultural from a part-of-the-world perspective. It's

company culture at this point because we're so early in the game, right? Which companies are very good at adoption? Which companies are very good at change? Which companies know how to do this very, very well and can adopt new technologies and are willing to adopt new technologies very quickly? So we're not even at a stage where there's any one industry or any one company that's better or any one country that's better at this. In terms of the geopolitics,

There are a few interrelated things going on. So one is generative AI is going to impact employment. I think net-net, in 10 years, and I think this is the general consensus, we're going to see an increase in jobs. We're in an industrial revolution; that's usually what happens. We've seen it in the past. Industrial revolutions produce new jobs, regardless of what we think when we're in them.

If you look at the past, we've always thought jobs are going to go away, we end up creating more. I think the big difference between what we're going through now and what has happened in the past is past industrial revolutions have occurred over three to five generations. Generations are 15 to 25 years. So you've had multiple cycles of people entering and leaving the workforce to transition, jobs leaving the workforce, and new jobs being created. The general consensus with this industrial revolution is it's going to happen within that 10-year period.

And so how do we take care of the people that are in the workforce today that are gonna lose their jobs? Because there are gonna be people that are gonna lose their jobs because of AI. AI is going to get rid of some jobs and create new jobs, full stop. Jobs are gonna go away and new jobs are gonna be created. That's what an industrial revolution does. How do we transition those people that are gonna lose their jobs and help them get new jobs?

I think that's where you're seeing some parts of the world do much better than others. I'm in Riyadh right now, and Saudi has something called Saudization, under which Saudi companies are required to reskill and upskill all of their employees on AI.

Which is incredibly powerful, because as jobs go away, their employees now have skills of the future. So yes, their job doing X, Y, or Z went away. Say they were a paralegal before, just picking a job; some percentage of paralegals are probably going to go away. Well, now they can go do another job that still takes advantage of their education.

But they can now do this other job that they've been trained to do. I don't yet know what that job is, but they know how to engage with AI. They can do it productively, and they can now go do something else and still be a productive member of society. Because Saudi is requiring the companies to invest in their employees. You see that in a few other parts of the world where the government is requiring the companies to invest in their employees. Because the companies are the ones that are benefiting from the Industrial Revolution.

And if you look at what McKinsey and others are saying, you're seeing tens of trillions of dollars added to GDP by 2030. If we don't invest in our employees, if there is mass unemployment because jobs go away, we are not going to see those tens of trillions of dollars of GDP, because people are not going to be engaged in the economy. It is to the benefit of the companies that we retrain and upskill our employees.

The markets are not set up to do that though. Public companies are not incentivized to invest in their employees.

They're incentivized to generate cost savings as quickly as possible. We need to change the incentive structure if we want to do that. So there's a conflict here, right, in terms of what the incentive structure is in the marketplace today. So that's one thing. I'll pause there and see if you have anything you want to say about that before I go on to the other one. Yeah, so I'm not letting you off the hook, because this is where it gets interesting, right? So

We say, get the benefits of that increased GDP. That's from improved productivity, improved profitability. Productivity means jobs go away. Exactly. That's where I was going. So that's a proxy for jobs going away. So let's take the paralegal. You're right to say it requires upskilling and reskilling. I think the hard part of the conversation is telling the employer that

your team is going to be, let's say, 30% more productive. So what the employer hears is, great, I can reduce my headcount by a third. And what the employee hears is, I thought this was going to create jobs. What's the job that's going to be created for me? So if I'm that paralegal, how do I navigate this transition knowing that I am caught in the crosshairs? Right. And so I think, and this gets to

The transition, right? So this gets to good transformation. My experience transforming companies is that a big part of the best way to transform a company is trust. And the way to start that trust is honesty. So when I started the transformation at Monsanto, we were completely changing jobs. And I brought my team together and I said, in six months' time, none of your jobs are going to exist.

There are going to be four new jobs that are going to exist. I'm going to give you time at work; 20% of your time is going to be dedicated to fitting into one of these four jobs. I'm going to sign everyone up for Coursera. I'm going to pay for certifications. Your job is to pick one of these jobs and get all the certifications. And if you do it, the job's yours. If you don't, I'll give you a package. If you don't want to do it, I'll be a reference for you. I'll help you get another job in or out of the company.

With that kind of transparency, everyone knew where they stood. Everyone knew what their responsibilities were. There was very little concern about people losing their jobs, because people knew what the net result would be. The net result was 60% of the people kept their jobs. About 30% of the people left before it was time for packages. About 10% of the people got packages.

And those 30%, I helped them get jobs. My team helped them get jobs. They were perfectly happy because they didn't show up one day and not have a job. Even the 10% of the people that got packages, they knew they were getting packages. In fact, they waited to get jobs because they wanted the packages.

And so that kind of candor and honesty with your team helps. And so if you go to those paralegals and you say, look, here's the skills you need. 30% of you are going to lose your jobs. Let's help you upskill. Let's help you get new jobs. Let's help you figure out how you fit into the workforce, whether it's here or somewhere else in the future. That's a whole different conversation than we're going to make productivity gains and sorry, you're on your own.

Because these people have been part of getting you to where you are. Companies are made up of human beings, right? And we have to remember that they're not resources; they're human beings. And you have to take care of them as human beings. And I think the higher up you get in the organization, the more we talk about human beings in organizations as resources. And we have to remind ourselves that they're not resources; they're human beings.

That's a very satisfying answer. And by the way, that transcends AI; that's nothing about AI. That's just about what it means to be a human-first leader. And presumably at Monsanto that predated AI, right? Yeah, well, it was the beginning of, well, it was machine learning. Okay, so you mentioned that previous industrial revolutions take, call it, a generation to play out. Multiple generations. Multiple, yeah, I mean. Three to five generations is what they took. Yeah, they span decades.

Yeah, certainly. And we talk about the first industrial revolution, 1760 to 1830, 70 years. And here we are.

less than two years into the launch of ChatGPT on an unsuspecting global population. And we're already hyperventilating about the trough of disillusionment: it's not returning our investment quickly enough, and we've overinvested in infrastructure.

Is all of this hand-wringing premature, or is there just something fundamentally different that warrants all of the chaos? I think there's something fundamentally different. I mean, there was an article two days ago that said we're already through the hype cycle. It took us seven years to get through the cloud hype cycle. So there's something fundamentally different about this. The technology has advanced at an unprecedented rate. Do you agree that we're through the hype cycle?

Yeah, I do. But I think it's also a little bit concerning for a few reasons. I think it's concerning for the reasons that we talked about. People aren't taking the human impact seriously enough. There's another human impact that's more globally cultural that I term technological colonialism that we can talk about.

There are inherent technological issues with the architecture that most people have no idea about, which I write about extensively in my newsletter. That's very concerning: vendors come in and kind of poo-poo it away, saying our technology can completely get rid of it, when no technology can completely get rid of it. And so those three things combined make it

very concerning that we're moving through this so fast, because there's no time for any of those to resolve themselves. There's no time for the startup ecosystem to be able to address them, because I think the startup ecosystem drives some of the innovation that either replaces some of the bigger companies or gets consumed by the bigger companies to fix some of those things.

So that innovation kind of helps to fix some of those things before it gets sucked up by the bigger companies. And there's just no time for that, because this is happening so fast. So that's a little bit concerning for me. The reason I question where we're at in the hype cycle is that the technology is comically immature. I mean, in so many ways. You mentioned hallucinations; that's one vector. But also, they're expensive, and

they're energy-inefficient. We're primarily restricted by, you know, silly text-box interfaces. They have no experiential awareness of the world. So to me, there's a lot of maturation that has to happen before we're even ready to call the technology a beta. And yet, how can

the wave have crested yet? That's my perspective. Yeah, I mean, I think there's a race to the bottom on cost. I think cost is a non-issue. So I'm not worried about cost.

On energy: with the race to the bottom on cost, the three big vendors are pretty much eating the energy cost. And I liken the energy issue to the oil crises, right? We've been running out of oil for 50 years. We still have plenty of oil.

That'll get resolved through technology, new chips. There's an environmental issue for sure, whether it's electricity or developing new chips, because microprocessors are notoriously non-recyclable, right? So whether we're throwing away old GPUs that we can't recycle because they're full of rare earth metals, or

whatever, there's an environmental crisis looming, whatever it looks like. So those are definitely issues. The technological issues relate to hallucinations, but there are also others associated with that. And the text-box thing, I think that's one of the upsides. You don't need a fancy interface. You can talk to these things.

I would say we're probably going to move away from screen-based interfaces over the next few years. I think you're going to see a lot of conversational interfaces. So I would actually say that's a feature, not a bug, in the technology. So I envision a world where the true promise of these technologies is in the medical field, in medicine,

the military, industry, things that really matter, like global warming, cures for disease; things that I feel will really have a meaningful impact on life, far beyond writing haikus about cats or editing marketing documents. So when I say the wave hasn't crested, I think about

all of the positive impact on society and humanity that the technology can have; we haven't started dreaming big enough yet. Yeah, I mean, this is just my opinion, or my perspective, I guess: most dual-use technologies, and AI is a dual-use technology,

you can do really great things with them and you can do really horrible things with them. And I think AI is no different, right? You can do some incredibly harmful things with AI, and there are some amazing things that you'll be able to do with it, right? And you rattled off a few.

I think AI can do some tremendously good things. I think it can help cure diseases. I think it can actually help reduce or eliminate biases in a lot of critical decisions that impact human well-being. And I frame those as the health, wealth, or livelihood of humans. Bias is a human construct, right? Bias is caused by past decisions of humans. We can develop algorithms that account for that

and prevent the systems from generating those biases that are inherent in the things that we do. We are biased whether we want to be or not, right? It's just part of human nature. Algorithms are not; math is not biased. And so AI can actually

eliminate that. Now, bias is a cultural construct. What's biased in the US is different than what's biased in China, is different than what's biased in India, is different than what's biased in Africa and South America. And so we have to be very, very careful about imposing our cultural constructs on the rest of the world. And so we have to be careful about not ingraining

our cultural perspectives into these models.

Because then we're inflicting our culture on the rest of the world, which gets into that whole concept I was alluding to of technological colonialism. And there are many things underneath that, because most of these models, the vast majority of them, if not all, are being built in China and the US. So essentially, you have two cultural perspectives being inflicted on the rest of the world right now. And it's Chinese men, Indian men, and white men that are primarily the ones developing these models, overwhelmingly.
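
As a small illustration of the kind of algorithmic bias check Dr. Dobrin alludes to, here is a minimal Python sketch that compares selection rates across groups (demographic parity); the sample data and the 0.8 disparity threshold (the informal "four-fifths rule") are illustrative assumptions, not from the episode.

```python
def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    # Fraction of positive decisions per group.
    totals: dict[str, list[int]] = {}
    for group, approved in decisions:
        t = totals.setdefault(group, [0, 0])
        t[0] += int(approved)  # positive decisions
        t[1] += 1              # total decisions
    return {g: pos / n for g, (pos, n) in totals.items()}

# Hypothetical decisions: (group label, approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = selection_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
# A ratio below 0.8 would flag the decision process for review.
print(rates, "disparity ratio:", round(ratio, 2))
```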

I've been reading your newsletter, about your perspective on technological colonialism. And if I can maybe convince you to come back, I'd like to have another conversation just about that. I think it's a really important topic, and I have to get you off the hot seat in a minute, so I'm not going to have time to unpack it, but I want to put a pin in it. It's a really important concept, and I think you do a really good job of articulating it.

Before I let you off the hot seat, you've got to answer one last question for me. Sure. So we're talking about the rapid pace of innovation. Let's say Seth and Dan are back here in 2034, 10 years from now, and we're having a version of this conversation, which, based on the current pace, seems like an eternity. But 10 years from now, what's one thing that is different and commonplace at work that today would seem like science fiction?

Yeah, I mean, I think this is the hardest question to answer. People usually ask this about five years out, and I say it's impossible to answer; 10 years is even more impossible. But I think we talked about it a second ago, right? I think the way we interact with technology is going to be remarkably different. You know, Apple started to get there with the Vision Pro. I'm on my laptop right now; I think that's going to be a thing of the past.

I think there are going to be a lot more conversational interfaces. There are going to be a lot more VR and AR interfaces. And that's all going to be AI-driven, right? I mean, it's going to be wearables of some kind that are the way you interact with the world. It's not going to be computers.

You know what, you and I are going to have a lot of time between now and 2034 to unpack some of these topics. It's been a fantastic conversation. Thanks for hanging out. I know it's late where you are; I appreciate you doing this. I do want you to tell the audience: where can they learn more about you and the great work that you're doing? Yeah, so you can head over to my LinkedIn. Just look up Dr. Seth Dobrin on LinkedIn; I'll come up. Or you can go to my Substack, SiliconSansNews.com.

And subscribe to that. Get everything free for a month. If you want to look at old stuff, you have to subscribe. Subscriptions are less than a cup of coffee.

And yeah, feel free to connect with me on LinkedIn. Thanks again for having me, Dan. Really appreciate it. It's been fun. Absolutely my pleasure. Thanks for doing this. The book is, in fact, a tongue twister: "AI IQ for a Human-Focused Future." How was that, Seth? Did I get it right? Perfect. Yeah. Excellent. Go out and buy the book. And thanks to everyone for listening. As always, I'm your host, Dan Turchin, from PeopleReign.

And of course, we're back next week with another fascinating guest.