Welcome to Money for the Rest of Us. This is a personal finance show on money, how it works, how to invest it, and how to live without worrying about it. I'm your host, David Stein. Today's episode 417. It's titled, Will Generative AI Take Over Your Job?
Almost eight years ago, in September 2015, we released episode 72, Will a Robot Take Over Your Job? We concluded that computers and robots will eventually be able to do everything. But as humans, we can decide what we are willing to let robots do and what we want humans to do.
Two years later, in December 2017, we revisited the topic in episode 184, Massive Job Losses Are Inevitable, But There Will Still Be Work. We discussed how automation and technology, including robots, have always led to massive job losses.
But the increases in efficiency and productivity lead to higher overall average incomes that can be spent on goods and services, and that spending, those higher incomes, creates more jobs. Now, that's not to suggest that no individuals are impacted; some lose their work due to automation and maybe aren't able to get back on their feet. But across the entire economy, higher incomes due to increased efficiency and productivity create more jobs. We can think about the jobs in travel-related industries, or consider how many more coaches, counselors, therapists, and psychologists there are as people seek help to manage mental health challenges or just to improve themselves mentally and emotionally as they face the world.
Four years ago, in episode 198, Capitalism Is Creation, I shared a quote from author and computer scientist Kai-Fu Lee. He's the author of the book AI Superpowers: China, Silicon Valley, and the New World Order, which came out in 2018. Lee said, we're all going to face a very challenging next 15 or 20 years, when half the jobs are going to be replaced by machines. Humans have never seen this scale of massive job decimation. Referring to AI, he said that one of the reasons we exist is to create, and that AI perhaps can't do that as well. Now, again, this was back in 2018. Today we're seeing generative AI, where AI takes its training data set and creates things that have never existed before, such as images and paragraphs. Lee continues: one such direction is that we create. We invent things. We celebrate creation. We're very creative about the scientific process, about curing diseases, about writing books, writing movies, creative about telling stories, doing a brilliant job in marketing. This is our creativity that we should celebrate, and that is perhaps what makes us human. One of the outstanding questions, particularly as we see recent versions of AI, is whether they are becoming more human-like in their ability to create.
I was particularly struck by an article I read recently. I had been thinking that my job as an educator, podcaster, and investment strategist was somewhat protected. And then there was this article in Futurism. The headline was: CNET Is Quietly Publishing Entire Articles Generated by AI. The byline for these articles is CNET Money. The articles are financial explainers, such as What Is an Annual Percentage Yield?, What Is Compound Interest?, and Should You Break a CD Early for a Better Rate? If you look at the About section describing who CNET Money is, it says: this article was assisted by an AI engine and reviewed, fact-checked, and edited by our editorial staff. It doesn't say whether anybody wrote it. The article was assisted. Does that mean the article was written? Did the editor write the article? Who wrote the article? If you look at the article on breaking a CD early, it's a decent article. It covers the basics. It's also ranked on the first page of Google results if you query, should I break a CD early? In the middle of that article, there's a list that's an advertisement for best CD rates, and a little later, best online savings accounts.
CNET is doing this because a reader who wants to read about breaking a CD early may then decide to go look at other CDs that might have a higher rate. That would be one reason to break a CD early: you could get a higher rate somewhere else, and the gain from that higher rate is bigger than any penalty you might pay on the existing CD. If the reader clicks on that link and buys a CD, then CNET gets a commission.
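Here's a minimal sketch of the break-even arithmetic that kind of article walks through. The balance, rates, and penalty below are hypothetical numbers for illustration, not figures from the CNET piece.

```python
# Break a CD early if the extra interest from a higher rate exceeds the
# early-withdrawal penalty. All numbers here are hypothetical.
balance = 10_000.0
current_rate = 0.01     # existing CD yields 1% annually
new_rate = 0.045        # a new CD yields 4.5% annually
months_remaining = 12   # time left on the existing CD
penalty = 90.0          # early-withdrawal penalty, e.g. a few months of interest

extra_interest = balance * (new_rate - current_rate) * months_remaining / 12
print(f"Extra interest: ${extra_interest:.2f}, penalty: ${penalty:.2f}")
print("Worth breaking the CD" if extra_interest > penalty else "Keep the CD")
```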
And we're not talking about small dollars here. NerdWallet is a website that started out as a personal finance site, I think 10 or 12 years ago, not that much longer before Money for the Rest of Us started. They've been incredibly successful at their model. They're publicly traded; they went public last year. Their third quarter revenue, released November 2nd, was up 45%, with total revenue of $145 million. They make money mostly through commissions on credit cards, loans, and other financial products. They say: with our trusted financial ecosystem, we will provide a single platform that consumers and small and medium-sized businesses can rely on to learn about various financial topics, shop for products, connect their data, and receive data-driven nudges. They earn commissions. They write articles. And now AI can generate those articles much more quickly and get those articles ranked.
Now, how much of that article on breaking a CD early was human input versus the AI engine?
Brett and I were discussing that CNET article in our team meeting last week, and while we were having the conversation, Brett asked the artificial intelligence program ChatGPT to create a podcast script for Money for the Rest of Us. ChatGPT is a chatbot. It was launched as a prototype by OpenAI on November 30th, 2022. It took ChatGPT about 90 seconds to come up with a podcast script for our podcast. We didn't give ChatGPT a topic; the query was just, create a podcast script for Money for the Rest of Us. Here's the script. It says: intro music plays. Host, David Stein. That's me. Welcome to Money for the Rest of Us, the podcast that helps you navigate the complex world of investing. I'm your host, David Stein, and on today's episode, we're going to talk about the current state of the bond market and what it means for investors. But first, a quick reminder to our listeners that the information provided on this podcast is for general informational purposes only and is not intended as investment advice. Always consult with a financial professional before making any investment decision. Now let's dive into the bond market. In recent months, we've seen interest rates rising, and that has caused some concern among investors. But what does that mean for the bond market? To start, it's important to understand that when interest rates rise, the value of existing bonds goes down. This is because new bonds will be issued at higher rates,
making older bonds less attractive to investors. However, it's important to note that this doesn't mean that all bonds are a bad investment. In fact, bonds can still provide a stable source of income and diversification in a portfolio. One strategy for navigating a rising interest rate environment is to focus on short-term bonds. These bonds have a shorter maturity date and are less affected
by changes in interest rates. Another strategy is to look at bonds with a lower credit rating, also known as junk bonds. These bonds tend to have higher yields, which can offset the impact
of rising interest rates. It's also important to remember that the bond market is not a one-size-fits-all investment. Each investor's situation and goals are unique, and a financial advisor can help determine the best bond strategy for you. So while the current state of the bond market may be causing some concern, it's important to remember that bonds can still play an important role in a well-diversified portfolio.
Thanks for tuning in to this episode of Money for the Rest of Us. We'll be back next week with more insights on the world of investing. In the meantime, be sure to visit the website for more information and resources to help you make more informed investment decisions. Outro music plays.
It wasn't bad. To me, the most shocking thing was that ChatGPT knew something that most bond investors don't. I say knew because, as we'll see in a few minutes when we look at how ChatGPT works, ChatGPT doesn't know anything from a knowledge standpoint. It just knows how to string words together. But the words it strung together were that non-investment grade bonds are less sensitive to interest rate fluctuations
because they have higher cash flows. That's an element of what is known as convexity: as interest rates fall, bonds become more sensitive to interest rates; their durations extend out. A lower-yielding bond is more interest rate sensitive. Its price will fluctuate more as interest rates change than a higher-yielding bond with higher cash flows, even if the bonds have the same maturity. A 30-year Treasury bond with a 4% coupon has a 17.8-year duration, whereas a 30-year Treasury bond with a 1% coupon, or interest rate, has a 26-year duration. ChatGPT didn't really mention that. If it had, it would have been incredible, especially if it did a better job explaining it than I just did. But it did discuss that non-investment grade bonds with higher cash flows are less interest rate sensitive.
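Here's a minimal sketch of that duration arithmetic, assuming a plain bond priced at par with semiannual coupons; under those assumptions it roughly reproduces the figures I just quoted.

```python
# Macaulay duration: the cash-flow-weighted average time to a bond's payments.
# Higher coupons put more weight on earlier cash flows, shortening duration.

def macaulay_duration(coupon_rate, yield_rate, years, freq=2, face=100.0):
    y = yield_rate / freq            # per-period yield
    c = coupon_rate / freq * face    # per-period coupon payment
    n = years * freq                 # number of periods
    discounted = [c / (1 + y) ** t for t in range(1, n + 1)]
    discounted[-1] += face / (1 + y) ** n   # add discounted principal at maturity
    price = sum(discounted)
    weighted = sum(t * cf for t, cf in enumerate(discounted, start=1))
    return weighted / price / freq   # convert periods back to years

# A 30-year bond with a 4% coupon at a 4% yield vs. a 1% coupon at a 1% yield
print(round(macaulay_duration(0.04, 0.04, 30), 1))  # ~17.7 years
print(round(macaulay_duration(0.01, 0.01, 30), 1))  # ~26.0 years
```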
So who created this ChatGPT? It was created by OpenAI. They began as a nonprofit in 2015 with grants from Elon Musk and others, Reid Hoffman of LinkedIn, for example. Now they've sort of pivoted. They're going more corporate. They're raising funds from investors, including a billion dollars from Microsoft.
OpenAI is a research lab, and with that billion dollars, they have been renting machine time on supercomputers to build better AI models. Microsoft wants to use OpenAI's products and is starting to incorporate them in Azure, the cloud computing platform Microsoft runs that's available to businesses. They want to incorporate it in Bing.
OpenAI CEO Sam Altman said of these AI tools, they are incredibly embryonic right now, but as they develop, the creativity boost and new superpowers we get, none of us will want to go back.
He's suggesting that AI will actually improve our creative abilities, that we'll be better at what we do because of AI rather than AI replacing what we do. But clearly, AI can replace what we do if AI is already generating personal finance articles that compete with other platforms where there are human writers.
In episode 256, we took a closer look at how artificial intelligence is changing investing, including some of the AI models out there, such as the one OpenAI uses. ChatGPT is based on GPT-3.5. It's what's known as a large language model. It looks for patterns, statistical connections, to decide what word comes next in a sentence. At its core, it's an incredibly sophisticated word predictor. It's not knowledge; it just knows what word should go next in a sentence. These large language models are trained by incorporating huge amounts of data.
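To make "word predictor" concrete, here's a toy sketch of the idea. The probabilities below are made up, and real models learn distributions over tens of thousands of tokens with billions of parameters, but the core move, picking the next word by statistical likelihood, is the same.

```python
import random

# Made-up next-word probabilities, keyed by the two preceding words.
# A real large language model learns these statistics from terabytes of text.
next_word_probs = {
    ("interest", "rates"): {"rise": 0.4, "fall": 0.3, "remain": 0.3},
    ("rates", "rise"): {"bond": 0.5, "the": 0.3, "investors": 0.2},
}

def predict_next(context):
    """Sample the next word in proportion to its learned probability."""
    probs = next_word_probs[context]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

print(predict_next(("interest", "rates")))  # e.g. "rise"
```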
OpenAI's GPT-3 was trained on 45 terabytes of text data. It came from crawling the web, from books, from Wikipedia. And then it went through about three years' worth of processing, taking all those reams and reams of input and forming the connections. There were also staff involved in providing some of the training. They did it on supercomputers, and at the end of the day, they had this black box with multiple layers that can be queried and spits out an answer, all based on connections between data. It's like a neural network. But the scientists that created it don't know exactly what all those connections are. They don't know how ChatGPT came up with a specific answer. How did it decide, of all the topics, to generate a podcast script on bond investing? Or to use the examples that it did? It doesn't just copy someone else's work and reproduce it. It takes that work, let's say the work of Wikipedia, or hopefully, ideally, things that aren't copyrighted, that are in the public domain, and then it makes all these connections to where the output sounds incredibly familiar. Before we continue, let me pause and share some words from this week's sponsors.
David Weinberger, in his book Everyday Chaos, wrote: deep learning algorithms work because they capture, better than any human can, the complexity, fluidity, and even beauty of a universe in which everything affects everything else all at once. The scale and connectedness of machine learning results in their complexity. The connections among the huge number of pieces can sometimes lead to chains of events that end up wildly far from where they started. Tiny differences can cause these systems to take unexpectedly sharp turns. We don't use these technologies because they're huge, connected, and complex. We use them because they work, sometimes, most of the time. But because they're huge, connected, and complex, they're not always correct. One of the problems is how robust and how accurate the data set used to train these AI systems is.
Some of the software forums, code-writing forums, have had to suspend answers provided by ChatGPT because the answers are incorrect. They might look good, the moderators said, I think this was Stack Overflow, and they were super easy to produce. So people were flooding these forums with answers created by ChatGPT. If somebody asks a question on the forum, they run it through ChatGPT to get the answer, and then that's posted: super easy to produce, but with a high rate of being incorrect, even though it looks plausible, because it's just predicting what the next word is. There are some other problems with these AI models, including image generators. DALL-E is another AI model by OpenAI. It can create an image based on some text. If you type in, like in the YouTube video example, a koala bear dunking a basketball, it will generate an image, a never-before-seen image, based on that query.
But those images could be offensive in some AI models. A word generator could generate hate speech. And because it's creating things, images, paragraphs, based on huge amounts of data, those connections can sometimes produce something that sounds very similar to something that may be copyrighted. Think about the lawsuits there have been where somebody writes a song, it gets produced, and they get sued because it sounds so similar to someone else's song. Or a joke that was used that was very similar to another joke. In many cases, it wasn't a case of plagiarism; it was just coincidence. Given the huge data sets behind these AI tools, they will create things that look very similar to what's already out there.
Another thing to consider is the ethics of it. Should we even be doing this? If you go to the OpenAI website, the homepage says: we're a company of builders who care deeply about real-world implications and applications. Many of our most considerable advances grow from working across multiple teams. We are unafraid to go against established norms and explore creativity. They're willing to go against established norms. Is that ethically correct? Are they so excited about the technology that they just release it, and we'll deal with the consequences as we see them? One of the issues is whether these should be closed or open systems.
We aren't given much information about how GPT-3.5, which is the latest version (GPT-4 is on the way), was trained. Emad Mostaque, the founder of Stability AI, another AI firm (he's a former hedge fund manager), said: you can interrogate the data sets, you can interrogate the model, you can interrogate the code of Stable Diffusion, which is one of their models, and the other things we're doing, and we're seeing it be improved all the time. So in their case, they're taking an open approach, letting everybody see everything and make suggestions. OpenAI isn't necessarily doing that. They're making their models available, but not necessarily how they were created. But the question is, should we just allow these AI models out there? I don't see them not being out there. It's a question of how we choose to use them.
Take the world of education. Colleges and universities are figuring out how to adapt to this technology, ChatGPT, that's only been available for a couple of months. There was an article in the New York Times featuring Antony Aumann, a professor of philosophy at Northern Michigan University. An essay submitted in his class was, he said, the best paper in the class. It had very clean paragraphs, fitting examples, and rigorous argument. That raised a red flag for the professor, and he asked the student, who admitted he had used ChatGPT to generate the essay. Now universities are figuring out what to do. Perhaps they'll give more oral exams, assign more group work, or make assignments handwritten rather than typed. Professor Aumann says they're incorporating ChatGPT into the discussion. He says, what's happening in class is no longer going to be, here are some questions, let's talk about it between us human beings. He said, it's like, what also does this alien robot, ChatGPT, think? But again, is that thinking: knowing what word to put next based on a high statistical probability that it should be the next word?
6,000 teachers from Harvard, Yale, the University of Rhode Island, and elsewhere have signed up to use something called GPTZero, a program that's supposed to detect AI-generated text. We'll see if it ends up doing that.
We also asked ChatGPT about investing using AI. We asked for another script, and I won't read you the whole script about how AI has the potential to revolutionize the way we invest, but it pointed out some unique things.
It says one of the benefits of AI in investing is that it can process vast amounts of data much faster than humans can. And this can give investors an edge by identifying opportunities that humans might miss. AI can be programmed to identify patterns and make predictions that are unbiased and free of human emotions, which can be a major advantage over traditional human-led investment strategies.
However, there are also potential risks associated with AI in the stock market. If a large number of investors use the same AI system, it could lead to market inefficiencies and increased risk.
The herding effect. It didn't use the word herding, but it's pointing out that if everyone's using the same quant model, the same AI, it could lead to herding that pushes up the price of certain stocks to where they become overvalued. The AI-generated Money for the Rest of Us script on AI investing said: AI systems are only as good as the data they are fed, and if the data is inaccurate or biased, the AI's predictions will be as well.
The AI-generated script also gave an advantage to humans. It says AI is not capable of understanding the fundamentals of a company, like its management, competitive landscape, and industry trends. Humans are still needed to combine the insights from AI with their own research and understanding of the underlying fundamentals of the company.
Generative AI is creating things that didn't exist, based on being trained on what already exists and making novel connections using terabyte after terabyte of data. And we don't really understand what those connections are; all we get to see is the output. Who do you trust, then? A non-sentient statistical word predictor, an incredibly good one? An AI that generates art?
Now, keep in mind that the OpenAI tools like DALL-E and ChatGPT are just the tip of the iceberg. The sophistication, I suspect, is much greater than just what we're seeing. So the scripts generated for our show were okay. They had some insights, but we couldn't publish them as an episode. What is missing? What about the experience that we have, the wisdom, the stories, the things that make us human?
There are areas where using generative AI will be helpful. Producing average copy, a basic finance article to put up on the web, it can do that. It doesn't take a whole lot of creativity. And perhaps this is something Seth Godin pointed out with regard to copywriting: maybe one reason AI-generated copywriting, which includes finance articles, is so average is because most of what it's been fed has been average over time. If we're different, if we're unique, better than average, then we can get an edge over generative AI. But it raises the question, what is intelligence? Is being intelligent knowing what, statistically speaking, should be the next word in a sentence?
Or is it knowing what the next word should be because we're alive?
We had ChatGPT create some sales copy for Money for the Rest of Us Plus, just to see what it would generate. It was good, not great. It even used the phrase take your business to the next level, which is a term I don't particularly care for because it's used so much in copywriting, though we've used those same words in ad reads from sponsors. But does judgment matter? Nuance? Is it enough just to know what the next word should be, statistically speaking, or is intelligence more than that? Can we be better than that as creators?
We're going to have to be. Otherwise, generative AI will take over our jobs, if our job comes down to statistically producing what the next text could be, or the next widget, or something that can be done automatically. More and more, it's moving into words and images; it's not just on the factory floor making widgets. That can be frightening to people. It's frightening to me, but also fascinating. We talk about bias. Who suffers more from bias: an AI engine that's been fed reams and reams of data, or a human? We all have biases. This is happening so quickly. It's a fascinating development. There will be areas where generative AI takes over jobs.
But we could also use generative AI to be more creative. We could use ChatGPT to come up with podcast topics, or areas of topics, and see if it addresses something that we didn't think about, and then make a better script using both. We'll see. That's episode 417. Thanks for listening.
I have enjoyed teaching about investing on this podcast for over eight years now, but I also love to write. There's a benefit to writing over podcasting, and that's why I write a weekly email newsletter called The Insider's Guide.
In that newsletter, I can share charts, graphs, and other materials that can help you better understand investing. It's some of the best writing I do. I spend a couple of hours on that newsletter each week, trying to make it helpful to you. If you're not on that list, please subscribe. Go to moneyfortherestofus.com to subscribe to the free Insider's Guide weekly newsletter.
Everything I've shared with you in this episode has been for general education. I've not considered your specific risk situation, and I've not provided investment advice. This is simply general education on money, investing, and the economy. Have a great week.