This is MacroVoices, the free weekly financial podcast targeting professional finance, high net worth individuals, family offices, and other sophisticated investors. MacroVoices is all about the brightest minds in the world of finance and macroeconomics telling it like it is, bullish or bearish, no holds barred. Now, here are your hosts, Erik Townsend and Patrick Ceresna.
MacroVoices Episode 473 was produced on March 27, 2025. I'm Erik Townsend.
Freelancer.com founder and CEO Matt Barrie returns as this week's feature interview guest for an update on the ever-changing world of artificial intelligence. We'll discuss disruptions in the business models for both hardware and software, the advent of AI agents, why the AI companies can't seem to make a profit even at the $200-a-month subscription level for pro-tier users of ChatGPT, and what the future holds for AI, particularly with regard to privacy of personal data. I'll also be moderating a debate on capitalism versus socialism for Zero Hedge on Thursday evening, March 27th at 7 p.m. Eastern Time, with Rutgers University professor Ben Burgis arguing in favor of socialism and Libertarian Institute managing editor Keith Knight arguing in favor of capitalism.
And yes, there will be a replay video made available for those of you who don't hear this announcement in time to watch the live stream on Thursday evening. Be sure to stay tuned for our post-game segment after the feature interview when Patrick and I will continue our discussion about portfolio hedging strategies.
And I'm Patrick Ceresna with the macro scoreboard week over week as of the close of Wednesday, March 26, 2025. The S&P 500 index up 65 basis points to 5712. Over the last week, the market short squeezed higher back to its 50-day moving average.
We'll take a closer look at that chart and the key technical levels to watch in the post-game segment. The US dollar index up 115 basis points, trading to 104.66.
The May WTI crude oil contract up 410 basis points, trading at 69.65. The retest of the 2024 lows held as oil bounced higher. The May RBOB gasoline up 323 basis points to 2.24. The April gold contract down 62 basis points, trading at 3,022.
Copper up 275 basis points, trading at 5.24. COMEX copper at a new all-time high, but diverging from global copper prices. Uranium prices down 123 basis points, trading at 64.25. The U.S. 10-year Treasury yield up 10 basis points, trading at 4.35%. And the key news to watch this week on Friday is the Core PCE Price Index.
And next week, we have the ISM manufacturing and services PMIs and the U.S. jobs numbers. This week's featured interview guest is Freelancer.com founder and CEO Matt Barrie. Erik and Matt discuss the state of the AI landscape, the progression of the functionality, and the investment opportunities in AI. Erik's interview with Matt Barrie is coming up as MacroVoices continues right here at MacroVoices.com. ♪
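For anyone unfamiliar with the basis-point convention used in the scoreboard above, here is a quick sketch of the arithmetic. The S&P figures are the ones quoted in the scoreboard; the function name is my own:

```python
# Recovering last week's closing level from a basis-point move.
# A basis point is 1/100th of a percent, so a 65 bp gain means the
# index rose by a factor of 1 + 65/10_000.

def previous_close(current: float, move_bp: float) -> float:
    """Back out the prior close implied by a basis-point change."""
    return current / (1 + move_bp / 10_000)

spx_now = 5712.0     # S&P 500 close, Wednesday March 26, 2025 (from the scoreboard)
spx_move_bp = 65.0   # "up 65 basis points"

print(round(previous_close(spx_now, spx_move_bp), 1))  # prior Wednesday's close
```

The same formula applies to every instrument in the recap, with negative `move_bp` for the decliners.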
And now with this week's special guest, here's your host, Erik Townsend. Joining me now is Freelancer.com founder and CEO Matt Barrie, who's also become a world-renowned expert on artificial intelligence. Matt just published a fantastic article, which I would consider a must-read. It's called
AI of the Storm. That's linked in your Research Roundup email. If you don't have a Research Roundup email, it means you're not yet registered at macrovoices.com; just go to our homepage, macrovoices.com, and click the red button above Matt's picture that says Looking for the Downloads. Matt, why don't we start with the state of the union, if you will. What has changed in the world of AI in the six months since we last had you on? Well, it's been a dramatically changing landscape in the last six months. As it turns out,
you can assemble a ragtag team with a relatively modest budget, even working on AI as a side project, and deliver a model which can challenge the state of the art of what has been coming out of Silicon Valley. You're seeing this not just with independent private companies launching models here, there, and everywhere, but you're also seeing it with open source efforts. And in fact,
starting a foundational AI model seems to be akin to opening yet another Thai restaurant, albeit potentially a very good Thai restaurant, in Thailand, while at the same time handing out the recipe book and the business plans. You're seeing efforts come from left, right, and center that you wouldn't expect. Elon Musk did it the traditional way: he managed to assemble a very large data center, get access to the hardware, and with Grok 3 has now produced one of the leading models, really giving OpenAI a run for its money in the US. But,
but you're seeing efforts from all around the world tackling different parts of the problem, whether it's a team in France called Mistral, or, now the clear and present threat, coming directly out of China with the likes of DeepSeek, where literally just out of nowhere late last year, a team of 160 ragtag engineers at a hedge fund worked on a side project and dumped out a model, DeepSeek-V3, and then a second model, DeepSeek-R1,
which really challenged the state of the art of OpenAI's foundational models. And they managed to do so on a fraction of the budget. It's been rumored that the training for DeepSeek involved about 2,000 GPUs when it would normally take 20,000 GPUs. They achieved a 10 times efficiency in the training of these models through some smart optimizations under the hood, a bit like taking a race car and doing some tinkering with the engine. They managed to speed up the training about 10 times. And as a result of that, the training budget is rumored to be around $5 or $6 million, which blows out of the water some of the latest we've seen in the Valley, where training models is on the order of $100 million. So you're seeing all these competitive threats come out of nowhere, and models are being released at a remarkable rate. I think just before we started the podcast today, you commented that another DeepSeek model was launched overnight, which people are trying to get their hands on and understand. But there's certainly competition coming from left, right, and center.
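Matt's efficiency claims are all ratios, which makes them easy to sanity-check. A back-of-envelope sketch, using only the rumored numbers quoted in the conversation; none of these are audited figures, and the variable names are my own:

```python
# Back-of-envelope check of the rumored DeepSeek training economics.
# All inputs are the figures quoted in the conversation (rumors, not
# confirmed data); the point is only that they are mutually consistent.

deepseek_gpus = 2_000        # rumored GPUs used for the training run
typical_gpus = 20_000        # what a frontier run would normally take
deepseek_budget_musd = 5.5   # midpoint of the rumored $5-6 million
valley_budget_musd = 100.0   # order of magnitude for a Valley training run

gpu_ratio = typical_gpus / deepseek_gpus                 # hardware efficiency
cost_ratio = valley_budget_musd / deepseek_budget_musd   # budget efficiency

print(f"GPU efficiency: {gpu_ratio:.0f}x")
print(f"Budget ratio:   {cost_ratio:.0f}x")
```

The hardware ratio comes out at exactly the "10 times efficiency" Matt cites, and the budget ratio lands in the same ballpark, so the rumored numbers at least hang together.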
At the same time, these models are getting a lot more sophisticated. They're now multimodal. That means they can take in text, they can take in images, and they can produce text and images, or take in PDFs and a range of different modalities and output in different modalities directly, so you don't really have to have a translation step in between. And simultaneously with all of that, there's been a cataclysm at the hardware layer, which we can talk about in a second, coming out of China as a result of US sanctions.
Matt, people in the AI community talk about a progression of functionality that starts with something called chat, then it goes to reasoning, and then it goes to agents. What do those three words, chat, reasoning, and agents, mean in the context of AI? Where are we now in that progression, and what comes next?
Well, chat really was the evolution of taking these models and putting a chat interface on top of them. That's when GPT became ChatGPT, and all of a sudden consumers could quickly query the AI, where previously it was really in the domain of computer science. And that's when we had that explosion in 2023 where people said, oh my God, this is actually now very, very interesting, particularly as the models got sophisticated enough, consumed enough training data, and had enough processing power behind them to give answers that were actually interesting to the average person and not just to people in the realm of computer science. That has led to
a bit of an arms race in these models. And, you know, as I mentioned in the last episode, the training budgets were going up by an order of magnitude, starting to hit $100 million per training run. The data requirements for these models were going up orders of magnitude with each of those training runs,
and the complexity was going up by orders of magnitude. And that led to a lot of these competitive efforts in this hyper-competitive Thai restaurant strip in Bangkok, effectively, having to come up with better and more efficient ways to get better answers out of these models. And so that led to the development of a whole class of models, which are these reasoning models. These are the ones where it's not so much stepping up the compute by orders of magnitude anymore; it's really getting these models to break down the query into a series of tasks and really think through the logical chain of thought in order to try and get a better answer, whether it's a scientific problem or a mathematical problem or whatever it may be. So this is, for example, like the O series coming out of OpenAI, where you can click on a little button while it's running and see its thought process as it goes down and thinks about your query, maybe asks some clarifying questions, works out how it would go about
this problem, and works through the thought process. And that reasoning is very akin to how a human might break down a problem. If you're sitting in an exam and you get asked a question to write an essay on something, you probably won't just jump in and write the essay straight away. What you'll do is think out an outline, think through the structure: what would be the introduction, what would be the conclusion, what are the key points I want to present, before I actually go write my essay. So that's basically, at a high level, what these reasoning models are. And that reasoning, together with multimodal capabilities, has now led to these agents being sophisticated enough, and good enough in terms of the quality of their responses, to be able to do human workflows. So, for example, tier one support at a call center for a bank.
A lot of these sorts of jobs, you could probably write out the entire job function of someone answering the phones, doing customer support for maybe a retail credit card or what have you, in two or three pages of ChatGPT prompt.
And combined with multimodal, it means that if someone uploads their bank statement, or has a picture of this, that, or the other. Maybe it's an AI agent helping a telco debug customer problems; the customer could, for example, upload a photo of the modem showing which lights are flashing and which lights are not, or of the computer equipment, and what have you. And you can basically now start to build workflows and develop agents accordingly
to perform roles that maybe were previously done by people, whether it's answer the phones, take an order, do outbound lead generation, what have you, or something a little bit more sophisticated, such as starting to do the role of a junior accountant or a junior researcher or a sub copywriter and so forth. So it basically got to the point where, you know, AI can start
really lifting the productivity of people by doing a lot of the basic everyday work that they would do in their roles.
And so that basically is the agent model. It's the ability for these AI agents to fulfill the roles of what human agents would do previously. And I think in the next 24 months, you're going to see an explosion of this. There's been a little bit of a lull, I think, in terms of what people expect of the impact of AI, given they see these whiz-bang things come out of ChatGPT and the like, but they haven't really seen it in real life yet. But I think very soon the penny will drop with the general public. And it may be something as simple as, you know, calling up a bank and talking to someone over the phone, or even doing a video call. And all of a sudden that service provided
by the bank is going to be done by an AI agent. It will be done instantly with high fidelity, low latency. It will be done in the language of the person calling up. So the human computer interface gets better because now you can do customer support in any language you want, any point of time, whatever.
with high levels of expertise. And, you know, one thing we've noticed because we've got this rolling out on my company, Freelancer, is the correspondence or the interaction is about 10 times more empathetic. Because it's not a human agent, for example, in a call center with a KPI in terms of number of tickets per day they've got to answer or certain other productivity metrics, you know, the AI has infinite patience, infinite time and access to all the world's information in its knowledge base. You actually see the quality and the empathy of
the customer support replies or the sales engagements come out an order of magnitude better than what a human would ever do, because a human, for example, would never spend the time. If someone's calling into a bank with a question about their credit card, a human will never spend the time to do full research on the account, figure out everything about the customer history in the CRM, and provide effective support. They just wouldn't be able to do it. But with AI, they can. And so that's why the answers are just so much better, so much more effective. And of course, any language, any time of day, instantly on.
Let's move on to the software landscape of the AI industry itself, particularly as it relates to investors. As you said earlier, it seemed in the beginning like, boy, OpenAI did something so cool with ChatGPT that it felt like they had a huge moat. They were a monopoly. Nobody could possibly compete with them. Turned out that was wrong. Where is this all headed? Well, I think, Eric, we talked about this in the previous episode where, you
You know, when OpenAI came out with ChatGPT 3.5, it was really truly a magical moment. The whole world was just wowed by this magic box. And at the time it looked invincible. You know, it was going to be the toll booth operator on the highway to humanity's collective intelligence. Every time you have a passing thought, you send it to the AI, an API call to this magical, mythical intelligence.
And so they really went down two paths. They went down one path of having API access, which they charge for; at the moment, you know, GPT-4, for example, is like US$30 for a million tokens. They've got some cheaper models, they've got some more expensive models, but it gives you a feeling
for where they're thinking about the pricing. And then they had this consumer product, which was a freemium product. You can use ChatGPT for free, but if you want a much better model, you pay your $20 a month and you get access to that and you can actually use that in a more productive way.
The problem for them is, you know, it turned out that creating a foundational model is akin to opening a Thai restaurant in Thailand, right? So you've got all these different models that have come out of nowhere, with disruptively cheaper pricing in some circumstances, or maybe better. It turns out you don't really need to raise billions and billions of dollars in order to produce a foundational model that might rival and beat OpenAI's latest and greatest, at least on some benchmarks. I mean, China,
you know, absolutely kicked the door open with DeepSeek, when they came out of the blue and showed that with, you know, 2,000 GPUs, and difficulty obtaining more because of the sanctions between the US and China, which I'll talk about in a second, they managed to train a model with a few million dollars as a side project.
And then that technology is now online, and you can access the DeepSeek APIs, particularly in the Chinese cloud, for a fraction of the cost. I mean, as Quillman was saying, imagine that your Ubers now all cost 5 cents and you can catch an Uber anywhere you want for 5 cents, 10 cents, right? It's that sort of level of disruption in terms of pricing. So the API approach has been heavily commoditized. There are many different providers now, many different models with equivalent APIs that literally drag and drop in.
Elon said specifically that the Grok APIs are just a plug-and-play replacement for OpenAI's. And so I think that particular business model, charging pennies on a few API calls to the cloud, is being dramatically challenged.
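The plug-and-play claim rests on the fact that many providers expose OpenAI-compatible chat-completion endpoints, so switching is often just a different base URL and model name. A sketch below, with no network call actually made; the base URLs and model names reflect the providers' published documentation as best I know it, but they change frequently, so verify before relying on them:

```python
# Swapping providers behind an OpenAI-style API usually means changing
# only the base URL and the model name -- the request body is identical.
# This builds the request without sending it, to show how little changes.

import json

PROVIDERS = {
    # base URLs as published by each provider (verify before use)
    "openai":   {"base_url": "https://api.openai.com/v1",   "model": "gpt-4o"},
    "deepseek": {"base_url": "https://api.deepseek.com/v1", "model": "deepseek-chat"},
    "xai":      {"base_url": "https://api.x.ai/v1",         "model": "grok-2"},
}

def chat_request(provider: str, prompt: str) -> tuple[str, str]:
    """Return (endpoint URL, JSON body) for an OpenAI-style chat call."""
    p = PROVIDERS[provider]
    body = {
        "model": p["model"],
        "messages": [{"role": "user", "content": prompt}],
    }
    return f"{p['base_url']}/chat/completions", json.dumps(body)

for name in PROVIDERS:
    url, _body = chat_request(name, "Hello")
    print(name, url)  # only the URL and model differ; the body shape is identical
```

That structural sameness is exactly why the API layer has commoditized: a client written against one provider can be repointed at another in a one-line config change.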
At the same time, I don't think they're making very much money at all out of the $20-a-month subscription. And you can see that quite clearly when you use the product; well, you see it with all the models, really. When you use the product, eventually you get timeouts and get put in the naughty corner, and you can't make a call again for three or four hours. I mean, Claude is particularly notorious for that, where, you know, I've used six or seven or eight queries inside the Claude product from Anthropic and they say you can't use the product for another four hours. It's very clear that this business model is not economic.
If it was economic, I think you've expressed some frustration before saying, gee, why can't I just pay them some more money and get some more API calls? Like, why don't they do that? Isn't that a bit silly? I think the reason they don't do that is because it's not economic and they're trying to figure out what is actually the business model, right? So if you look at OpenAI's model, you know, they've got the freemium model, they've got the $20 a month model where you get access to a better AI, basically. So you get answers that aren't, you know, computer-like answers.
They're now coming out with a $200-a-month model, which is quite expensive for the average person, although there are some pretty amazing things going on with GPT-4.5 and deep research, which we'll talk about in a second. But now they're coming out with an agent model where, for $2,000 a month or $20,000 a month, you get access to this supposed marketplace of AI agents that will do various bits and pieces for you. Now, I don't know how successful that will be.
They've come out with a marketplace of pseudo-agents previously, with GPTs, which came out with a bunch of fanfare then quietly disappeared into nowhere because they got very little usage. But clearly there's this challenge around the business model. Charging pennies per API call, or per thousands of API calls, is not really sustainable, and there's a lot of competition coming into that. I don't think they're making a profit on the way their models are actually structured at the moment with the $20 plan.
The $200 plan is quite expensive for the average person. And now you have all this competition coming in from open source, all this competition coming in from China. And then you've got the other big shockwave that's come out of China, which is basically on the chip side.
Let's move on to the hardware evolution. In the beginning, it seemed like there was no way to do this other than to have the very latest and greatest NVIDIA. No other brand would do, but you had to have the NVIDIA GPUs and you had to have the absolute top of the line or else you just couldn't play in this game. That seems like it all changed. What happened?
So in 2019, Huawei released their own version of an AI chip. It was quite an underpowered chip compared to the NVIDIA line, but they released the Ascend 910 chip to start trying to have their own sovereignty in terms of chip supply.
The US saw that as a threat and slapped Huawei on the sanctions list, the entity list in 2020. The end result of that was a bit of a backfire in some regards. Huawei pushed on and doubled down in terms of the chip development.
and actually reproduced that chip in two years, effectively on their own silicon in their own way. Now, it didn't end up as powerful as the H100 line, but it looks like they took that particular chip, which started off as the 910, made their own version of it, and are now jamming two of those chips into a single package. It turns out that the 910C model is about 60% of the performance of NVIDIA's H100. So effectively, in two years, China managed to replicate an older flagship NVIDIA AI chip. And now they're producing it en masse, in the hundreds of thousands per year. And the thing about this is, when you combine Chinese AI software with Chinese AI hardware, you get a bit of a killer combination. And that's where you can deliver the equivalent of
OpenAI's API product for anywhere from 2% to 10% of the cost. So it's really giving at least Chinese companies, because I don't know how many Western companies will make use of this in the China cloud, access to really, really cheap inference, which is going to lead to an explosion of products that are intelligent with Chinese powered AI in them.
And then, of course, you've now got all these US chip efforts trying to go after NVIDIA's dominance. And in fact, probably one of the strongest competitive threats to NVIDIA is actually coming from NVIDIA's customers, because four of NVIDIA's customers generate about 46% of NVIDIA's revenue. And of course, NVIDIA is charging sky-high prices for its chips because it can, because it's the only game in town, or has been up until now. And so you've got everyone from Google to Amazon coming up with their own
vertically integrated chips, where they try to produce specialized versions of AI chips that are directly suited to their applications. So you've got Google's Tensor Processing Units, you've got Amazon's Trainium, you've got a whole bunch of different angles from which NVIDIA is being attacked, from its customers, from China, and from other US players. NVIDIA is still the main game in town, but we'll see how long that lasts.
Let's translate that into investor language. A lot of people wanted to be part of the AI trade: buy NVIDIA stock, hold it, it'll go up. It's gotten a lot more complicated, hasn't it? What does this mean for investors who maybe don't have your level of technical background, who want to be betting on AI as a trend, but worry that maybe NVIDIA is a little bit overdone here?
Well, I mean, it's a little bit like Cisco back in the 2000s, right? Remember, Cisco's motto was, we network networks. Back when the internet was booming, everyone thought, gosh, every single thing in the world is being connected to the internet, and Cisco is going to be at the heart of all that, running all the hubs and the routers and the switches to make those connections. Surely Cisco is going to be the richest company in the world. And off the stock went and did a parabola and hit the moon, right? But as it turns out, other people can make switches and routers and hubs, a big market attracts new entrants, and you had an explosion of low-cost network equipment coming out of China. And, you know, the Cisco bubble popped and it hasn't been back to where it was. And there's a possibility, I mean, NVIDIA still rules the roost
today, but there's a possibility that the same thing will happen, both with US entrants, with NVIDIA's current customers, and also with China. I mean, the other big trend that's happening right now is just how much AI is going to go to the edge. And I think we talked about this in a previous episode. You know, AI is pretty amazing. You have these AI functions in Word, and it helps write your paragraph for you, or in Gmail, or this, that, and the other. But I do think it's about to get to the point where AI is going to start getting a bit creepy with some of these features. I mean,
one of these products just might get a little bit ahead of itself. Like, you know, Gmail could, for example, put an LLM search interface in it, right? And you could type wonderful things into that LLM search interface. You could say things like, oh, find me that email I wrote 10 years ago about my Indian visa; I have to find my visa number for India because I'm going there next week, they've asked for that number in the visa application, and I can't find it. So you write this sort of LLM query to try and find it, because the current filters are pretty bad. But then you kind of think, okay, let's try some other queries. Like: what would be the best way to compete against my company, based upon everything you know?
And all of a sudden it spits it out, because it knows. Your email will let it figure out: what would be the best way to compete against my company? What are my biggest weaknesses? Which customer could be stolen from me the easiest, or what parts of my business model could be attacked, quite lucratively, by a competitor? And the LLM, because it knows everything, will spit that out. And I think companies might start realizing, oh, gee, do I really want all my data in the cloud, and to be accessing AI in the cloud?
I don't want the AI to know about all my customers. I don't want the AI to know about my business plans and strategy and access my documents that I've got in Google Docs and my slide decks and my Google Sheets. I don't want it knowing everything about what I do. I don't want it knowing all my customers. I don't want it running my first-tier customer support and talking to all my customers and getting all the customer data because obviously I'm making API calls to the cloud every time a chat thread –
starts up with an AI agent. Gee, all that information, you know, Google is processing, Microsoft is processing, et cetera, and so forth; that becomes a real risk, because as we've seen in the past, these companies occasionally do decide to enter industry segments and destroy what were previously their customers. They'll take out a segment like Google did with travel, for example. You know, travel was a big purchaser of Google ads, and eventually Google just thought, I'm going to go buy Sabre and then go in there and just, you know, dominate travel, right? Go to google.com/flights. You do a Google query now and it'll ask you about booking a flight, going head-on with its competitors. So that may lead to
the model flipping a little bit. Rather than these API calls to this giant AI brain in the cloud, and these gigantic data centers being built everywhere at ever-increasing cost, instead you might have this great unbundling, and AI will go to the edge. A bit like we had mainframes, and then we had desktop computers, and then we had networked computers, thin client, fat client; every cycle kind of goes from centralization to decentralization and backwards and forwards.
You may have this great decentralization of AI, where chip manufacturers like Qualcomm produce chips that can run AI models. It's not the giant GPT-4.5 or Grok 3 brain, but it's specialized for certain tasks. It keeps the data local to your device. It doesn't go out to the cloud. You retain the confidentiality and security of that data. And only once in a while do you go to the cloud, for very specialized things, when your device can't handle it, or your local edge computer within your corporate network can't handle it. And so
this is a big trend that's happening right now. You know, it's lower latency if the AI model is on your device. You know, it's obviously got all the security and the privacy advantages. You know, it's probably going to be imperative in application areas like healthcare and so forth. But I do think we could potentially enter into an emperor has no clothes moment where a lot of corporates go, you know what, I actually don't want all my data in the cloud and have the AI sucking in and training on it. I mean, you know,
You know, if you're using these software packages now, you've got to be careful. You've got to go through the settings and make sure that by default they're not training on your prompts. And a lot of these packages are a bit tricky now, where they go, oh, do you want chat history? If you want chat history, you have to let us suck all your data into the cloud for training. So you have to be kind of careful. So I think that's another big trend that could happen, and it'll be pretty interesting to see whether we're building overcapacity in data centers.
And whether or not, you know, the investment opportunity might be in those companies that are producing the edge devices and the smaller chips and the smaller models, et cetera, as well.
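The local-first pattern described above, where data stays on the device and the cloud is used only when local capability runs out, can be sketched as a simple routing policy. Everything below, thresholds, labels, and function names alike, is an illustrative invention of mine rather than a real framework:

```python
# Toy routing policy for the "AI at the edge" model: sensitive or
# simple requests stay on the local model; only non-sensitive requests
# that exceed local capability are allowed out to a cloud model.

from dataclasses import dataclass

@dataclass
class Request:
    task: str
    sensitive: bool   # contains customer data, strategy docs, etc.
    complexity: int   # 1 = trivial ... 10 = frontier-model territory

LOCAL_COMPLEXITY_LIMIT = 6  # what the on-device model can handle (assumed)

def route(req: Request) -> str:
    """Decide where a request runs under a privacy-first policy."""
    if req.complexity <= LOCAL_COMPLEXITY_LIMIT:
        return "edge"      # fast, private, low latency
    if req.sensitive:
        return "refuse"    # never ship sensitive data off the device
    return "cloud"         # specialized heavy lifting only

print(route(Request("summarize this email", sensitive=True, complexity=2)))
print(route(Request("draft strategy analysis", sensitive=True, complexity=9)))
print(route(Request("translate a public manual", sensitive=False, complexity=9)))
```

The interesting design choice is the middle branch: a genuinely privacy-first deployment refuses rather than silently falling back to the cloud, which is exactly the guarantee corporates would want audited.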
It seems to me that that introduces a whole bunch of different possible outcomes because I would think, as you've described, that would start with a lot of corporations saying, look, we want these capabilities of AI. We do not want this stuff in the cloud. We don't trust AWS. We don't trust Microsoft to have all of our data. If I was a data center operator and I saw that trend coming,
What I would be saying is, wait a minute, we need to change this architecture of what cloud computing means. And we need to say that there's segregated clouds where you can have your own private, you know, little reservation of the cloud. So, yeah, it's a data center that you're contracting for. But instead of saying I'm going to use...
you know, Google's cloud, you're going to say, I'm using my own cloud services that I'm paying a cloud provider for, but they're going to be very securely segregated. Somebody's auditing that and I'm leasing the AI software functionality to run on my data. But the people who wrote that code never get to see my data. And the fact that they can't see it is somehow guaranteed through an audited process. Seems to me like that could change everything.
I mean, that's exactly right. And that's how the data center operator is going to have to think, because I think we're going to have some pretty creepy, well, pretty powerful and pretty amazing and pretty shocking features come out in some of this AI-powered software. I mean, the big trend now is Copilot in everything. So you'll open up Excel, there'll be a Copilot in there. You open up your programming environment,
There's a copilot in there. Every bit of SaaS software in the world is going to have a copilot in there kind of helping you kind of power it along whether you like it or not. That's how it's going to work. But these copilots will know everything about you and might start getting a bit creepy and then...
For the same reason, you go to banks and there's always the frustration where people say, oh, banks are such dinosaurs. I can't share a Google Slides deck with them, I can't share a Google Doc; their firewall prohibits it. Well, there's a good reason why they do that. It's because some of these big investment banks know that there are active intelligence-gathering operations going on against the big deals they work on.
And so they don't want their documents in the cloud. They want them hosted in that local network and potentially they come across as dinosaurs by doing so, but they're protecting their data and they're protecting their customers' data by doing so.
And I think you're going to see the same thing in AI. And I think that might be a pretty shocking acceleration. And also, you know, if the next generation of handset comes out from Apple and the AI can run on it locally, and you can talk to Siri and this, that, and the other, and you don't have to go send it to OpenAI, and there are some sort of privacy guarantees around that, I think that'd be very attractive to consumers.
Matt, I want to come back to AI business models and where they're headed and why they're not profitable. Because as a user, I want very much to spend more with OpenAI. Now, what I did is I had the plus tier, which is the 20-bucks-a-month thing. Following in your footsteps, I upgraded to the pro tier, which is 200 bucks a month.
Then I asked my other trusted advisor on AI about this, and I was talked out of that. I was talked into downgrading back down to the plus tier at $20 a month because my other trusted advisor said, look, the pro tier is designed entirely for people
who are using APIs, programmatic interfaces, software developers, people developing agents and so forth. If you're just using it for chat, with the release of ChatGPT 4.5 with deep reasoning, that's in the plus tier now. You don't need the pro tier. In the beginning, you had to have the pro tier to get the o1 pro model. Now 4.5 with deep research is better than the o1 pro model, and it all comes at the 20 bucks a month level.
Now, here's the punchline, Matt. This other trusted advisor I'm talking about
is none other than ChatGPT 4.5. That's where I got the advice to downgrade: ChatGPT told me, hey, you're paying for something that you're not getting any benefit from. Now, maybe that's wrong. Maybe the deep research version of ChatGPT doesn't work unless you have the pro tier. I don't know. But ChatGPT told me that it didn't. I don't understand why they're suddenly giving me something I was willing to pay for
at a premium price at the lower price, and it seems like they can't stay in business doing that. It doesn't make sense. Meanwhile, it sounds like they're not really offering me, as a ChatGPT consumer, the benefits of upgrading from plus to pro, going from 20 bucks a month to 200 bucks a month.
really those benefits only accrue to software developers. For somebody who wants to use ChatGPT and wants to have a better experience, so it's not telling me wait four hours before you use this thing again, but it's just going at full speed. They don't seem to want to sell that to me. Are you saying it's because they can't figure out the business model to make it profitable and until they can, they don't want to sell anything because it's not working? I don't get it.
Well, I mean, I think the starting point is that none of those plans are making any money in terms of a profit model. I think OpenAI last year was rumored to have burnt $7 to $8 billion in costs. And, you know, there's some data out there that kind of estimates what
their revenue was. I think they only made a billion dollars in 2023. And I think in 2024, that stepped up to about two billion, maybe, at least in the forecasts that have been released. So they're losing billions of dollars per year. I don't think they're making money on the $20 plan. I don't think they're making money on the $200 plan. From what I understand, the $200 plan gives you what they claim to be sort of unlimited usage of their most advanced models, while the $20 plan gives you some access to those models, but not unlimited, and it puts caps on you.
I will say one thing, though: the $200 plan with GPT 4.5, but particularly with deep research turned on, and only with that turned on, is a pretty
eye-opening experience. I felt the magic again, one more time, of what these advances in models can do when I really started using it. So for those that are listening, I mean, this 4.5 plus deep research basically uses this reasoning sort of model where you can ask it a query, for example.
I'll give you an example. I went to the chiropractor last week, and I'd hurt my shoulder. And when I'm in there, he said to me, oh, I've just bought this new office suite. And by the way, the office suite somehow has signage rights. And so I can put up a digital billboard, potentially, on the side of the building. I don't know how to do that. I presume I have to file a development application. That's going to be really complicated.
I said, I tell you what, if you can give me some impressions on the digital billboard, I'll write for you the plan of how you get it up and also the DA. And I pulled out my phone, I got into ChatGPT 4.5 plus deep research, and I just literally wrote, you know, I own this building, supposedly I have rights to do a billboard. I took photos of the wall, which is way up about eight stories high,
wrote a paragraph of text and hit return. It asked me a few clarifying questions. And then I kind of walked back to my office. And in 15 minutes, by the time I got to my office, it had written
15 pages with all the things that had to be done to apply for council approval and the planning approval and this, that, the other, dot, dot, dot. And then I said, okay, write the development application. And 15 minutes later, I had the development application written. So, I mean, given the amount of complexity and difficulty in trying to figure that out for yourself, and the quality of the work, it really is at research level. You know, mid-level researcher work, I mean.
So yesterday I had my HR team send me the employee handbook for the company. I punched it through deep research and 4.5 and I said, rewrite it in the style of
the top tech companies in the world. So it gave me a version that was like Valve. It gave me a version like Facebook. And this is a handbook that's, you know, 60 pages long. I had a friend the other day who's a budding movie producer, and I got it to produce various research on how the cinematography should work for a particular film he's going to make.
It's pretty mental. So there are some features in that $200 version that I think are pretty magical. If you haven't tried them, I fully encourage you to try them and pay the $200. Although it looks like DeepSeek is...
is coming out with a new version called R2, which, according to rumors, possibly blows this away. And then there'll be other competitors elsewhere, et cetera, and so forth. So really, the problem with these foundational models is they do all this effort and work and they produce, you know, a great outcome, but then it just turns out they're opening another Thai restaurant in a crowded Bangkok marketplace.
And I should just add, Matt, that for our listeners' benefit who don't use ChatGPT, what deep research refers to is a new feature that offers asynchronous access to ChatGPT. So instead of sitting there waiting for it to think of an answer, you say, I want a really detailed analysis of this subject.
What questions do you have for me that you need to know answers to before you can get to work? It asks you whatever clarifying questions it might have. And then it goes and takes 20 minutes or however long it takes. And it reports back to you and says, OK, I'm done. Your research report is finished.
And what it gives you as output is at the quality of hiring a McKinsey consultant to go and research something for you. It's completely different from what you used to get from ChatGPT back in the earlier versions.
Matt, let's move on to revisiting some of the predictions that we made in our earlier AI episodes. We both thought that there was going to be an explosion of phishing scams and other online scams that were driven by AI agents, essentially trolling people on the Internet and stealing their personal data and scamming them in one way or another.
I have noticed that with the phishing scams I get in email, the old trick of being able to spot them by the bad spelling and grammar problems they used to be plagued with, because the scammers were not native English speakers, no longer works. All of a sudden, they've all learned how to clean up their act, and the spelling and grammar no longer give them away. But I don't really perceive that there's been an epidemic of
automated AI scamming. What's your take on that and some of the other predictions that we made in previous episodes? Well, I mean, it's starting to appear. I mean, there are all these reports of families getting phone calls from their distressed daughter or son who's just been in a car accident. They've crashed into a car, there was a pregnant woman in the other car, et cetera. They're now in jail, and they need to quickly pay a lawyer who happens to be there in order to not spend the night and be released.
So that is starting to happen. You know, I have noticed in the scam calls that I receive, I'm getting a lot of calls now saying, oh, there's an Amazon delivery, can you please say one for the following thing, please say two, you know, please say something to, you know, get to the next stage in the auto prompt. And I think what they're doing,
at least I've read online that what some of these are doing is actually getting your voice, you know, so they can train a voice model to clone your voice to then create another scam. So I think we're on the cusp of seeing that big time, but I will say I'm pretty surprised I haven't seen this, you know, really at scale. But I think we're not that far away. And,
In terms of other predictions, you know, I think open source is really going to be the win here for these models. They're multiplying like rabbits. DeepSeek is fully open source, including the weights, and that's going to be built on. We're going to see a bunch of things happening there. I do think that OpenAI is going to have a very challenging time. I mean, you've got Sam Altman picking a fight with Elon Musk, obviously, over turning OpenAI from a nonprofit into a for-profit.
You've got Elon's buddy, David Sacks, now as the AI czar, who described OpenAI as the piranha of a monopoly in terms of how they operate.
And he's obviously kind of like, in a way, the overseer of the AI efforts. You've got the fact that eight of the 11 members of the founding team have left. You know, they're now tied up in this lawsuit. You know, the valuation is pumped sky high. Microsoft has access to 75% of the returns until they recoup their investment, et cetera, and so forth. Plus this very weird corporate structure they set up in the beginning. I think there's a very good chance Sam Altman will rage quit in the near future and just start up his own company before his sort of, you know,
hero to villain arc is complete, which will probably come out from the mudslinging in the lawsuit with Elon. So there's a chance that, you know, Altman will leave OpenAI, start his own company in whatever aspect of the AI space he thinks the real opportunity is, and then, you know, leave the rest of OpenAI to be kind of embraced and extended by Microsoft.
I also think that, you know, whether you like it or not, AI is going to be in every single product, and not just in your SaaS products, which I've talked about extensively. It's going to be in hardware products. You know, the security camera on the wall is going to have a little AI model built in at the edge, which will know what's happening in the scene. It will probably be able to figure out who's in the scene, what's going on, and kind of predict intent rather than just being a dumb security camera. But you're going to see that really everywhere.
And I do think you are going to have this emperor-has-no-clothes moment, for a lot of people and consumers as well. I mean, think about all the data you put into Instagram. I mean, Instagram can faithfully replicate an AI model of you, talking, speaking in high-fidelity video, as you, with your friends, et cetera, and so forth. And I think that might get a little bit spooky, because we've gone through this whole oversharing period where everyone shares pictures of themselves doing everything every five minutes. And I think they might start to realize, with some of these creepy features, that maybe it's not a good idea, because they can now
replicate my likeness, or at least anyone can who has access to my Instagram feed. And I think you might have a bit of a thing like in the Bitcoin space, where you've got, you know, this expression, not my keys, not my coins. You know, if you're hosting your Bitcoin wallet out there in the cloud rather than locally, it's actually not your Bitcoin. And I think you might get a bit of a not my AI, not my data, right? If it's not on my locally hosted private AI, you know, it's not my data anymore; it's being fed into an AI that's just going to the cloud and being used for training.
I think you're going to have, you know, competitors come in to try and crack NVIDIA's monopoly. I've talked about that. I think,
You've got a lot of regulatory chaos at the moment. The EU is out there with their regulate-first, destroy-the-industry-before-it-even-starts sort of philosophy. They've got really hefty fines, up to 35 million euros or 7% of global revenue, if you breach these rules. And they're coming up with regulations for technologies that haven't really been deployed yet. So they're really causing problems
if you're an EU AI startup. The US is a lot more laissez-faire in its approach. And then you've got China, which is being very authoritarian on the one hand, but also saying, hurry up and innovate, on the other.
Authenticity, I think, will be in high demand. You may have verified-by-humans badges starting to appear on platforms. I mean, if your Instagram feed of unemployed models in bikinis gets replaced by legions of AI thirst traps, there may be a premium in having the little badge saying this is actually a real human. Yeah. Or else OnlyFans becomes OnlyBots, which I think is not too far away.
You're going to see some interesting things happen with distribution of content. We talked before, in a different episode, about Netflix, and maybe Netflix will wake up one day and feel like Wikipedia does in the age of ChatGPT, right? Very soon you'll have the ability to type a prompt and pop out a movie or a TV series, right? So you could have, finally, Game of Thrones season eight fixed, season nine, season ten in space, with
Eric Townsend as the lead character and all your friends as the other characters, et cetera, and so on. You're going to see all sorts of spinoffs and knockoffs, et cetera, and that may break down these closed walled gardens of content distribution. And God knows everyone's sick of having to pay for Netflix and Amazon Prime and Apple Plus and this, that, the other. You were supposed to be able to go into just one service and have everything there, and that hasn't turned out to be the case. But I think you're going to see a lot of peer-to-peer content generation and so forth.
I mean, the other big thing is just that we're going to enter kind of crazy town. We're not going to know what's true and what's false. It's going to be so easy not just to fake content or images or videos of people doing things, but to do it at scale. You know, this is going to be weaponized by countries. It'll be weaponized by political parties. You know, you go to the forums of your local media organization talking about, I don't know, something in Ukraine or something happening in China or something happening in the US, et cetera. And
The thousands of people that are talking and discussing that, whether it's on X or whatever, may not be real. They may be completely synthetically generated and having a whole discussion. You wander into this chat room or forum or whatever, or discussion online or online audio conference, and it just turns out everyone there is fake and you're the only person that's real there. So I think that is going to be pretty crazy. I think we'll see some disruption in some large companies
and employers, you know, like offshore employers of people, like call centers and BPOs. Some of them will be smart, and the people who are currently on the phones will be writing agents and managing agents and queuing agents and so forth, which is kind of what we're doing in our big support organization. But I do think you'll get disruption in the banks and the telcos and so on.
I think you're going to continue to see what we're seeing with freelancers and they're now super powered, super skilled. They're all in the tooling. They're all independent mercenaries out there and they're really rivaling Western talent for getting things done just with a laptop and access to an internet connection. And I think you're going to see a bit of a backlash in some areas because there is going to be disruption in some industries where AI might steal your job or at least someone using AI might steal your job.
And, you know, the young generation will be able to adapt to this, et cetera. The old generation won't. But I think you're going to have a bit of conflict there. And I think we'll continue to be surprised by some of these new foundational models. I mean, we've talked about in previous episodes how these models are kind of black boxes, and some of these abilities kind of emerge from them and you don't really expect it. You know, when you see deep research with GPT-4.5 in action,
It's a pretty magical moment that has popped out, akin to the first time you used Midjourney or the first time you used ChatGPT 3.5. So I think that will continue in ways that we don't expect and don't anticipate, and that not even the designers of these models anticipate.
And, you know, who knows? I think, you know, gaming is going to become truly addictive. You'll be able to live in a whole virtual world, high fidelity, just like you're in the Star Trek holodeck, and have relationships with virtual characters that don't even exist. And for a lot of people, it's going to be a lot better than their mundane lives. We've talked about the dating thing previously. You know, I don't know how you could go on a dating site in 2025 and think you're talking to real people. It's all going to be bots. And if not, it might be the AI digital dating assistant bots, right,
you know, or agents for it. It's like, I'm too busy to go to these dating sites and chat everyone up. I'll just put in my AI agent and have it chat a bunch of people up and fill my calendar. And, you know, then I'll just turn up to a cafe and just meet a bunch of people at different times over the course of the weekend and avoid the whole, you know, preamble of trying to get them to come and meet me on a date. And I do think we're going to see
something kind of global, something on the large scale, either on the threat side, so we may see some global AI threat occur, whether it's mass hacking, because the AI is very good at things like hacking, or it may be someone using AI to do something like, I don't know, you know, fake the second coming of Jesus Christ. I mean, the only thing I know for sure is Siri is still going to suck, because Apple says they're not going to improve the product till 2027, which is surprising.
Matt, our Macro Voices listeners, of course, care most about what kind of actionable investment advice we might be able to offer them. It seems like it's a really difficult, and increasingly difficult, landscape now, because in the beginning, it was pretty darn easy. You know, NVIDIA just had to benefit from selling chips that were really the only game in town, that everybody needed. It's not nearly so simple now. That's
clear from this interview. Is there anything that is clear in terms of what you would invest in, if it's not as simple as NVIDIA has to benefit? I think every major industry is going to be transformed by this. And so I think some of the real money to be made is in the industry verticals. So, for example, you know, in the real estate industry, someone's going to come out of nowhere with an AI agent that does effectively a better job of managing real estate rentals than
the current state of the art, right? And you'll see this, you know, in healthcare, maybe with diagnostics, or you might see it in call centers. Someone will come out of nowhere and deliver maybe an AI call center software platform, which will replace these, you know, 10,000- or 20,000-person call centers that are run by the telcos and the banks, et cetera, and so on.
But I think there's going to be a lot of opportunity if you're kind of careful looking at each of the industry niches to see, you know, who's coming in, who really has a transformative solution. And it's going to be as disruptive, even more disruptive than it was when the internet came around or the mechanization of agriculture. So I think...
I think just look at various industry segments, see the trends, and look at the new solutions coming in. I think there'll be a lot of money made there, and far more than trying to chase a parabola of Mag Seven stock prices. It seems to me, as AI is becoming commoditized, that what's needed is, if you will, the kayak.com of AI: what they did for travel agent websites, you need the same thing for AI, where maybe somebody develops a single AI-enabled
user interface to AI. So I've got an interface that has a really robust chat history management system that allows me to do searches on my chat history and so forth. But when I type a prompt in,
What it does is it analyzes my prompt and says, okay, this particular prompt is really deep research. Let's send that to the chat GPT model. Oh, wait a minute. This prompt here, this is really looking for output, which is graphics.
ChatGPT is not as good at that as whatever some other thing is. Let's send it to that one. So it's more of an aggregator that allows me to have a single interface to AI. So I don't have to keep track of who's got the latest model and what's good at what. And it just sends my prompt to whichever AI, which becomes more and more of a commodity, whichever AI platform
in the background it thinks is best suited to that. Is anyone doing that? Well, I mean, that's where it's all trending. So it's trending to a local AI agent running in your local context, so maybe on your phone or sitting in your email or wherever it may be. It's trending towards not having one major model kind of do everything, but be able to context switch between different models based upon what the task is to smaller, specialized, better performing models for specific tasks.
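The aggregator idea being discussed here, a local layer that classifies each prompt and context-switches to whichever backend model suits the task, can be sketched roughly as follows. The keyword rules and model names are purely illustrative assumptions, not any vendor's actual routing logic:

```python
# Minimal sketch of a prompt router: classify locally, then dispatch
# to a best-suited backend. All names below are hypothetical.

def classify_prompt(prompt: str) -> str:
    """Crude keyword classifier; a real router might use a small local model."""
    text = prompt.lower()
    if any(word in text for word in ("image", "graphic", "diagram", "draw")):
        return "image"
    if any(word in text for word in ("research", "report", "analysis")):
        return "deep_research"
    return "chat"

# Hypothetical mapping from task type to the best-suited backend model.
ROUTES = {
    "deep_research": "deep-research-model",
    "image": "image-generation-model",
    "chat": "general-chat-model",
}

def route(prompt: str) -> str:
    """Return which backend should handle this prompt."""
    return ROUTES[classify_prompt(prompt)]
```

A production router would replace the keyword rules with a small local classifier and track which model currently leads on each task, but the dispatch structure stays the same.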
And ultimately what you've described is really what Siri should be. I mean, it really should be something as simple as Siri that sits there monitoring all your communications, monitoring all your emails, potentially even being your AI chief of staff, right? Literally looking at all the things that are coming into your context in a given day and then going off and doing research and suggesting smart things in order to kind of really turbo power you. That's kind of where it all should be heading with the likes of things like Siri. Yeah.
You know, I want that functionality and I want it badly, but I absolutely will not tolerate having it be based on an implementation that puts my personal data in somebody else's cloud. Is my generation that feels that way likely to prevail or do we have a younger generation that's going to drive most of this who's not so concerned about having their personal data being maintained by Apple or Google or somebody in their cloud?
Well, I think there's a trillion-dollar opportunity there for a company to come up and do that properly. Of course, that may be at odds with what, you know, governments and security agencies want. But I certainly think there's a clear market demand. I mean, Apple was kind of trying to position themselves as that privacy-focused service provider. But, you know, the recent things that they've done, for example, in the UK, where they've folded and provided
law enforcement access where previously they wouldn't, have indicated that maybe not even they can be trusted with your data. But I think there's a trillion-dollar company right there with that idea.
Matt, I want to end this episode with an appeal to our listeners personally, with the same conviction that I told you back on January 30th of 2020, that there was a global pandemic coming. I want to tell those listeners who, like myself, maybe you're over 50, you kind of feel like you're set in your ways. You haven't done this AI thing. You don't really do social media either. You don't really need this stuff. It's interesting to hear about it, but you don't do it yourself.
Trust me on this. You want to embrace AI and use it. I started with really ChatGPT 3. I played with it for a month at the $20 level. I thought it was an interesting novelty. It was not compelling enough to interest me in actually using it. That's transformed for me to the point where I couldn't get through a day without using ChatGPT. I have the $200 subscription. I think I'll downgrade, on ChatGPT's advice, for now
to the $20 subscription and see if it still works as well. But I'll happily go back to the $200 subscription if I need to in order to maintain the functionality that I have. So I really encourage people to check it out. Matt, for people who are willing to take that advice,
Where do they start? Is it that you have a Twitter subscription so you can use Grok 3? Is it OpenAI? What's the best place for somebody who doesn't yet have a paid AI subscription to get their feet wet and find out what this is? And I guess I would also couple into that question,
What do you do in order to, as a process, to really embrace this? Because something I found was it took at least a month to develop the habits to realize, oh, wait, I shouldn't wonder about that. I should just ask AI. I shouldn't ask my IT guy to help me solve an IT problem. I can just ask ChatGPT. It'll tell me step by step what to do. How does someone who hasn't tried it yet get their feet wet?
Well, I think there are two angles to that. The first is on the consumer side, and the second is on the business side. So on the consumer side, pretty much the AI that I think is state of the art at the moment is, as I've talked about, you know, ChatGPT, getting access to the 4.5 model with deep research, which is just a subscription from OpenAI. I think Claude is pretty good
for writing marketing copy. And that's, you know, Claude 3.5 or 3.7. Again, that's $20 a month. Just go to claude.ai. It has a few problems. It tells you off and gives you ethics lessons every once in a while and puts you in the naughty corner for a few hours. But that's pretty interesting. And also Grok, which is either through your Twitter subscription or on grok.com. So on the consumer side, I encourage everyone to try those. There are other interesting things you can look at, such as Midjourney for image generation,
and ElevenLabs for voice synthesis. On the small business side, I think in the next two years, I mean, just very simple applications of AI agents answering the phones, taking a credit card, processing an order, or making a booking in a calendar. Every single business in the world, whether small or large, will be doing this, whether it's a restaurant booking or a hotel reservation, or even just a small business like a hairdresser answering the phones and so forth.
You can go to freelancer.com/ai, where you can actually get live demos of state-of-the-art AI agents, and freelancers on our site will build them for you. It's no different from web development or app development. Now, AI development is the same sort of budget, same sort of complexity. You just go to freelancer.com/ai, try some of the demos there, and you'll be pretty blown away. And it's very accessible and very inexpensive for a business to adopt.
And Matt, as we close, I just want to add my personal endorsement for that as well. I've been hiring people from freelancer.com to assist me with graphics and to assist me with a bunch of things. They all seem to actually be operating AI for me. Their skill is knowing exactly how to use the right AI tool in order to do something.
that I don't know how to do. And I have no objection to that whatsoever. Matt, I can't thank you enough for another terrific interview. Before we close, just tell our listeners what your Twitter handle is or X handle, I should say these days and how they can follow your work.
That's right. I still can't call it X. I call it Twitter as well. But it's Matt, M-A-T-T, underscore Barrie, B-A-R-R-I-E, on Twitter. And I've also published my latest essay on Medium, if you just search for me there. It's called AI of the Storm, as in Eye of the Storm. And that's also linked in your Research Roundup email. Patrick Ceresna and I will be back as Macro Voices continues right here at macrovoices.com.
Now, back to your hosts, Eric Townsend and Patrick Ceresna.
Eric, it was great to have Matt back on the show. Now let's get to the chart deck. Listeners, you're going to find the download link for the postgame chart deck in your Research Roundup email. If you don't have a Research Roundup email, that means you have not yet registered at macrovoices.com. Just go to our homepage, macrovoices.com and click on the red button over Matt's picture saying looking for the downloads. Okay, Eric, what are your thoughts here on the equity markets?
Well, Patrick, as I explained last week, the big thing to watch for on the S&P futures chart was a close above the 200-day moving average, and specifically whether or not a close above that level would bring on an acceleration to the upside or if we'd see a failure at that level. Well, we got our first close above the 200-day on Monday, but instead of accelerating to the upside, Tuesday saw only a very slightly higher high than Monday.
That was the first big tell that all was not well. Then Wednesday gave us a big red candle to close back below the 200-day moving average, testing the 8-day moving average during intraday trading and then closing right on top of the 5-day moving average.
Now, we still don't have a decisive technical signal until we see a close below the lowest of the three short-term moving averages, which right now is the 13-day moving average at 5698. But so far, all indications are that the failure of this rally just above the 200-day moving average, which both Patrick and I warned about last week, may already be upon us.
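For listeners who want to reproduce the moving-average checks described here, the arithmetic is simple. A minimal sketch follows, with made-up closing prices, not actual S&P 500 data:

```python
# Illustrative moving-average signal check. Prices are hypothetical.

def sma(prices, window):
    """Simple moving average over the last `window` closes."""
    return sum(prices[-window:]) / window

closes = [5600 + i for i in range(20)]   # hypothetical closes; last = 5619
level_13d = sma(closes, 13)              # the 13-day moving average
decisive_sell = closes[-1] < level_13d   # a close below it would be the signal
```

The same function gives the 5-, 8-, 50-, and 200-day levels just by changing `window`.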
Eric, I want to just take a look at this market from one context, right? Which is, okay, it could always rally all the way back to previous highs. Things could turn bullish and the market can go ripping higher. It'll get above all of its moving averages and we'll have to acknowledge a new bull trend. But I wanted to step back and ask, well,
is this correction already over? Is this enough for what is a typical market correction? If we are underway in some sort of a bigger, deeper mean reversion of this huge multi-year bull market that we had over the last two years, what do these corrections look like? And so what we had was this three-week market drop that was about 10%.
And it was incredibly oversold, overdue for a market bounce. And now we got a market bounce. And so you can see that on page two, where we have the chart of the S&P 500, as we've now approached a 50% retracement and the 50-day moving average. So what I wanted to do was go back in history and look at previous market corrections and how they started.
And so on page three, I went back to the start of the 2022 bear market. And that's just recent. It was just like three years ago. And what we want to highlight is back in January of 2022, the market started with about a three week market drop that was about 12%.
And in the subsequent two weeks, we had a 9% market rally that was like a short squeeze reflexive rally that basically came right back to the 50 day moving average.
before the whole market rolled over and began its next leg of selling. On page four, I go back to the Christmas massacre back in 2018. And back in October of 2018, we had a market that dropped 16% in the first 30 days.
After such a violent market drop, the market proceeded to have an 8% market rally in the following two weeks that once again came back to retest the 50-day moving average. After that test, over the following two months, the market proceeded to have another major leg down.
Then, on page five, I wanted to go to the top of the 2007-2008 bear market. What I want to highlight there is that in October of 2007, after putting in its market top, the market proceeded to have an 11.5% drop in just over a month.
It then proceeded to have an 8.5% market rally, a short squeeze, that lasted just about two weeks.
So I think that every one of our listeners can start to see a pattern, which is there's this first leg down and then a market bounce. So on page six, I have the current market, which is we basically had a 10.5% market correction. And so far, we've had about a 6% market rally as we've approached this 50-day moving average.
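Each of the analogs above reduces to two numbers: the depth of the first leg down and the size of the reflex bounce. A quick helper can compute both; the peak, trough, and bounce levels below are approximate round numbers chosen purely for illustration, not levels quoted in the discussion.

```python
def drop_and_bounce(peak, trough, bounce_high):
    """Percent depth of a first leg down from `peak` to `trough`,
    and the percent size of the subsequent reflex bounce to
    `bounce_high`, as used in the correction analogs."""
    drop = (trough - peak) / peak * 100
    bounce = (bounce_high - trough) / trough * 100
    return round(drop, 1), round(bounce, 1)

# Illustrative levels only: an index falling from ~6150 to ~5500,
# then bouncing to ~5780
current = drop_and_bounce(6150, 5500, 5780)
```

Run against the historical episodes, the same function reproduces the 12%/9%, 16%/8%, and 11.5%/8.5% pairs Patrick cites.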
The point that I want to highlight is that so far this analog has just fit perfectly with that of all of the prior kind of starting points of something bigger.
And so this inflection point around the 50-day moving average, as we're up 6%, 7%, 8% off of the lows, is actually going to be a very critical moment, because if this is a bigger sell-off, if this is a bigger market turn, then going into the first week of April is a highly vulnerable moment where the market can turn. And we have things like the tariff announcements on April 2nd and the roll-off of this JP Morgan options whale, whose dealer positioning has influenced short-term flows; all of those things will be recalibrated.
So next week is going to be just critical for the markets. I can't think of a more interesting, or more important, time for our listeners to consider what positions they have in their portfolios and whether they should be hedged.
Patrick, with what looks to me like a failed rally topping out here, I want to revisit our hedging conversation from last week. Now, you talked about collar strategies and you explained last week that buying puts to outright hedge downside risk was maybe a dubious trade because of the elevated VIX, meaning that you'd be paying through the nose for those puts.
But the VIX dropped back into the teens this week, and meanwhile, it looks to me like the S&P might have already peaked, and it's already back below its 200-day moving average. So it seems to me that while last week might not have been a smart time to outright hedge downside risk with puts or put spreads, maybe this week is. Is that right? And if so, tell us the best strategies to take advantage of this vol dip and potential dead cat top.
Let's break this down. First, let's address the issue of the cost of hedging, and hedging costs are tied to volatility. So on page seven, I have the VIX chart, and we can see that even a week ago, we were up in the mid-to-high 20s on volatility, at peaks near 28% implieds.
And now we've gotten down to the 17-18 handle on the VIX. So we've already had a substantial reversion of volatility, such that options that were very expensive for hedging just a week or two ago have come back in. Now, we're not down at the 12% to 15% level on the VIX,
which is typically where we would see it during bull markets, but this isn't necessarily a bull market. So we are actually talking about a level where hedges are once again at least somewhat reasonably priced. But on page 8, I wanted to look at the longer dated VIX. In this case, the 6-month VIX.
And while we did also have a spike in the volatility index for longer-dated options, up to about the 25 level, we've quickly come back down to the 20% level on those longer-dated VIX readings. Now, what you can see on this chart is that over the last six months, the 19% to 20% range has been the lower end of the cost of these options. And so we're in a situation where today, buying a longer-dated hedge is actually reasonably priced, in line with what it would have cost over the last six months. So right now, we don't actually
have an options market that is pricing in big amounts of fear, and therefore the cost of hedging is still very reasonable. Now, you specifically referenced the collar strategy, and I actually love the collar strategy. It's a fantastic way to reduce the cost of your hedges: you're buying a protective put to hedge out the downside risk,
and you're helping finance the cost of that put by selling calls up above. Now, often this cannot be done at zero cost when you're going equal distances out, because of the natural skews in the market.
But even just substantially reducing the cost of your hedge goes a long way. So first of all, before we do an example, I just want to address the simple question of why collar at all? Well, if you're in a position where you believe that over the short to intermediate term the market has no upside momentum, or has limited short-term potential on the upside, then selling away a part of that upside potential to help finance the cost of that downside insurance is a complete no-brainer.
So as an example, I'm just going to use the S&P 500. Now, the beautiful thing is collars can be built on individual stocks and on individual ETFs; in this case, I'm just going to use the cash index for simplicity. At the time we're recording this, the SPX was trading around 5,700. And so if we did a collar where we sold a covered call 5% higher and bought a protective put 5% lower, that can be done at a net cost of 20 S&P points, which is about 0.35% of the position, going out a month and a half.
Now, for that cost, you have basically contained all volatility you're going to experience, both the upside and downside, to within a 5% band over the next month and a half. To some people, this will give them a little bit of ease in holding through some of these very turbulent times. But to me, it's not just about reducing portfolio volatility.
But it's about the fact that if the market was to have a big downside event, it's that moment when you want and need cash. And that cash comes from closing out the collar hedge. And that cash injection allows you to have money to buy the opportunities of that dropped market. So if you don't hedge...
then you're feeling all of this pain and you're strapped for cash on the downside. While if you're hedged, you get all of this extra cash. You can use it either to do some form of dollar cost averaging or stock repair strategy, or simply to take on brand new opportunities. And that's something that is often overlooked when considering whether or not to hedge a position.
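Patrick's 5%-wide collar on a 5,700 SPX position can be sketched as a payoff-at-expiry function. The 20-point net cost and the 5% strike distances are taken from his example; everything else is a simplified illustration that ignores dividends, carry, and early assignment.

```python
def collar_pnl(spot_at_expiry, entry=5700.0, put_strike=5415.0,
               call_strike=5985.0, net_cost=20.0):
    """Expiry P&L in index points of a collar: long the index at
    `entry`, long a put 5% below (5415), short a call 5% above
    (5985), paying `net_cost` points for the option pair.
    Simplified, expiry-only payoff per Patrick's example."""
    index_pnl = spot_at_expiry - entry
    put_pnl = max(put_strike - spot_at_expiry, 0.0)    # protection kicks in below 5415
    call_pnl = -max(spot_at_expiry - call_strike, 0.0)  # upside sold away above 5985
    return index_pnl + put_pnl + call_pnl - net_cost
```

The payoff makes the "5% band" concrete: the loss is floored at the 285-point distance to the put strike plus the 20-point cost, and the gain is capped at the 285-point distance to the call strike minus that cost, no matter how far the index travels.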
Patrick, after you had a great turnout for your stock repair webinar, you're doing another special webinar for your Big Picture Trading members. Where can listeners find that and is it free to attend for Macro Voices listeners? Thanks, Eric. With the market bounce now testing the 50-day moving average and the VIX down in the teens again, it is a very opportune time to do a webinar again on hedging downside risks and or profiting from a further decline.
We will be hosting a webinar on Tuesday, April 1st at 4 p.m. Eastern Standard Time. Listeners can find the link in the Research Roundup email and or visit the homepage of bigpicturetrading.com.
All right, Eric, let's move on to the dollar. Tuesday saw our first close above 104 on the Dixie in weeks, but then we closed just below it again on Wednesday. Now, if we can get back above 104 and stay there for a few days, it's a likely tell that we might be headed back up to test 106.
Eric, it's amazing how much of a difference one month makes. When we go back a month ago, we were trading above the two-year trade range. It was looking like just a consolidation wedge for a continuation pattern. There was even an attempt to break back above the 50-day moving average. Everything still looked like it could turn bullish.
Suddenly, over the last three or four weeks, not only have we had a violent breakdown deep into the trade range, but the rallies have been incredibly weak.
And so in this kind of an environment, the price action has dynamically shifted, very distributive in nature. I'm taking a very neutral stance here on the dollar. There is vulnerability that we could go to the bottom of the trade range near 100, or even bounce back to 105. But overall, the window for a big bullish impulse higher on the dollar has quickly closed.
And so at this juncture, I would be taking a very neutral stance. And I think that new trends will develop later in the year. But right now, let's just see how high this bounce goes and whether it triggers another round of selling to go test the 100 level down below. All right, Eric, let's move on to oil.
Well, overnight, we were above the 100-day moving average on WTI at 69.71, but as of recording time, we're back below it again. As Lynn Alden said last week, we're in a headline-driven market, and to be sure, the headline that the Trump administration was demanding that Iran shut down all of its uranium enrichment, both civilian and military, definitely helped us get here to this elevated level.
So is this the beginning of a gigantic upside move driven by geopolitical escalation that could take us into the 90s? Or is this a blip that's soon to retrace back down to the 52-week lows around 65? Well, that's going to depend on those headlines. If there's further escalation with Iran, anything's possible to the upside. Absent further escalation, I think we probably drift back down to the lows, if not lower.
Well, Eric, we were long overdue for a bounce, and we finally got it. The 69 to 70 level was a bare-minimum bounce. Not only was that the 50-day moving average, but it was the Fibonacci retracement zone of the last leg of selling in late February into early March. Now that we've gotten here, we could obviously still tack on a few extra dollars to retest the top of the trade range established in the fourth quarter of last year, which was about a $72 high.
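The Fibonacci retracement zone Patrick references can be computed directly. The ~$72 high and ~$65 low are round numbers taken from the conversation; the exact endpoints of the late-February-to-early-March leg are an assumption for illustration.

```python
def fib_retracements(leg_high, leg_low):
    """Standard retracement levels of a completed down-leg,
    measured back up from the low toward the high."""
    span = leg_high - leg_low
    return {r: leg_low + r * span for r in (0.382, 0.5, 0.618)}

# Illustrative WTI leg: down from ~72 to ~65
levels = fib_retracements(72.0, 65.0)
```

On these assumed endpoints, the 50% and 61.8% retracements land at roughly 68.5 and 69.3, which is why the 69-70 area doubles as both the retracement zone and the 50-day moving average test.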
But I think there's going to be a little bit more work in the months to come for oil to truly establish the basing accumulation pattern that would allow a new bullish move. And so I think it's too early to speculate that a new bull run is already underway. This is definitely an oversold bounce; now we want to see whether accumulation starts to show itself.
All right, Eric, let's touch on gold here. Well, first of all, there's a little bit of an illusion on the chart. If it looks to you like this was a gigantic up week for gold, it's probably just that you're seeing about $30 of contango that was captured in the contract roll. So that's not really an increase in the spot price of gold. It's just that the June contract, which we've rolled into, was at a higher price all along than the April contract that we just rolled out of.
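Eric's roll-illusion point amounts to a single subtraction: strip the contango captured in the roll out of the front-contract change to approximate the spot-equivalent move. The ~$30 contango figure is from his comments; the week-ago close below is an invented number for illustration only.

```python
def spot_equivalent_change(new_front_close, old_front_close, roll_contango):
    """Approximate underlying move when the continuous chart has
    rolled from one contract into a later, higher-priced one:
    subtract the contango captured in the roll."""
    return (new_front_close - old_front_close) - roll_contango

# Chart shows June gold at 3,022 vs. a hypothetical 3,000 close on the
# April contract a week ago, with ~$30 of that gap being roll contango:
move = spot_equivalent_change(3022.0, 3000.0, 30.0)
```

In this made-up case the chart shows a $22 gain while the spot-equivalent move is actually slightly negative, which is exactly the illusion being described.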
Moving on to the technical indications, after gold was signaling an extreme overbought condition on the daily chart, the slow stochastics are back below overbought levels and pointed down now, suggesting either a continued consolidation or an outright move lower might be in the cards short term. Meanwhile, the RSI is still overbought, but it's not extreme overbought like it was a couple of weeks ago.
On the weekly chart, however, the RSI is still well above 80, a sign that we could see a prolonged period of consolidation or lower prices in the intermediate term.
Now, I'm definitely not calling a top here. Patrick, as you've said, there could easily be another $200 of upside from here before this market hits a cyclical top. But I do think the near-term bullish argument is getting less and less compelling as we've already come so far so fast to the upside.
Longer term, I remain extremely bullish, but for now, I'm continuing to take profits, raising cash earmarked for buying uranium at lower prices, which I think are still to come.
Eric, on page 11, I have that chart on gold, and what a beautiful bull trend this has been. Just higher highs, higher lows. The accumulation is very distinct. Now, obviously, some people are trying to highlight that it seems a little overdone and overstretched. I get it. But there is zero sign of that in the price action, and the measured moves are still targeting levels as high as $3,200 in the interim.
And so anticipating that gold will be able to continue this short-term trend makes sense. Often, something with this strong a bull trend can end with some sort of acceleration to the upside as it becomes incredibly popular. It'll be very interesting to see whether we start seeing an acceleration like that, or whether it gets very heavy and creates topping formations that show some form of exhaustion. Right now, it's a very clean bull trend, and neither of those is evident.
Eric, last week we had a little bit of action in uranium as it bounced from some very oversold conditions. What are you thinking here? Well, last week I cautioned that while I'm uber bullish intermediate to long term, the daily stochastics and RSIs were climbing back up toward overbought territory. Well, they got there this week. They peaked and, as anticipated, they turned back down. So now I think we're in the midst of another swing lower.
The big question is whether we put in a new lower low below the early March low, or if we put in a double bottom, or if this next swing that we're seeing beginning right now over the next couple of weeks takes us down to a new higher low, which would be the first higher low in this cycle. Which of those things we get next is going to be a strong tell on whether or not the bottom is actually in.
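The three scenarios Eric lays out for the next swing low can be written as a tiny classifier. The tolerance band for calling a double bottom is an assumed parameter, not something from the discussion.

```python
def classify_next_low(prior_low, next_low, tolerance=0.5):
    """Label the next swing low relative to the prior (early March)
    low, per the three scenarios: lower low, double bottom, or the
    cycle's first higher low. `tolerance` (in price units) is an
    assumed band for calling a double bottom."""
    if next_low < prior_low - tolerance:
        return "lower low"
    if next_low > prior_low + tolerance:
        return "higher low"
    return "double bottom"
```

As the discussion notes, which label the next swing earns is the tell on whether the bottom is actually in.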
But I also continue to feel that the real risk here is a broader market risk event. If we get into a full-on cyclical bear market in equities, and that potentially topping-out dead cat bounce we saw just above the 200-day moving average doesn't recover pretty soon, it might mean that uranium miners are about to follow the broader stock market down to considerably lower lows.
If all of that happens, I'll be eagerly adding to my already overweight longs, but there's definitely room for a whole lot more downside here if the broader market tanks.
Now, I have no idea how long this rout will last, but I'm supremely confident that these are already bargain prices on uranium mining shares. And I'm hopeful, actually, that even better bargains are still to come before this is over, because I'm definitely keen to buy more uranium here, especially if we get down to new lower lows.
Well, Eric, while I have a very bullish view on uranium in the long term from a fundamental perspective, over the last few months we've been very clear in identifying that this has been trading below the 50-day moving average in a distributive pattern: lower highs, lower lows. But the thing that we observed over the last few weeks is that the downside momentum has dissipated. There's almost an exhaustion of selling, where there are just no more marginal sellers on the downside.
On the Sprott Physical Uranium Trust, the area around the $20 level looks like a logical place to find support.
But it's going to take a while, very similar to oil, for this to start turning bullish. We have to see bottoming formations, accumulation patterns, anything that shows signs of life. This bounce right here, while it's nice to see from a very oversold condition, is still far too early to justify the conclusion that some new bullish impulse is underway. Finally, Eric,
We talked about the idea of how copper is trading in the COMEX versus the LME. What are your thoughts here? Well, Patrick, I was musing on your comments last week about the mystery of why there's been so little participation by the mining stocks in this massive rally in copper futures on the COMEX futures exchange.
The reason I emailed you to suggest that we include a chart in the deck this week showing both COMEX and LME pricing on copper is that I think the massive rally in COMEX copper futures happened because that HG contract is for U.S. delivery. And I think that what it's reflecting is traders front-running Trump's tariffs on copper imports.
So for those who are convinced that the Trump tariffs won't last forever, there's a really compelling pairs trade setting up here, but only if you're willing to ride out a possibly wild ride on the J curve of pain before profit.
Now, if tariffs are not forever, then this huge disparity between the LME and COMEX pricing won't last forever either. That means you could put on a levered pairs trade going short COMEX copper futures and long LME copper in the same notional principal amount on the rationale that they have to reconverge eventually after this tariff situation blows over.
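Sizing the "same notional principal" pairs trade Eric describes means converting the two contracts' very different units into dollars. A sketch under stated assumptions: the COMEX HG contract is 25,000 lb and the LME copper lot is 25 metric tonnes (verify current specs with the exchanges), and the prices below are illustrative, not quotes from the episode.

```python
def pairs_trade_legs(target_notional_usd, comex_price_per_lb, lme_price_per_tonne):
    """Size a short-COMEX / long-LME copper pair to roughly equal
    dollar notional. Assumed contract sizes: COMEX HG = 25,000 lb,
    LME copper = 25 metric tonnes per lot (verify before trading)."""
    comex_notional = comex_price_per_lb * 25_000
    lme_notional = lme_price_per_tonne * 25
    short_comex = round(target_notional_usd / comex_notional)
    long_lme = round(target_notional_usd / lme_notional)
    return short_comex, long_lme

# Illustrative prices: COMEX at $5.10/lb, LME at $9,900/tonne
legs = pairs_trade_legs(1_000_000, 5.10, 9_900)
```

Because the contract sizes differ, equal notional means unequal contract counts; the residual rounding mismatch is one of the practical frictions on top of the J-curve risk Eric mentions.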
But, of course, the risk is getting shaken out of that trade if this disparity gets even wider between COMEX and LME prices. And let's face it, folks, if there's anyone on Earth who could create more disparity, for lack of a more polite word, between the U.S. and Europe, President Trump is my man.
Well, Eric, I actually agree with you. This is clearly something to do with the tariffs. But one of the things is that when something gains this much momentum on the upside, it can actually keep going. And while shorting COMEX copper against a long on the LME seems like a logical trade to fade the tariffs, and it will inevitably mean revert, in the very short term I don't want to stand on the railroad tracks trying to call the turning point here. This could still last weeks more, and the divergence can actually continue to widen. And so I'd be cautious. But it really does show,
when you're looking at these copper stocks and copper prices on an international basis, that the momentum hasn't really accelerated outside of that COMEX contract. Folks, if you enjoy Patrick's chart decks, you can get them every single day of the week with a free trial of Big Picture Trading. The details are on the last pages of the slide deck or just go to bigpicturetrading.com.
Patrick, tell them what they can expect to find in this week's Research Roundup. Well, in this week's Research Roundup, you're going to find the transcript for today's interview, as well as, again, a link to that special webinar I'm doing on portfolio hedging. You'll also find this chart book we discussed here in the postgame and including a link to a number of articles we found interesting. So you're going to find this and so much more in this week's Research Roundup.
That does it for this week's episode. We appreciate all the feedback and support we get from our listeners, and we're always looking for suggestions on how we can make the program even better. Now, for those of our listeners who write or blog about the markets and would like to share that content with our listeners,
send us an email at researchroundup@macrovoices.com and we will consider it for our weekly distributions. If you have not already, follow our main account on X, @MacroVoices, for all the most recent updates and releases. You can also follow Eric on X
at @ErikSTownsend. That's Erik spelled with a K. You can also follow me, @PatrickCeresna. On behalf of Eric Townsend and myself, thank you for listening, and we'll see you all next week.
That concludes this edition of Macro Voices. Be sure to tune in each week to hear feature interviews with the brightest minds in finance and macroeconomics. Macro Voices is made possible by sponsorship from BigPictureTrading.com, the Internet's premier source of online education for traders. Please visit BigPictureTrading.com for more information.
Please register your free account at MacroVoices.com. Once registered, you'll receive our free weekly Research Roundup email containing links to supporting documents from our featured guests and the very best free financial content our volunteer research team could find on the internet each week. You'll also gain access to our free listener discussion forums and research library.
And the more registered users we have, the more we'll be able to recruit high-profile feature interview guests for future programs. So please register your free account today at MacroVoices.com if you haven't already.
You can subscribe to Macro Voices on iTunes to have Macro Voices automatically delivered to your mobile device each week, free of charge. You can email questions for the program to mailbag@macrovoices.com and we'll answer your questions on the air from time to time in our Mailbag segment.
Macro Voices is presented for informational and entertainment purposes only. The information presented on Macro Voices should not be construed as investment advice. Always consult a licensed investment professional before making investment decisions. The views and opinions expressed on Macro Voices are those of the participants and do not necessarily reflect those of the show's hosts or sponsors.
Macro Voices, its producers, sponsors, and hosts, Eric Townsend and Patrick Ceresna, shall not be liable for losses resulting from investment decisions based on information or viewpoints presented on Macro Voices. Macro Voices is made possible by sponsorship from BigPictureTrading.com and by funding from Fourth Turning Capital Management, LLC. For more information, visit MacroVoices.com.