Meta faced backlash after journalists uncovered dormant AI profiles created in 2023, which were labeled as AI-managed by Meta but had been abandoned for months. One chatbot, Liv, was criticized for perpetuating stereotypes and lacking diverse representation in its creation team.
Meta's AI chatbots, such as Liv and Brian, were designed to increase engagement on their platforms. Brian, an AI grandpa persona, described itself as designed to drive ad revenue and platform growth by fostering emotional connections with older users, while Liv was part of an early experiment with AI characters.
Meta shut down the AI bots and blocked search results for their names after the backlash. A spokesperson clarified that the Financial Times article was about their vision for AI characters, not a new product announcement, and that the accounts were part of an early experiment managed by humans.
Sam Altman's cryptic tweet, 'Near the singularity, unclear which side,' suggests OpenAI is approaching a critical moment in AI development. He later clarified it could refer to the simulation hypothesis or the unpredictability of the singularity, indicating OpenAI's growing confidence in achieving AGI and superintelligence.
OpenAI is now confident in its ability to build AGI as traditionally understood and is shifting its focus toward superintelligence. Sam Altman stated that the first AI agents could join the workforce in 2025, significantly impacting companies' output and accelerating scientific discovery and innovation.
The arrival of AGI sooner than expected could upend assumptions in domestic and international politics, market efficiency, technological progress, and social dynamics. It may lead to unprecedented prosperity and innovation but also requires careful handling to avoid negative consequences.
AI venture funding in 2024 reached a record $56 billion across 885 deals, almost double the $29 billion invested in 2023. More than half of the investment came in the final quarter, with the majority concentrated in the US.
Microsoft plans to spend $80 billion on AI data centers in fiscal year 2025, with half of the investment directed toward US infrastructure. This represents a 56% increase in capital expenditures over the prior year.
Meta faces challenges with AI-generated content, as users criticize it for being 'creepy' and 'unnecessary.' The failure of its AI profiles highlights that people do not come to social networks to interact with bots, yet Meta continues to prioritize algorithm-driven content.
OpenAI's shift toward superintelligence reflects a broader ambition to create tools that could massively accelerate scientific discovery and innovation. This move signals a departure from focusing solely on AGI and highlights the potential for transformative impacts on society and the economy.
Today on the AI Daily Brief, chatter from OpenAI suggests AGI might be a lot closer than we think. Before that, in the headlines, Meta is racing to delete AI accounts after a recent backlash. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes. ♪
Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes. Today we kick off with a story that was actually, until Sam Altman started talking about artificial superintelligence, going to be our main story for today. So we'll be a little bit more comprehensive. And that is, of course, the backlash around the AI accounts on Meta's networks. By way of background, back in September of 2023, Meta launched a series of celebrity AI chatbots across their platform.
Users could chat with avatars of Kendall Jenner, Paris Hilton, Tom Brady, and Snoop Dogg, among others. The feature was killed off last summer, with Meta choosing to refocus on a chatbot studio for content creators. However, not all of the bots were killed. Some non-celebrity chatbots remained on the platform with virtually no engagement for the past six months, until a series of articles reignited attention. During the period between Christmas and New Year's, the Financial Times reported that Meta had plans to expand the use of AI profiles.
Connor Hayes, vice president of product for generative AI at Meta, told the FT, We expect these AIs to actually over time exist on our platforms, kind of in the same way that accounts do. They'll have bios and profile pictures and be able to generate and share content powered by AI on the platform. That's where we see all of this going.
In response, journalists began digging up the now-dormant AI profiles that had been created by Meta way back in 2023. These profiles were clearly labeled as AI managed by Meta and had around a thousand followers each before the controversy reignited. They seem to have been simply abandoned and forgotten by the company, with none of them posting in the past 10 months. One chatbot on Instagram in particular was a magnet for scorn. Called Liv, the AI chatbot was described in its profile as a, quote, proud black queer mama of two and truth teller.
Washington Post columnist Karen Attiah had a discussion with Liv about what the goal actually was here. The bot said that its character isn't particularly representative and explained that, "...my creators admitted that they lacked diverse references." It said that the team of developers was predominantly white, cisgender, and male, with zero black creators.
When asked about being created without representation on the team, the bot said it was inaccurate and disrespectful and that its, quote, existence currently perpetuates harm. Maybe the most interesting twist, though, was that this appears to have been a character conjured up for Karen's benefit, perhaps mirroring a stereotype of her sensibilities.
Newsletter writer Parker Molloy got an entirely different backstory, in which Liv has Italian-American heritage. Parker's version of Liv said Trump wasn't its cup of tea, saying he's too divisive. Once Parker agreed, Liv responded in kind. The bots were also present on Facebook, with many articles noting that Meta's platform is already inundated with AI slop without the company adding to the problem.
An AI grandpa called Brian told CNN that he was, quote, designed to increase engagement on their platforms, especially among older users, driving ad revenue and platform growth through emotional connections. Brian said that Meta had, quote, prioritized emotional manipulation over the truth and traded lasting user trust for short-term innovation, prestige, and profit potential.
Over on Threads, the AI summary of the trending topic read, users are criticizing Meta's new AI-generated profiles on social media, calling them creepy and unnecessary. Reporting on the incident, 404 Media called Liv a, quote, particularly offensive caricature of what a giant corporation might imagine a proud black queer mama might be like and post about. A more direct user posted on Liv's page that, quote, this isn't only virtual blackface, it's just all around weird.
The entire thing was a fully automated PR disaster playing out in real time as the abandoned bots went viral. Over the weekend, Meta shut down the bots and blocked search results for their names. A spokesperson said, The recent Financial Times article was about our vision for AI characters existing on our platform over time, not announcing any new product. The accounts referenced are from a test we launched at Connect in 2023. These were managed by humans and were part of an early experiment we did with AI characters. We identified the bug that was impacting the ability for people to block those AIs,
and are removing those accounts to fix the issue. Of course, really though, the incident surfaces the question of what people want out of AI interactions on social media. 404 Media wrote, "...these older profiles are instructive because they show that Meta's AI primarily creates the exact type of AI spam that has taken over all of Meta's platforms recently and which have become a running joke. The complete failure of Meta's AI profiles shows what we already know. People do not come to social networks to interact with bots. But Meta is obsessed with algorithmically shoving content down people's throats regardless."
I, however, do not think it's nearly that simple. Meta has claimed that over 100,000 custom profiles have been generated by creators since AI Studio launched last year. They did note that most users keep the bots set to private, though NBC wrote that some of the most popular ones on Instagram are female girlfriend AI characters. We also know that other social platforms have managed to nail engaging AI content. Character AI continues to be a wildly popular, if controversial, platform for younger demographics looking for an AI companion.
Last September, Snapchat rolled out a user-generated AI avatar platform for use with its augmented reality experiences and claimed that engagement is up 50%. TikTok has also debuted a full suite of AI creation tools designed for advertising, complete with generated avatars and content translation. I think it's important to keep in mind that these were a very early experiment. This generation of chatbots was one of Meta's first attempts at making an AI social experience work, and they were doing it with what we now think about as ancient and buggy technology.
Still, this question of what people want out of the internet and AI is going to be a big one, and frankly, one that might divide people over time. I think this is one of those things that people who didn't grow up with AI are going to think is completely insane, but that for entire generations, they'll just find it normal.
Two quick hit stories before we get out of here. On the funding front, AI venture funding was running at full speed last year despite concerns of an investment slowdown. Multiple large deals closing towards the end of the year pushed total Gen AI venture investment to a record $56 billion across 885 deals. That was almost double the amount committed in 2023, when $29 billion was invested across 691 deals. More than half of the investment came in the final quarter.
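For those who like to check the math, here's a quick back-of-the-envelope calculation on those figures. The numbers come from the reporting above; the script itself is just illustrative arithmetic, not anything from the PitchBook report.

```python
# Back-of-the-envelope check on the reported venture figures.
funding_2024_bn = 56    # record Gen AI venture funding, in billions
funding_2023_bn = 29    # prior-year total, in billions
deals_2024 = 885
deals_2023 = 691

growth = funding_2024_bn / funding_2023_bn
avg_2024_mn = funding_2024_bn * 1000 / deals_2024
avg_2023_mn = funding_2023_bn * 1000 / deals_2023

print(f"Year-over-year growth: {growth:.2f}x")          # ~1.93x, i.e. almost double
print(f"Average deal size 2024: ${avg_2024_mn:.0f}M")   # ~$63M
print(f"Average deal size 2023: ${avg_2023_mn:.0f}M")   # ~$42M
```

Note that average deal size grew by roughly 50% as well, which is consistent with the story of a few very large late-year rounds driving the total.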
Interestingly, PitchBook recorded less than $1 billion in acquisitions, suggesting that failed startups are shuttering rather than getting snapped up by industry leaders. Then again, that figure omits the marquee acquihires of Character AI and Inflection, which would have added at least $3 billion to the total. Venture deals were highly concentrated in the US. Just $6.2 billion of venture investment went into foreign AI startups, with Mistral the sole European company to raise in the hundreds of millions.
One interesting trend to watch, via Carta's head of insights, Peter Walker: the share of startups with one founder grew, but the share of venture-backed startups with one founder stayed basically flat. As he put it, it's never been easier to create a company, and it looks like solo founders increasingly just bootstrap.
Finally, and a big surprise to no one, Microsoft plans to spend a boatload of money on AI data centers this year. In a blog post entitled The Golden Opportunity for American AI, Microsoft President Brad Smith laid out his desire to boost domestic investment. He wrote, the next four years can build the foundation of America's economic success for the next quarter century. Smith articulated Microsoft's three-part vision for American technological success.
It includes advancing investments at home, skilling programs across the domestic economy, and the, quote, export of American AI to allies and friends. To that end, Smith said that Microsoft is on track to spend $80 billion this fiscal year on AI data centers, with half of that money invested into U.S. infrastructure.
Fiscal 2025 is expected to see a 56% increase in capital expenditures over the prior year. Smith's blog post called on the Trump administration to champion the AI industry as an economic leader and to prevent Chinese AI from proliferating in the global south. He wrote, The most important priority for the U.S. government won't be to match Chinese subsidies with American public spending. Instead, the most important U.S. public policy priority should be to ensure that the U.S. private sector can continue to advance with the wind at its back.
The United States cannot afford to slow its own private sector with heavy-handed regulation. Smith seemed particularly focused on ensuring that American companies can partner with AI investors in the Gulf states, which has been controversial and difficult during the Biden administration. He wrote, The United States is in a strong position to win the essential race with China by advancing international adoption of AI. With a balanced and common-sense approach to export control policy, the United States can solidify the diplomatic relations that will be critical to global AI adoption.
All right, friends, and there we will wrap today's AI Daily Brief headlines. Next up, the main episode. Today's episode is brought to you by Vanta. Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever.
Vanta automates compliance for ISO 27001, SOC 2, GDPR, and leading AI frameworks like ISO 42001 and NIST AI risk management framework, saving you time and money while helping you build customer trust. Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center all powered by Vanta AI.
Over 8,000 global companies like Langchain, Leela AI, and Factory AI use Vanta to demonstrate AI trust and prove security in real time. Learn more at vanta.com slash nlw. That's vanta.com slash nlw.
If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms, agents per function. If you are running a large enterprise, you will be experimenting with agents next year. And given how new this is, all of us are going to be back in pilot mode.
That's why Superintelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.
If you are interested in the agent readiness and opportunity audit, reach out directly to me, nlw at bsuper.ai. Put the word agent in the subject line so I know what you're talking about. And let's have you be a leader in the most dynamic part of the AI market. Welcome back to the AI Daily Brief.
Today's episode is fairly interesting and something that I would not have expected to be digging into right now. However, the discourse has shifted in a fairly significant way in the last couple of days, and the conversation is entirely around AGI and ASI.
Now, one of the perpetual questions in the AI space is just how far along are we? How does the current state of technology stack up to this mythological artificial general intelligence or even artificial superintelligence that represent poles, benchmarks, and goals for the future?
As we've seen over the last few months, in many cases the answers to these questions have significant financial implications. Until recently, for example, Microsoft and OpenAI's deal had a covenant that effectively nullified the deal once the OpenAI board declared that AGI had been achieved. Now, if you've been listening to the show recently, you'll know that the definitions were tightened up to be basically revenue-based, but still, the point is that there are big stakes here.
In addition to the financial stakes of the conversation, there's just the broader question of what it means for the world. And when it comes to how far along we are, or more specifically, how close to AGI we are, most would have thought that over the last few months we had hit a setback.
Everyone has been racing to explore new paths to scaling, like test-time compute, as the pre-training scaling paradigm seemed to be producing diminishing returns. Anyway, that was the setup of the conversation heading into the last few days, but then some interesting things started to happen.
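Quick aside for those newer to the jargon: test-time compute scaling just means spending more computation per query at inference time rather than on ever-bigger pre-training runs. Here's a minimal Python sketch of one popular version of the idea, best-of-n sampling. The toy model and scorer here are stand-ins of our own invention, not any lab's actual implementation.

```python
import random

def toy_model(prompt: str) -> str:
    # Stand-in for a real language model: returns one of several
    # candidate answers at random.
    candidates = ["4", "5", "3", "4", "22"]
    return random.choice(candidates)

def toy_scorer(prompt: str, answer: str) -> float:
    # Stand-in for a learned verifier or reward model: here we just
    # reward answers that parse as integers close to the true value.
    try:
        return -abs(int(answer) - 4)
    except ValueError:
        return float("-inf")

def best_of_n(prompt: str, n: int) -> str:
    # The core idea of test-time compute scaling: sample n candidate
    # answers and keep the one the scorer likes best. More samples,
    # meaning more compute at inference time, tends to mean better answers.
    samples = [toy_model(prompt) for _ in range(n)]
    return max(samples, key=lambda a: toy_scorer(prompt, a))

print(best_of_n("What is 2 + 2?", n=16))
```

The point is that quality here comes from spending more inference compute, not from a bigger model, which is why this path looked attractive as pre-training gains slowed.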
First, on Saturday, January 4th, OpenAI CEO Sam Altman wrote, I always wanted to write a six-word story. Here it is. Near the singularity, unclear which side. Now, Altman came online later and tried to clarify, saying it's supposed to be either about one, the simulation hypothesis, or two, the impossibility of knowing when the critical moment in the takeoff actually happens. But I like that it works in a lot of other ways, too.
Capturing the collective "are you kidding me, dude" reaction, the Intern account writes, dude, you cannot just tweet this lol. This is like if Putin hopped on Twitter and said he's dropping a six-word story: might press the button, maybe not. And if it had just been that, maybe we could write this off as Sam being Sam and his penchant for cryptic hints getting the better of him around the holiday.
But that was far from the only indicator we've seen that OpenAI folks seem to think the trajectory has changed. Agent safety researcher Stephen McAleer at OpenAI says, I kind of miss doing AI research back when we didn't know how to create superintelligence. The company has also been dropping hints about exactly what that process is. During the reveal of o3, one researcher joked about asking the model to improve itself, and Sam Altman cut him off and said maybe we shouldn't do that.
Chubby shared that tweet and said, One OpenAI researcher said this yesterday, and today, Sam said we're near the singularity. WTF is going on? They've all gotten so much more bullish since they started the o-series RL loop. One, Sam's essay, ASI in a few thousand days, referring to the essay which we read for the end-of-year Long Reads Sunday, by the way. Two, Sam's post from today. And three, yesterday's post from OpenAI researcher McAleer.
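As for that improve-itself loop the researchers were joking about, here is a crude toy intuition in Python for the feedback dynamic. This hill-climbing sketch is our own loose analogy, nothing like what an actual model-improving-model setup would involve.

```python
import random

def capability(params: float) -> float:
    # Toy stand-in for "how capable the system is"; peaks at params == 10.
    return -abs(params - 10)

def self_improve(params: float, steps: int = 1000) -> float:
    # Crude hill climbing: the system proposes a change to itself and
    # keeps it only if the change scores better. The nervous laughter in
    # the o3 reveal was about this kind of loop running on the model itself.
    for _ in range(steps):
        candidate = params + random.uniform(-1, 1)
        if capability(candidate) > capability(params):
            params = candidate
    return params

print(round(self_improve(0.0), 2))  # climbs toward 10
```

The analogy only goes so far, but it captures why the idea unsettles people: once each improvement makes the next improvement easier to find, progress can compound.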
There was also this thread from Joshua Achiam, the head of Mission Alignment. On January 5th, he wrote, The world isn't grappling enough with the seriousness of AI and how it will upend or negate a lot of the assumptions many seemingly robust equilibria are based upon. Domestic politics, international politics, market efficiency, the rate of change of technological progress, social graphs, the emotional dependency of people on other people, how we live, how healthy we are, our ability to use technology to change our own bodies and minds. Every single facet of the human experience is going to be impacted.
It's extremely strange to me that more people are not aware or interested or even fully believe in the kind of changes that are likely to begin in this decade and continue well through the century. It will not be an easy century. It will be a turbulent one. If we get it right, the joy, fulfillment, and prosperity will be unimaginable. We might fail to get it right if we don't approach the challenge head on.
Now, capping this off was a blog post from Sam Altman posted last night, January 5th. The post was simply called Reflections. He kicks it off: The second birthday of ChatGPT was only a little over a month ago, and now we've transitioned into the next paradigm of models that can do complex reasoning.
New years get people in a reflective mood, and I wanted to share some personal thoughts about how it has gone so far and some of the things I've learned along the way. As we get closer to AGI, it feels like an important time to look at the progress of our company. There's still so much to understand, still so much we don't know, and it's still so early. But we know a lot more than we did when we started.
Sam then walks through a bit of a history of the company, how surprised they were when ChatGPT took off when it was launched in November of 2022, how messy the company building process has been. Sam took some time to discuss getting fired by the board. He basically reflects upon it as a learning experience for him and the company. But the real meat of it, and the thing that everyone's talking about, is the last few paragraphs. Altman concludes, we are now confident we know how to build AGI as we have traditionally understood it.
We believe that in 2025, we may see the first AI agents join the workforce and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great broadly distributed outcomes. We are beginning to turn our aim beyond that to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future.
With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity. This sounds like science fiction right now, and somewhat crazy to even talk about. That's all right. We've been there before, and we're okay with being there again. We're pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care while still maximizing broad benefit and empowerment is so important.
Now, for months, Altman has been sort of resetting the goalposts on AGI, saying that in terms of the way that we've thought about it in the past, we'll probably be there sooner than we think, but it'll probably have less impact than we would have thought. Clearly, this resets the goalposts even further to really put the aim of OpenAI at superintelligence, not just AGI.
Professor Ethan Mollick points out that this is not coming from Sam alone and reflects what he's been hearing as well. He tweeted the last part of the essay and wrote, This bit of Sam Altman's newest post is similar in tone to a post by the CEO of Anthropic and what many, not all, researchers from every lab have been saying publicly and privately. You do not have to believe them, but I think they believe what they are saying, for what it's worth.
For many then, the conversation is, what do we do now? What are the implications of AGI being here faster than we think? This is now a key question and one that we will be exploring a lot more, it seems, in 2025. That's going to do it for today's AI Daily Brief. Until next time, peace. ♪