
AI Superpowers for Frontend Developers, with Vercel Founder/CEO Guillermo Rauch

2023/8/31

No Priors: Artificial Intelligence | Technology | Startups

People
Guillermo Rauch
Topics
Guillermo Rauch: Vercel is a web infrastructure company focused on making it simple for developers to deploy dynamic websites and applications, with an emphasis on the frontend. The company was founded to create a developer platform that does for the cloud what the iPhone or the MacBook did for personal computing: making it easy for developers to deploy applications to the global web, and focusing on the frontend because that is where a company meets its customers and where it can accelerate conversion, signups, and sales. Vercel's AI strategy aims to simplify integrating AI into dynamic web applications, treating AI as another step in automating the parts of development that builders don't want to deal with. Large language models are the new backends that will power the most exciting frontend engineering applications, and Vercel's AI SDK is designed to simplify building AI apps. Vercel's Edge Functions product runs compute as close to the user as possible, which is critical for AI applications that need to stream content. Looking ahead, creating valuable AI-native products does not necessarily require training or fine-tuning new models; the key is solving existing problems and designing products with AI as an input. Vercel builds Vercel.com on its own platform to drive product improvements and optimize its engineers' productivity, and it is using the AI SDK to explore automating the day-to-day work of frontend engineers, such as creating forms, UIs, and layouts.
Sarah Guo & Elad Gil: Discussed with Guillermo Rauch the lack of adequate monitoring and observability tooling for AI development, particularly around the new AI frameworks and the new things being monitored, and how critical the monitoring, testing, and observability feedback loop is in AI, even at the earliest stages.

Deep Dive

Key Insights

What is Vercel's core mission?

Vercel aims to provide frameworks, tools, and infrastructure for companies to deploy dynamic and ambitious websites, focusing on the frontend as the most critical aspect of customer interaction.

Why did Guillermo Rauch start Vercel?

Rauch wanted to create a developer platform that made deploying ideas to the global web easy, focusing on frontend engineering as the key to customer engagement and business success.

How does Vercel integrate AI into its offerings?

Vercel has developed an AI SDK that allows developers to easily create AI apps by connecting to various AI backends like OpenAI, Hugging Face, and Replicate, focusing on ease of integration and performance.

What are some of the AI-powered products Vercel offers?

Vercel offers the AI SDK for AI app development, Chat and Prompt Playground for LLM performance testing, and Edge Functions for running compute close to the user, enabling streaming content for AI applications.

What challenges does Vercel address with its AI tools?

Vercel addresses challenges like bot mitigation, abuse prevention, and security for AI integrations, providing tools for rate limiting, bot detection, and caching to protect AI-powered applications.

How does Vercel see the future of AI integration in web development?

Vercel believes AI will automate many frontend engineering tasks, such as creating forms, UIs, and layouts, making the process more efficient and personalized while maintaining familiarity in user interfaces.

What role does observability play in Vercel's AI strategy?

Vercel emphasizes the importance of observability in AI development, noting that monitoring and feedback loops are critical for maintaining quality and security in AI-native applications.

How does Vercel handle the issue of bot detection and mitigation?

Vercel provides tools for developers to detect and mitigate bad bots, including rate limiting, bot detection technologies, and caching to reduce the cost and security risks associated with AI integrations.

What changes does Vercel foresee in web architecture due to AI?

Vercel predicts a shift from static to dynamic architectures, increased personalization, and more reliance on AI for content generation and user interface design, driven by the need for faster and more personalized web experiences.

How does Vercel see the role of frameworks in the future of AI development?

Vercel believes that while current frameworks like React, Svelte, and Vue will remain dominant, AI tools will evolve to generate code more efficiently, potentially reducing the need for extensive dependency management.

Chapters
Guillermo Rauch, founder and CEO of Vercel, discusses Vercel's AI strategy, focusing on making AI development easy for frontend engineers. They created the Vercel AI SDK for simple AI app creation and offer AI templates.
  • Vercel's AI strategy is frontend-first.
  • Vercel AI SDK simplifies AI app creation.
  • Focus on ease of integration and creating valuable AI products.

Transcript


So much of the web runs on Vercel, and now it'll run on Vercel and AI. Elad and I are super excited to welcome Guillermo Rauch, the founder and CEO of Vercel. It's one of the most popular developer front-end framework companies and is widely used by Adobe, Okta, eBay, and others. We unpack the company's AI strategy, what's next for the web, and more. Guillermo, welcome to No Priors.

Thank you. I'm excited to be here. Just for people who are not super familiar with Vercel, can you give us a quick explanation of the company? Yeah, you described it well. We're basically a web infrastructure company. We provide the frameworks, tools, infrastructure, and workflows for companies to deploy the most dynamic and ambitious websites on the internet. So we power anything from the technology behind ChatGPT, in fact, it's powered by Next.js, our open source framework,

to websites like UnderArmour.com or Nintendo, where we provide the infrastructure to serve all their traffic and help them iterate on their web presence. And what's the sort of founding story of this? Technically, I started it at the very, very end of 2015. But I kind of settled on the idea and launched some of my first prototypes at the beginning of 2016.

Yeah, it's hard to imagine even not existing now. At the time, what drove your belief that this was a different defensible product than the incumbent clouds? So there's an interesting duality in me. On one hand, I'm basically a missionary of the web. I want the web to win. I want open platforms to win. I want developers to win.

On the other hand, I really love Apple and companies that invest a lot in design and integration and making things really easy. So in many ways, the inspiration was, can we create a developer platform that does for the cloud what maybe like the iPhone did or the MacBook did for personal computing? And at the time, I had just sold my company to Automattic, the company behind WordPress.

So I had this idea in my mind of just making it really, really easy for developers to deploy an idea to the global web and to start focusing on the front end, which is sort of my strength. I've been a front end engineer for like the vast majority of my life. There's always been sort of this, you know, almost like disdain in engineering for front end: it's like the last thing you worry about.

But we've kind of turned that upside down and we've made the case that frontend is the most important thing that your company has because that's where you meet your customer. That's where you can accelerate your website to drive more conversion, more signups, more sales. So I wanted to also create a company that focused on this last mile of end user experience.

And kind of work backwards into, you know, all the integrations and backends that you need to bring in to create a full stack application. And that's what Vercel has become, basically. To me, it's a portal into the web and into a new way of building software. Speaking of working backwards then, like, when did you begin to think about AI and get Vercel into it?

So I've actually been a fan of AI for many, many years. As an angel investor, I was one of the early investors in companies like Scale.ai.

To me, AI is just another important step, huge step, of course, but another important step in this idea of like automating all the parts that we don't want to deal with when we're in the pursuit of a creative endeavor. And, you know, it's very true to the spirit of Vercel that incorporating any backend, any new technology into your site

especially into dynamic web applications like the ones we power, should be really, really easy. And the other insight for me was, it's clear that a lot of these AI foundation models almost feel like cloud 2.0, where tremendous SaaS businesses have been built on top of companies like AWS. Snowflake, I think you all had their CEO as a guest recently. Snowflake is a good example of

Maybe you don't need to reinvent all of the infrastructure. You can create a great cloud-native company. My new insight is there's going to be a lot of great AI-native companies that are built on top of this new infrastructure. Let's call it like Cloud 2.0, which is these foundation models. These are the new backends that are going to power the most exciting front-end engineering applications.

And to that end, we created this Vercel AI SDK that is now powering a ton of different startups. We just heard about a bunch of awesome companies that joined their AI accelerator that are, a lot of them are being powered by this SDK. It's basically the easiest way to create an AI app without having to reinvent the backend wheel, right? Like you can connect to OpenAI, you can connect to Hugging Face, Replicate. So we're really focused on that idea of

ease of integration and really the easiest way to put AI into the hands of users and creating actually valuable products. I always advise the team and folks that I work with, I'm not for random acts of AI, like just like, you know, checking a box, but creating really useful products. And I'm a big believer that that last mile of integration can be where a lot of the value accrues. Can you talk a bit more about what sort of products Vercel offers on the AI side? I know you have the AI SDK.

which is a development kit for AI apps. You have Chat and Prompt Playground, which shows performance of various LLMs. It'd be great to just hear more about the different tools you have and what companies have started using them and how. So I'll also go into some crucial infrastructure advantages that Vercel brings to the world. One is

We have this product called Edge Functions. We allow you to run compute as close as possible to the user. A lot of these AI applications require this idea of streaming content to the end user. So when you type in something, if you just sit there waiting for the server to respond, this is quite a new thing on the web, right? Like most e-commerce websites, most backends are sort of optimized for responding within 100 milliseconds.

AIs can sometimes take like 15, 20 seconds to actually, you know, fully bake a thought. So a lot of our infrastructure in this Edge Functions product is sort of empowering these like long sessions of dynamic streaming of responses from as close as possible to the visitor. So the edge of the network. And this has actually played a crucial role in sort of

making apps that integrate with AI not just really easy to build, but also really performant, so the user experience feels really good. So if you go to the AI SDK, we actually show you this is what the application would feel like if you just use like a traditional backend and it's blocking. This is what it feels like when you're leveraging these streaming technologies.
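(To make that concrete: a minimal, hedged sketch of a streaming edge route in the style of the Vercel AI SDK as documented around the time of this episode. Helper names like OpenAIStream and StreamingTextResponse come from the 2023-era `ai` package and may differ in later versions; the route path is just an example.)

```ts
// app/api/chat/route.ts — hedged sketch: stream tokens from the edge instead of
// blocking until the model has "fully baked a thought".
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

export const runtime = 'edge'; // run as close to the visitor as possible

// Reads OPENAI_API_KEY from the environment by default.
const openai = new OpenAI();

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Ask for a streamed completion rather than waiting 15-20 seconds for the full answer.
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });

  // Pipe the token stream straight through to the browser.
  return new StreamingTextResponse(OpenAIStream(response));
}
```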

So the SDK currently also plugs into all these sort of text-oriented LLMs, but we're planning to add voice, audio, image generation, sort of to bring more tools into the toolkit of front-end engineers.
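(And a hedged sketch of the client half, assuming the 2023-era `ai/react` hook, which by default posts to an /api/chat route like the one above; later SDK versions expose different entry points.)

```tsx
'use client';
// Hedged sketch of the browser side: useChat keeps the message state and
// renders tokens as they stream in from the edge function.
import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((m) => (
        <p key={m.id}>
          {m.role === 'user' ? 'You: ' : 'AI: '}
          {m.content}
        </p>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} placeholder="Say something..." />
      </form>
    </div>
  );
}
```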

In the Vercel template marketplace, we actually have a lot of different apps. Some of them that have already even gone viral, like Room GPT, where you can sort of redesign your bedroom or your living room by using an image generation model. So that shows you how you can take an open source model

I believe in that case it's hosted by replicate.com. And you can sort of create an application with a turnkey subscription model, sort of log in and sign up and deploy it in basically seconds. So a lot of what we're doing is just putting AI into the hands of as many developers as we can.

Where do you think this all heads? So if you think ahead on the roadmap or strategies or anything else you can share in terms of future products or future things that you're going to be releasing? I'm a big believer that folks have still underexplored the integration side and just creating new AI native products.

you know, for entrepreneurs or startups that are listening in. I do believe that you don't have to train or fine tune a new model in order to create a legitimately useful product. I've been just looking at startups like Jenny.ai where they went from a million in ARR to 1.5 million ARR over the past two months on creating a very specialized product to assist researchers in writing research papers.

And so I think a lot of what you're doing there is creating the right product from the point of view of what is this problem that's already existed? What would it look like to solve it? If now I have AI as sort of my input

in the design space. And I think that's a radically different way of thinking compared to, I'm going to add AI to an existing product. I'm going to add AI to a word processor. So I think there are a lot of exciting avenues to explore in that direction. A lot of the productivity tools that I use on a day-to-day basis could certainly benefit from being rethought from the ground up. And our perspective at Vercel is, start with the front end, start with the AI SDK, which saves you a ton of time in the AI integration side.

One of the big things that we believe at Vercel is that we're going to build the best possible products if we're customer zero of our own products, right? So we build the entirety of Vercel.com using the Vercel platform itself. It pushes us to make a better Next.js. It pushes us to make better infrastructure. It pushes us to make the builds of our websites faster because we've dramatically increased our engineering headcount and we want to optimize for their productivity and so on.

So on the AI side, we want to do the same thing. We're starting to think about if you had the ability to automate a lot of the work that front-end engineers in particular do on a day-to-day basis. Creating forms, creating UIs, creating layouts, a lot of this is almost like it's statistical in nature. The expectation of a good user interface is that it has to be familiar, right?

It can't be a completely, you know, pursue-your-own-journey experience every time you sit down to create a new front end. So we're basically dogfooding our own AI SDK to think about the next frontier of automation and generative AI, but apply to the domain that we know really well, which is UI and front-end engineering.

I know that you're a very beloved company in terms of developer adoption. And, you know, I think it's one of the most popular developer-centric companies in the world right now. What do you think is lacking from an AI developer tooling perspective more generally? There is a layer of instrumentation that I think is really critical.

Typically, when you look at the successful monitoring and observability companies of the Cloud 1.0, I'm going to use Cloud 1.0 and Cloud 2.0 to denote the new AI-native wave that we're seeing. If you look at Cloud right now, a lot of the best products in the observability space were born out of

we understand what frameworks and primitives you're using. We're going to integrate extremely well with them. I remember the first time I used Datadog, I was blown away because the onboarding process was so well-tuned to, hey, let's not let you move on from the onboarding page until you've sent us a data point. And instead of giving me like a not-so-familiar way of sending them data, they sort of enumerated all their integrations.

I actually just checked out Zapier onboarding from Skype. I'm just an onboarding diehard. Zapier is one of our customers. They run all of Zapier.com on Vercel. And they have the same thing. It's just so awesome. Like you sign up and then they take you to like, let's tell us what software you work with. Tell us what integrations you work with.

And it wouldn't even let me click on the Zapier logo. I was so deep in the funnel. It was beautiful. Datadog does the same thing for sort of, oh, here's Kubernetes, here's Next.js, here's all the things that you already know. I don't think that that's fully landed for AI and the new sort of topology is different. The frameworks that you use are different. Of course, there's the AI SDK, there's LangChain.js, there's a ton of new frameworks, right?

And the things that you're monitoring are different as well. Yeah, there are a few different companies, I feel, that are starting to work in this area, tackling different pieces of what you're saying. And to your point, it really feels like a very...

active area of developer tooling that's being developed right now. So yeah, it's really cool. I definitely think the overall, let's say, monitoring, testing, observability, feedback collection space is really nascent but important. Really exciting. I think in Cloud 1.0, it's almost like a nice-to-have. Of course, you need observability to ship and maintain and evolve a production-grade application. It's like letting you provide a great quality of service,

But in the AI realm, it's just so mandatory. Like your v0.1 already needs that critical feedback loop. Whereas I think maybe some engineers that are moving fast in the early days of a startup may be more lenient with how much they observe their endpoints and so on. So the other hot take that I have is I think a lot of the early

frameworks that we're seeing, the more opinionated frameworks that we're seeing, they're probably going to have to evolve a lot.

And I think that we're probably going to see a second generation of frameworks that come out of actually building and deploying AI at production scale. I think a lot of the DX tools for AI that have emerged so far are more rooted in, I have to get the job done. I don't know if it's necessarily the best way yet. We haven't really run the application in prod for that long yet.

My insight there is there's probably going to be significant evolution in the frameworks for AI space. And I'm not talking about sort of the training tools, the PyTorch. Obviously, those are very well baked tools.

I'm talking about sort of the last mile, everything that has to do with agents, everything that has to do with indexing and retrieval and more of the novel integrations of AI applications. If you think ahead in terms of where the web is heading,

at least a subset of the interactions on the web are probably going to become agent-based, right? So you'll have an agent that represents you, an agent that represents a company or a product, an agent that represents the government, and you'll basically have your agent go and act on your behalf and it'll just interact programmatically through APIs or other means.

What impact does that have for Vercel? And does that even matter? I think it matters already tremendously. So one of the key investments that we're making is in security products. So when GPT-3 came out and folks were sort of like dying to integrate it and launch it, OpenAI is by far the most popular backend. Like we have sort of aggregated anonymized telemetry and like, what are the backends that our server functions are talking to? And OpenAI is sort of

biggest. What happened was a bunch of folks published, you know, whether it's ChatGPT clones or demos or prototypes and whatever. And then sort of the abuse began of folks that wanted free tokens, so to speak, and started like running proxies at scale to basically just, it's almost like extracting intelligence. Like, I want free intelligence. So instead of writing a scraper, let's call it scraper 2.0,

I run a bot that tries to get free GPT-4, basically. So this is still a huge problem, by the way. A lot of products have integrated AI in such a generic way that they've opened up their token. Even if they have authentication in front, they've essentially opened up this token

source of intelligence to the entire internet, including countries in which this AI has already been banned or companies where the use of AI has been banned. So there's definitely a security challenge there that we're giving tools to developers to address, whether it's integrated tools to facilitate rate limiting or

bot detection, and all kinds of technologies also for reducing the cost of deploying this AI. It's like integrated caching of a lot of the OpenAI responses that are cacheable and so on.
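(One hedged illustration of the rate-limiting piece: roughly what that protection can look like in Next.js middleware, assuming the @upstash/ratelimit package backed by Vercel KV. The store and the /api/chat path are example choices, not Vercel's prescribed setup; any durable store and route would do.)

```ts
// middleware.ts — hedged sketch: rate limit an AI endpoint per IP so a proxy
// farm can't drain your OpenAI tokens ("free intelligence") through your app.
import { Ratelimit } from '@upstash/ratelimit';
import { kv } from '@vercel/kv';
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

const ratelimit = new Ratelimit({
  redis: kv,
  limiter: Ratelimit.slidingWindow(10, '60 s'), // 10 requests per minute per IP
});

export const config = { matcher: '/api/chat' }; // example route to protect

export async function middleware(req: NextRequest) {
  const ip = req.ip ?? '127.0.0.1';
  const { success } = await ratelimit.limit(ip);

  if (!success) {
    return new NextResponse('Too many requests', { status: 429 });
  }
  return NextResponse.next();
}
```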

So I think on one hand, we already have that issue at internet scale around how do you protect your own investment in AI? How do you also potentially protect your own unique IP from adversaries and so on?

The other one, I think, is the one you're calling out that is related to the bot detection mitigation problem, which is how do I actually have a good bot versus a bad bot? And how does a website owner at scale sort of have an understanding of what is the right ratio? Already, we're seeing that a lot of these AI companies are very strict in blocking any kind of bot activity because of the threat of abuse.

So I think we're going to have to continue to find more sophisticated approaches. It's almost like AI and counter-AI. We're going to need to deploy more and better AI to sort of detect the bad bots and keep them at bay,

while also allowing, to your point, the authenticated good ones that are going to become your agents that represent you and your ability to crawl the web. The other challenge that emerges as well is this idea of like, is my content AI generated content or not? And what does that mean for SEO in the future? The traditional conception of SEO is I'm going to optimize keywords for Google, right?

And I'm going to make my site really performant so that Google crawls it and then boosts my results based on the signals of performance that they've aggregated from visiting my website in the past. There's a world where there is an intermediary to your content that is no longer Google, right? And obviously this world already exists with GPT-4, but there's a cut-off date problem and so on. But now we have folks like Perplexity.

where they're basically real time. So the question that'll emerge is how do I get SEO right

for these retrieval engines. Do you feel like you have customers that are already working on this, or planning for it, or thinking about how to handle it, especially if they're more content-oriented companies? Yeah, so on the bot mitigation and abuse prevention thing, every single customer that's deployed AI at scale, at any scale, a product that actually works, has already faced this challenge.

And of course, we're continuing sort of, in some cases, you're playing cat and mouse. In some cases, you're just advising the customer on how to implement better protections and better tools and finding that balance of, you know, how do I actually deliver a good experience for everybody while also protecting my business?

On the SEO side, I think mostly I'm just hearing a lot of questions from people, right? Like, is Google still king? Are the rules of SEO still the ones that apply to me? So I think those are the main ones. But again, my perception is there's a lot more people entering the crawling game and in doing this retrieval process. Whereas before it felt like you had to delegate all of that to like Bing or the Google search API.

And I think creating protocols to negotiate content and to make it more accessible and more distributable, it really depends on your business model to a great extent, right? For us, I would love it if every single AI got the most recent Next.js APIs right, which is not the case right now. If you ask ChatGPT how to solve a problem with Next.js, it tells you the solution for 2020. And I would love for that to be the solution for 2023.

So please go and like help yourself to our docs. I can give you whatever format you want. But for other companies, it's going to be a challenge, right? Because they're expecting a different type of content negotiation.

It seems like that's another place where tooling can become really valuable in terms of the ability to understand whether content that's provided in a corpus falls under certain copyright laws or has other issues around it. Or there may be other sort of tools that we increasingly will see from content owners or for content owners in terms of how you actually deal with this on the web. And we've already seen some early days versions of that around ImageGen.

And some of the image generation models and people not wanting certain content included in that. Like Getty Images, I think, famously pulled a bunch of data specifically to avoid this sort of issue or ask people to pull that data. I wouldn't be surprised if the APIs that we're used to today, which are basically, here's your stream of words that answer the prompt, become a duplex stream of the content and the citations, right? Because...

A lot of products actually require it. I might be, in the future, legally required to log where the content that I gave to a certain user came from. So I might want to give you a little UI component to explore the citations. Maybe you want to hover a part of the text and understand where it came from. Or simply you just want to like

throw it into a log file for future reference? Like what are sources that your users keep coming back to and that are worth exploring more? Yeah, makes a lot of sense. I think this idea that you were talking about of more people getting into the crawling game is a really interesting one. I think we all have some exposure to like search tech and search companies, but it both seems to me like really challenging that

agents, we're going to have more agents, they're going to need access to the web, or many of them to be really useful, right? Google is not going to give you their index. Bing is going to be expensive and like not be up to par on some things, right? And you can also just like imagine technically an index that's just better for an agent to interface with, right? If I'm not trying to serve people, I'm trying to serve an agent. But I guess from recent experience, and you guys would also know this, like,

to have an index, you need ranking and coverage and the web is very, very big, right? So fresh coverage of a trillion URLs is a very expensive value prop. And I would love to see somebody with like smart ideas about if there is some way to go about this problem that doesn't require like full coverage, but maybe some team needs to figure out how to get there. One idea is that you delegate the full coverage to the initial sort of pre-training of the large models, right?

And then you complement it with your own up to date indexing of the sources that are relevant to your domain specific queries.
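(A hedged sketch of that split, not a prescription: the pre-trained model carries broad coverage, and a small index you keep fresh handles the domain-specific part. The in-memory docs array, the embedding model, and the chat model named here are stand-ins for whatever you crawl and whichever provider you use.)

```ts
// Hedged sketch: "base model + your own up-to-date index" rather than crawling the whole web.
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical fresh documents you crawl yourself and keep current for your domain.
const docs = [
  'Internal changelog: the v2 billing API replaced the old /v1/charges endpoint last week.',
  'Docs page updated yesterday: streaming responses now default to server-sent events.',
];

// Cosine similarity between two embedding vectors.
const cosine = (a: number[], b: number[]) => {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
};

async function embed(texts: string[]) {
  const res = await openai.embeddings.create({ model: 'text-embedding-ada-002', input: texts });
  return res.data.map((d) => d.embedding);
}

export async function answerWithFreshContext(question: string) {
  const [queryVec, ...docVecs] = await embed([question, ...docs]);

  // Pick the most relevant fresh document for this query.
  const bestIndex = docVecs
    .map((v, i) => [cosine(queryVec, v), i] as const)
    .sort((a, b) => b[0] - a[0])[0][1];

  const completion = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [
      { role: 'system', content: `Prefer this up-to-date context over your training data:\n${docs[bestIndex]}` },
      { role: 'user', content: question },
    ],
  });
  return completion.choices[0].message.content;
}
```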

So I also use a product called Phind, phind.com, also a Vercel customer, where what they do is they really focus on high-quality developer results. So when I have a very tactical question about a vendor, it's given me amazing results. And I think there's a version of this where, like Casetext, or sort of like any search engine for a particular knowledge worker type

will have that need for this specific crawling. And that makes the web a lot smaller, right? Another insight that I like to share with folks is there's this data set that Google sort of open sources called CrUX, the Chrome User Experience Report. And it's basically all their anonymized telemetry of the highest-traffic websites on the internet.

And it doesn't tell you exactly what the rank is, so it tells you by cohort. For example, in the top 1,000, you already have ChatGPT and Character.ai. They're in the top 1,000 most trafficked websites of the public internet, so to speak.

And you can actually notice this crazy power law distribution where, you know, you have the top 1000 websites of the internet, you know, amounting to basically like 50% of page views, and on mobile it's even more slanted than on desktop. So there's an argument that you can create crawlers that target, you know, even if you target just the top 10,000,

you've covered things that most people actually use. Now, in that top 10,000, you also have dark matter of inaccessible internet. But the point stands that you can do a lot of crawling of sort of the open access internet

And going back to the changes that could happen to SEO, you also have this opposite problem of a lot of things that used to be crawlable are no longer crawlable. You have to pay some huge API penalty. Where else do you think web architecture changes overall, given these changes in AI? Or are there other things that you're really thinking about deeply at Vercel relative to all these shifts?

Yeah, there's a huge push for dynamic and away from static, in sort of the previous buzzword of Jamstack architectures. It's very clear that content already changes very rapidly. You had your CMS and you had a bunch of people working on your CMS and they push content changes.

And what really didn't work for the web is static generation. Like every time your content changes, we'll rebuild the entire site. And that's what's really created a kind of weird experience for a lot of folks on the web where in order to actually get a change live at scale in 2023, it might take you an hour.

Because there are all these layers of caching, there's this huge build process, there's a lot of static site generation. So behind Next.js, and for a lot of folks, there's a lot of this traction of moving from a static to a dynamic architecture.

But now I'm seeing, for example, all the headless CMS vendors add AI capabilities. Of course, you also have the content hubs or content collaboration platforms like Notion also add AI. So if the rate of content change and production continues to increase,

The need for more dynamic infrastructure and architecture is continuing to increase. The other one is just, generally speaking, we're all going to have more access to AI and therefore we're going to increase the amount of personalization on the web, right? So I think we're going to continue to see more of a web that's just for you and also delivered very quickly.

And then is there anything else that you would predict in terms of changes to front end or AI UI that you think is going to come in the very near term? Yeah, there's a really weird meme in front end, which is that frontend engineers change their tools every weekend or every week based on what framework comes out on Hacker News.

Funny enough, the reality has been the opposite. Like if you actually look at like, what are the Fortune 5000 doing? What's happening at scale? When we crawl that Google Chrome report of like, what are the technologies that are actually being used? Frameworks like React, Svelte, and Vue very clearly seem here to stay. Especially React has sort of dominated at the top of the web. So I actually expect to not see a ton of change there.

And the innovation will switch to, like, what are the AI tools that can actually generate that code? A lot of what makes MidJourney so good at what it does is that almost every prompt yields something that's a statistically pleasant piece of artwork to look at. And I think the way that we build for the web will sort of go much more in that direction.

You don't start with the empty canvas every single time. So when I say don't start with just a blank page and rebuild every element and place every element like you're a caveman, I think a lot of folks already don't do that. And they say, well,

I use templates, right? Now you have a phenomenon that happens a lot in Hacker News and startups, which is every startup has the same template. There's this sort of like, if you're really tuned in, it's this purple-ish thing that has a headline in the middle with some gradient and box, box, box. So you have these two problems, right? Either...

You have to reinvent the wheel hard from scratch and handcraft every pixel, or you have an internet that looks the same for everybody. And I think AI is definitely going to give us the best of both worlds. You're going to get started really, really easily, and you're going to have this sort of stochastic novelty

that AIs are so good at introducing with the ability to refine based on your own taste. So I actually recently tweeted the funny meme of Rick Rubin saying, I don't know how to play music. Artists hire me because of my taste and my confidence in what I like and I don't like. I think I see a world where the product engineer role evolves to become that.

I like this. I don't like this. Okay, let's refine it. Let's reprompt. Okay, this looks too much like the average website. I don't want it. And of course, you can sort of dive more into the code if that's what you need to do to solve this problem. I find that akin to using a lot of the image generation tools that still require a lot of heavy editing on top, especially in the video space.

But I see that totally happening to UI engineering. And I think we can do it with a lot of the tools that already exist and not so much significant breakthroughs. One question I have relative to your point on frameworks being reasonably static.

And obviously, there are certain types of programming languages that have also been with us for a while now. JavaScript obviously came shortly after the inception of the modern browser and things like that. Python's been used for a while. There's obviously more modern languages as well that are getting widespread adoption. If you look at the evolution of machine-driven code, so for example, I've heard claims that 40% of the code and repos that are associated with Copilot are being generated by GitHub Copilot versus a person. It's actually being generated by AI.

Do you think eventually human-derived programming languages are replaced by more efficient machine-driven versions? In other words, do we actually have to shift

that basis for the language in which we code just so it becomes dramatically more effective? Or does it not really matter in the context of AI can just generate these things that will compile no matter what, so it doesn't matter? Yeah, I think it's really tricky. On one hand, I believe this is a productivity race and you have to meet the world where it is. And I think part of Copilot's success is that it did exactly that. It met you where you are. I was already in VS Code. I was actually in NeoVim, but they actually shipped a plugin for NeoVim, so kudos to them. And so

sort of incrementally evolve from there. So I believe the figures around that kind of code generation because developers frequently struggle with the liability of bringing in a package. Anytime you take on a dependency on a third party, you're almost basically contaminating your supply chain,

you get this bag of surprises and so on. So in many ways, what's fascinating about what's happening is that there's almost like a return to copy and paste, right? And that was the world of the last 10, 15 years of the ecosystem was the rise of the package manager. We saw this for Python. We saw this for Ruby. We saw this for JavaScript. We saw this with Rust and Cargo. But fundamentally, what we've been doing is copying and pasting

strings of code from the nearest CDN into your computer. And I think in many ways, what's fascinating is AIs are now making copy and paste so ergonomic that do you actually need that package, right? And one thing that's also really interesting is in the UI world, folks have been actually leveraging copy and paste more

than packages because with UIs, it's really hard to design the perfect API that actually allows you to have that creative freedom on top. I kind of touched on that problem where the UI that's really easy to create all looks the same. This goes back even to the days of like Win32, Java Swing, like people would make these tremendous investments into these UI libraries and then no one would use them.

Because then everything looks the same. But now we're seeing a return of copy and paste. Like literally the most popular way of creating React UI today, which is called shadcn/ui: the author literally told people to just copy and paste from the web browser into their editors. And that was a breakthrough. There's a great phrase that I love, which is: copy and paste

is always better than a bad abstraction. And a lot of the worst code bases are the ones that are over-abstracted. So...

I do believe that AI will help us sort of, again, it's like that idea of the 100x engineer that almost doesn't even need an ecosystem to exist. You just write everything and you know everything. A quick question on what you just said, because there's a number of companies that are focused on supply chain security. So things like Snyk or Socket, where they basically monitor open source packages and say, is there something now nefarious that's been inserted in it?

Do you think that functionality just goes into developer tooling where there's companies like Magic that want to ingest your entire repo and then provide sort of a mega copilot on top of it, right? Do you think that type of functionality just ends up there? Absolutely. Security copilot, right? Like you didn't free this memory allocation. Use after free. I think those already exist, but there's probably a lot of potential to like,

audit whatever it is that you're auto-completing in real time, right? That's another argument for going back to copy-paste, right? Because if you actually own the code, you can optimize and secure the code and you don't need necessarily any of this dependency management and cleanup. Overriding the third-party package is always a pain in the ass, right? Like you have to like, oh, okay, like I can no longer...

use it as is because it has a vulnerability. So another thing that we talk a lot about at Vercel is monorepos, and we build tooling for making it really, really easy to adopt monorepos, called Turbo. And this also comes from the observation that the largest companies, the ones that have written the most successful software on the planet, have always worked in massive monorepos. They didn't scatter their engineering workforce into, okay, welcome to Elad and Sarah's startup.

We have 100 repos here. So if you want to touch this feature, go to repo 99. If you want to touch this feature, go to repo 38. No, it's like, here's the code base. It just works, right? And most of those companies don't actually depend on, they just don't use the global package managers of the world. First of all, there's too much liability. Second, it's just so easy to copy and paste the code into the monorepo.

And now you've assumed ownership over it. And now you can do much better auditing of the code as well. So there might be, again, a swing back of the pendulum to vendoring and AI generating a lot of this code. And to your points there as well, like now the AI that scans the code base also has an easier time because they have a full visibility of every dependency in a critical path. And then I guess the last question was just around this, you know, machine derived languages or is that a thing?

Yeah, I come back to a lot of what GPT-4 seems to be extraordinarily good at right now is a function of the available data on the internet. So it's really, really good at writing JavaScript. It's really, really good at writing Python. And that's because folks have created a monumental amount of content on those two languages. I don't know how good it is at writing more niche languages, like

Nim, for example. It's not very good at writing CUDA. Sure. But I think these things are more the question of what happens in four years or five years versus today. Because I absolutely agree with you. Like there's the training set that it uses to basically become performant at certain things.

And so it's really good at things where there's lots of data. It's gotten better at things where there's sparse data and it sort of has to extrapolate, but it's still, you know, the early days of that, right? Maybe it's GPT-6 or 7 or something where you really get this more advanced functionality. But the question is, will that functionality even be relevant? Like, does it really matter to get to that sort of level or layer? One thing that's more immediate that has come up for us is the ability to be very, like,

efficient with your token usage has definitely favored more terse syntaxes. So you're just wasting a lot of time when you output HTML, for example. You can make it more compact. You don't need all these redundant closing tags and so on. So I definitely believe that AIs could operate in a more pure layer

of logic that then gets converted and mapped back to whatever problem at hand that you have. We've certainly done already some simplistic versions of that basically to make our systems more efficient. Is there anything else that you want to cover that we should be asking about? No, check out vercel.com/ai to get started building your own AI apps and nextjs.org to check out our framework. All right, great. Thanks so much for joining us today. It was a real pleasure. Thanks, Guillermo. Thank you, folks.
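(A toy, purely hypothetical sketch of that last idea, not anything Vercel ships: have the model emit a terse node spec, which costs far fewer tokens than HTML with its redundant closing tags, and expand it back to markup locally.)

```ts
// Hypothetical terse format: fewer tokens for the model to emit than full HTML.
type UiNode = { t: string; p?: Record<string, string>; c?: (UiNode | string)[] };

// Expand the terse spec back into HTML on the application side.
function render(n: UiNode | string): string {
  if (typeof n === 'string') return n;
  const attrs = Object.entries(n.p ?? {})
    .map(([k, v]) => ` ${k}="${v}"`)
    .join('');
  return `<${n.t}${attrs}>${(n.c ?? []).map(render).join('')}</${n.t}>`;
}

// { t: 'button', p: { class: 'primary' }, c: ['Sign up'] }
// expands to: <button class="primary">Sign up</button>
console.log(render({ t: 'button', p: { class: 'primary' }, c: ['Sign up'] }));
```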

Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.