Welcome to Practical AI, the podcast that makes artificial intelligence practical, productive, and accessible to all. If you like this show, you will love The Changelog. It's news on Mondays, deep technical interviews on Wednesdays, and on Fridays, an awesome talk show for your weekend enjoyment. Find us by searching for The Changelog wherever you get your podcasts.
Thanks to our partners at Fly.io. Launch your AI apps in five minutes or less. Learn how at Fly.io. Welcome to another episode of the Practical AI Podcast. This is Daniel Whitenack. I'm CEO at PredictionGuard, and I'm joined as always by my co-host, Chris Benson, who is a principal AI research engineer at Lockheed Martin.
How you doing, Chris? Doing great, Daniel. How's it going today? It's going great. I was just commenting before we hopped on about how I'm feeling the emotional boost of seeing the sun again after a long Midwest winter. So...
Feeling good today and excited to chat about all things AI and code assistants and development and all of those things, because we have with us Kyle Daigle, who is COO at GitHub. Welcome, Kyle.
Thank you so much. It's so great to be here. Yeah, yeah. It's awesome to have you on. Just even in your comments about how you like to think about the practical side of AI, this is your place. So I already feel a kindred spirit.
I feel very much at home already. Yeah, yeah. Well, speaking of which, I mean, you're, of course, you know, really kind of at the center of a lot of what's going on in terms of code assistants with GitHub Copilot, of course, but you're also, I'm sure, seeing a ton of things out there. I'm wondering if you could just kind of take a 10,000-foot view for those that maybe
aren't following all of the things happening with AI code assistants and development. As of now, sitting in, what is it, March of 2025, if you're listening to this, what's kind of the state of AI code assistants, and how are people generally using them right now? Yeah, I mean...
It's so interesting to see how far I feel like we've come in such a short period of time, right? It was only a couple of years ago when ChatGPT came out, GitHub Copilot came out. And back then, the novelty was sort of like it wasn't going to disappoint you, right? For GitHub Copilot, you know, you would type some lines and it would respond with, you know, a line, two lines, a method, etc. It was going to complete your code. Very similar to, you know,
Instead of asking Google a question, I'm going to ask ChatGPT, and I can keep asking questions. I think what really locked in this enormous transformation then was finding a user experience that was simple, straightforward and didn't need much
explanation, right? Like I'm a dev, I'm writing code and it's just working there versus, you know, needing to figure out how to use a tool, figure out how it works in my workflow and kind of go through hours of onboarding.
Fast forward a couple of years, right? Not only have the models materially gotten so much better, but we've found more and more ways to kind of have that similar joyful expected user experience with code assistants. So it's not just really about writing the code in some ways, right? It's not...
about that at all right now. I think that's at the bleeding edge of what we're experiencing with code assistants, where it's much, much, much more about sitting down with a couple of dev friends and saying, hey, I have this idea for an app, but instead of pitching it to your friends, now you're pitching it
to your IDE. And the code assistant is going to jump in and help you get that next step done. So when I look back over this wave and how it went from sort of, you know, cool, but in retrospect, right, a little bit simplistic behavior of, wow, it really knows what I want to write next,
into like the next level of what it's always been like to be a developer, which is I have this idea and now I have to explain it to someone else. We keep finding ways to augment, improve and speed up
what a dev does kind of every single day. And we're at a point now where I think we're seriously starting to blur the edges of like, what is a developer? Um, I don't think we're there all the way to be very clear, but I think, you know, a year ago we were talking about that and it was like, sure. And now it's getting closer and closer to say, you know, well,
what is that distinct need? And that's only really been in a year, and about two and a half, three years from the start of this journey. And so I think the code assistant category has always been so interesting to me because it's kind of matching how we work. It's finding ways to augment and improve how we work,
not trying to teach us totally to do something completely different, which I think when we zoom maybe from 10,000 feet to 40,000 feet and we look at AI,
the best tools are the ones that are just helping us do work we're already doing. The tools that aren't the best, or are having more difficulty finding traction, in my opinion, tend to have to make the human contort to get the most power out of the AI tool. And so because we're devs, we're just kind of iterating in what we know. And that's been the power of, you know, code assistants and the growth of them, you know, over the last year or so, I think.
I'm curious. I mean, that's a great point you're making there about the changing developer experience, and it's changing so incredibly rapidly. I mean, you know, month by month there are changes in what it means to be a developer now. And I know, and I'm sure I'm speaking for a lot of people here, I keep reinventing parts of my own workflow as I'm doing stuff, because new tools become available, and what I am doing or not doing is changing constantly.
It's both amazingly wonderful, you know, given where we've been over the years, but it's also quite tumultuous. And if I stop and kind of lean back a little bit and have a cup of coffee and think about it, I'm kind of going, it's maybe a little bit scary how good it's getting and where that's going. What are your thoughts, you know, since you've talked about the developer experience explicitly and the user experience of code assistants, and they are rapidly going so far ahead?
What kind of thoughts, and I don't even go out a long way, I'm just talking about the next few months, the short term: how can we be thinking about adjusting ourselves to an ever-evolving state right now, even before we get into the specifics of the tools themselves? Yeah, yeah. I mean, you know...
I think what we've seen at GitHub by rolling out these tools is like, we'll talk to customers or I'll just talk to devs or open source maintainers, etc. And they can kind of fall on this continuum, right? This continuum of I absolutely love every AI tool. I'm going to use every single one and I'm going to try every single one.
And then you have the folks who are like, I'm never touching these things ever. They're terrible and they're going to destroy software. And then there's all the folks in the middle. And so I think the thing that I tend to tell folks is like, you know, just like in our careers, we've all had a moment where a new piece of technology comes in and I feel like
for some reason it's in at least 50% of developers' minds of like, oh, well that's just a silly thing or that's just a toy or whatever. So I'll just say for myself personally, Rubyist by nature, JavaScript took over and I'm like, oh, JavaScript, Ruby's, you know, blah, blah, blah. And so like over time you grow and you realize, oh, well,
I should really understand that and try it out. And it may not become my new go-to tool, but it would not help me or honestly, like the industry or my peers for me just to be like, I'm never going to touch JavaScript. So I think that experimentation that you were talking about, Chris, is the important thing. I see a lot of devs like try out a new tool or try out a new feature or a new library or a new model.
and then drop back to whatever their floor is, whatever the thing they're most comfortable with, the model they know, et cetera, et cetera. And I think that is the minimum, because the change is going to happen just like it's always happened, whether serverless, languages, databases, you pick it, right? And if you just don't experiment, my fear personally would be that you do kind of start to get left behind, because you don't know
how to reach out to the new tool that is actually excellent and actually helpful, and you're, you know, kind of stuck behind the eight ball learning something that you could have been learning as you go. I will say, you know, in the next few months, not even just kind of like now, I do expect way more kind of AI functionality to come outside the editor, because
if you're developing software as part of a team or as part of a company, not as a solo dev or a smaller startup, but a bigger group, we all know writing code is an important part of the job, but it's not all of your day, right? You're reviewing code, you're building out decision records or architecture diagrams or you're
debating how to roll this out. You're operating a live site, so on and so forth. I think as AI comes into those spaces to fill in the gaps more and more, again, like you're going to want to have those skills from
you know, figuring out the right way to word things when the AI or the LLM can't just figure it out on its own. Or, again, like every developer, knowing how the system is working inherently, so you can best benefit from it. So as long as you're kind of trying these things out, even if you drop back to your baseline, I think you get set up for
more productivity, and I think just kind of like more joy, when the AI can take more of those mundane tasks away from you. Again, like I think over the next couple of months, not even the next year. What do you think are, for those devs, you know, some that have jumped right in and figured out their workflow, and maybe there's devs out there that are experimenting with the tools, what do you think are those new kind of...
Everyone's got their muscle memory of how they develop the things that they like to use. What are the new muscles that need to be developed for AI-assisted coding? The most important ones that you've seen over very many use cases. Yeah, I mean, I think there's kind of two major ones.
Every developer has, like you said, come up with their own kind of practices and principles. We've all worked in systems that have linters and CI and everything that stops you from making mistakes. But there's also just a bunch of things like, I like to work in this order, it helps my brain process what's going on. You know what I mean? And so I think on a tactical level,
stating those rules, those prompt instructions, whatever, depending on which tool you're using for this, there's a different name for it. But I do think that that's something just the act of sitting down and writing out
Well, how do I work on this project? Even if you work as part of a company, you know, what do I care about? I always want to define a schema for the backend API before I implement the front end, and then I go back to the backend, or whatever the thing is for you. Writing that down and then letting the tool use that, I think, is a dual benefit, which kind of gets me to my second point.
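To make that concrete, here's a minimal sketch of what writing those rules down might look like, assuming GitHub Copilot's repository custom instructions file (a Markdown file checked in at .github/copilot-instructions.md; other assistants use different file names for the same idea, and the specific rules and paths below are purely illustrative):

```markdown
# Copilot instructions for this repository

- Define or update the backend API schema (for example, api/schema.yaml) before
  implementing or changing any frontend code that calls it.
- Keep functions small and focused; follow the existing linter and formatter config.
- Every bug fix needs a regression test under tests/ that fails without the fix.
- If the intent of a request is unclear, ask a clarifying question instead of guessing.
```

The format matters less than the act Kyle describes: writing down how you actually like to work so the tool can follow it.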
The big skill that everyone is, I think, trying to work out is what used to be called prompt engineering. And I honestly think it's just describing a problem. We use so much shorthand and we sort of skip over the details, like the hilarious meme of, you know, what the product manager said, what the engineer did, what the designer did. But that is exactly what we have to do with these tools every day. We say, go build an app that does X, Y, Z, and suddenly it comes back and it makes no sense and it's
Yeah, you go, oh, this stupid thing doesn't work, you know, and yes, sometimes it just doesn't work. But realistically sitting down and saying, well, what are the must do's of this app? You know, how do I want it to work? What do I want the flow to be? Whatever those things are, being able to clearly communicate, particularly in a written form is like crucial, right?
in this new era. And I think it's a skill that in some ways we've kind of let fall by the wayside. Like, you know, when I think back to the era in which I was a much more active dev, I think there was just so much written communication, whether that be blog posts or, GitHub's always been remote, so for us it was usually a GitHub issue, or Campfire back in the good old days, Slack these days. Just, you know, writing down what you mean.
That's a skill to bring to saying what I want this app to do. And I think that's why, when you're on Twitter or X or wherever, and you're looking at, you know, wow, how did this example get one-shotted? It's like, ask to see the instruction. That instruction was certainly not build a game, a multiplayer game that allows me to fly airplanes. Like that was not it, you know, it was much more. But with all the practice that came from describing problems for your LLM,
Being able to do that regularly, and I really think it's mainly describing problems as the models have gotten so much better. There's a little less like, how do I make it work for each model than there used to be? That's a skill that's going to serve you both.
in those tools and with your colleagues, with your manager, with your open source friends and maintainers, just cohesively, if you can do it really well. I'm curious about that. Do you think, just as a two-second follow-up, that for developers, kind of that describing-a-problem skill that you've been addressing, along with kind of the communication skills that support that,
Should we think of that as developer skills now? And maybe that is a muscle that we have that we should start exercising as well. Yeah, I think it's something that we...
like the best teams, the best companies have considered that. And I think we've kind of let a little bit of the, like, 10x developer meme take over and make communication not be as big of a deal. There's no major application, site or app that serves hundreds of millions of people or tens of millions of people where being able to communicate what's happening or what the problems are isn't core to the job of being a developer.
And if we just play out over time, you know, if AI and LLMs are going to continue to write more and more and more and more of the code, even if it never hits, you know, all of the code, whatever that ultimately means.
All that's left is collaboration. All that's left is collaborating with your peers, with LLMs, with agents, with designers, with your boss, with your client, whatever that is. And so suddenly the fact that you can write an app incredibly well, succinct, well-factored and tested, whatever, that's great. That's a great skill too. But the human factor will be, I can look you in the eye, I can read what you're writing, what you're saying, what you're looking for,
and describe that in such a way that I can benefit from all of these tools. It's going to be incredibly necessary as those more rote or highly automated tasks can be done by AI tools.
Well, friends, today's ever-changing AI landscape means your data demands more than the narrow applications and single model solutions that most companies offer. Domo's AI and data products platform is a more robust, all-in-one solution for your data. It's not just ambitious.
It's practical and adaptable, so your business can meet those new challenges with ease. With Domo, you and your team can channel AI and data into innovative uses that deliver measurable impact. And their all-in-one platform brings you trustworthy AI results without having to overhaul your entire data infrastructure. Secure AI agents that connect, prepare, and automate your workflows effectively.
helping you and your team to gain insights, receive alerts, and act with ease through guided apps tailored to your role. And
the flexibility to choose which AI models you want to use. Domo goes beyond productivity. It's designed to transform your processes, helping you make smarter and faster decisions that drive real growth. All powered by Domo's trust, flexibility, and their years of expertise in data and AI innovation. Data is hard. Domo is easy. Make smarter decisions and unlock your data's full potential with Domo. Learn more today at
AI.domo.com. Again, that's AI.domo.com. Well, Kyle, one of the things that I was thinking about the other day was there's a sort of generation of developers that are growing up sort of not having any other experience than having this sort of AI-assisted experience, both on the kind of
like educational debugging IDE side, but also of course using, you know, interesting tools, whether it be kind of vibe coding tools or other things. I was listening to the A16Z podcast and they did like a,
I think it was them, I forget exactly where, but somewhere they mentioned a survey of the latest Y Combinator cohort of companies, and they were saying like 95% of the code is AI-generated. What kind of impacts are on your mind in terms of like,
this generation of coders that are really like, this is what coding is to them. What does that mean for kind of both organizations that are hiring
developers out of that environment, but also new opportunities for people that maybe wouldn't have broken into developing cool projects or that sort of thing and now have the opportunity to. Yeah. I mean, I look back on how I personally got started coding and
It was because I wanted to build a video game. And I feel like that's not very unique, but it's one of those things where like I enjoyed playing video games. But it's still cool. Exactly. And I wanted to go build a video game. So back in the day, I went to probably Barnes and Noble and bought, you know, the red C++ book because you had to learn C++ if you wanted to write a video game. And that thing was, I don't know, 650 pages probably. You know, that was an enormous book.
And so that is a huge immediate barrier to entry to like learning because you're like, the reason I came here was to solve a problem. And, um,
If I just do 650 pages of how C++ works, I'll eventually get to build a text-based video game, probably. You know what I mean? And at GitHub with our teams in GitHub Education, I get to work with them and the team there on, well, how do we approach learning in this era in a way where we can bring that problem up front, which is essentially what vibe coding is, right? I want something in the world. I want to go build it.
I think the piece that is necessary to continue to learn is that problem solving piece. And I just want to make it accessible to you so you can bring a problem, something you want to go learn. But in the process of getting you to your destination,
we can just expose you to the ideas around why this application works this way or why there's two files, one for the front end and one for the back end or whatever. So you're kind of learning as you go.
but still focused on ultimately solving that problem that you're going after. So I don't think it's a bad thing that these startups or folks online or even me on the weekend, I'm writing an app that is just for me. It's going to have a user of one in perpetuity. I just want it to get written. I want it to just work.
But if we can help folks learn as they go, I think we'll actually create more, you know, craftspeople, in a way similar to how I always describe, you know, changing out a light switch in my house. Like if you own a home, we've all probably replaced a plug or a switch,
but there's no way we're going to go into the circuit breakers on our own. We'll probably fry ourselves. And so we call in an electrician to come do that, but I'm not an electrician. I just know how to go change the light switches, and that's what I need in order to solve my problems. That's what I think learning coding in the AI era is going to be, is that you can continue to start from this place of, well, I just want something, and that's fine. I think that's great. And it makes the idea more accessible. I want to be able to get you to that, you know, journeyperson stage of, oh,
okay, I know how this works. I understand variables. New technology came out. Oh, I want to try to play with that, et cetera. But it's possible that at some scale and speed, we're still going to rely on, you know, professional software developers in perpetuity, running these apps, building these apps, kind of, et cetera. The real thing that's interesting to me about that stat I was talking to some teammates about is I really think there's a huge opportunity in, uh,
operating the apps. And I'm a little dumbfounded that that hasn't been something that's been tackled yet. I mean, at GitHub, right, we kind of focus on like, you got to production and like, okay, great. And then you
use Sentry and PlanetScale and whatever, Azure and so on and so forth to run it. But I really think that in all of our probable life experiences as developers, the thing that bothers you is you get paged. You get an email, there's an error thing, and you're like, crap, what is this thing?
That is another place that I feel like, as vibe coding continues, once you run an app and you have thousands or tens of thousands or hundreds of thousands of users, I'm not on team, oh, well, that's when you got to bring in the serious people and they'll rewrite it the right way. I really think there's still space to just, okay, well, an error came in. The AI saw what it was. It resolved it. It wrote a test. The test passed. It deployed it to a canary or to a small version. And you just get a text message that's like, we fixed it.
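To make that loop concrete, here's a minimal sketch of the flow being described. It is purely illustrative: every helper below is a hypothetical stub standing in for a real error tracker, an LLM-generated patch, a test runner, a canary deploy, and a notification service; no specific product is implied.

```python
"""Illustrative auto-remediation loop; all helpers are hypothetical stubs."""
from dataclasses import dataclass


@dataclass
class ErrorReport:
    message: str
    stack_trace: str


def fetch_new_error() -> ErrorReport:
    # Stub: a real system would pull this from an error tracker.
    return ErrorReport(message="TypeError: price is None", stack_trace="checkout.py:42")


def propose_fix(error: ErrorReport) -> str:
    # Stub: a real system would ask an LLM for a patch plus a regression test.
    return f"# patch addressing: {error.message}"


def tests_pass(patch: str) -> bool:
    # Stub: run the test suite against the patched code.
    return True


def deploy_to_canary(patch: str) -> bool:
    # Stub: ship the patch to a small slice of traffic and watch error rates.
    return True


def notify(message: str) -> None:
    # Stub: the "we fixed it" text message.
    print(message)


def remediation_loop(max_attempts: int = 3) -> None:
    error = fetch_new_error()
    for attempt in range(1, max_attempts + 1):
        patch = propose_fix(error)
        if tests_pass(patch) and deploy_to_canary(patch):
            notify(f"We fixed it (attempt {attempt}): {error.message}")
            return
    notify(f"Could not fix it automatically, paging a human: {error.message}")


if __name__ == "__main__":
    remediation_loop()
```

The interesting design choice is the fallback at the end: the automation only hands the problem to a person after its attempts run out, which is the opposite of bringing in the serious people by default.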
That, I feel like, is the next step of this, you know, era of learning how to code, writing and deploying these apps, versus deploying them and going, oh, now I need a real, you know, a pro to come in and help me out. That makes so much sense. And are you actually seeing anyone out there, kind of early people, doing some of this stuff?
Is this in the wild more than just, you know, because we tend to think of AI in terms of writing the code. Operating the app makes perfect sense. Who's doing it? I think the issue here is it will require us all to work together. Yeah.
So, I mean, when I joined GitHub, oh my God, like nearly 12 years ago now, like I joined to work in the ecosystem on APIs and webhooks and how you connect everything with GitHub. And I really, that's where my passion lies. It's in, you know, the hub ecosystem.
You know, of like, how do we get everything connected? And so as I look at, you know, how quickly the industry has gotten so excited about MCP, the Model Context Protocol, and being able to connect tools together, I'm really hoping this hype wave continues and
drives into something valuable, which will be, if I can bring the context of my error tracker, my database, my two cloud services, my email provider, et cetera, et cetera, all together, then I believe it becomes possible for tools to work together to solve these problems.
Unfortunately, right now, each tool is attempting to solve the problem that it can see. And I do not think that's terribly valuable, right? As an end consumer, I don't want to use three AI tools to solve an error in production. I want one.
You know, I want one tool to do that, or I at least want them in some future magical state where agents all actually work together and blah, blah, blah. Then, you know, eventually that could also happen. But I have yet to see a tool that is kind of tackling this, I think, because of the like interdependency problem that a tool like that would have in this current, you know, very quick moving AI tooling state.
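For a sense of what bringing the context of my error tracker into one of these agents could look like, here's a minimal sketch of an MCP server, assuming the official MCP Python SDK and its FastMCP helper (pip install mcp). The error lookup itself is a hypothetical stub; only the server-and-tool pattern reflects the SDK.

```python
# Sketch: expose a (stubbed) error tracker as an MCP tool that any
# MCP-capable assistant or agent could call for context.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("error-tracker")


@mcp.tool()
def recent_errors(service: str, limit: int = 5) -> list[dict]:
    """Return the most recent production errors for a service (stubbed data)."""
    # A real implementation would query whatever error tracker you actually run.
    errors = [
        {"service": service, "message": "TypeError: price is None", "count": 42},
    ]
    return errors[:limit]


if __name__ == "__main__":
    # Runs over stdio so a local agent or IDE can connect to it.
    mcp.run()
```

An agent configured with this server, plus similar ones for the database, cloud services, and email provider, is the kind of single, connected experience Kyle is describing, rather than three separate AI tools each seeing only its own slice.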
Yeah, yeah. I think it's somewhat connected to my concern: the ease with which all of this can get built is great, but
the burden on the debugging side, right, is potentially kind of growing, and you have all this stuff. And then I guess it's more around, yeah, more around decision support, in terms of making good decisions based on these overwhelming pieces of information, because you built just so much stuff,
and you might not have kind of visibility and intuition around that. What is your thought, kind of, because as more code is AI-generated, there's potentially not a good intuition even on how things are interconnected, or like,
oh, this function exists, right? I didn't know that this function existed, right? I've never heard this function name. I have no context there. So, you know, what's needed from a tool standpoint to really get the proper context around that kind of decision support or whatever you want to call it for the developers in the tools that they're working in?
I think, you know, for most of the modern history of software development, I feel like most folks are working in a relatively high-level language, right? A lot of abstraction, ultimately. Most of us aren't working in C or even lower than that.
I think that in order to help us understand our code bases or our multiple code bases and multiple systems, right? Like at GitHub, there's no world in which as a developer who works on, you know, webhooks, I'm going to understand how Git systems is ultimately going to work for me. And so for me, I think the piece that I'm trying to
figure out is how can we get more kind of that higher level abstraction of how the code base is working available to me. And it probably needs to be in a way that as a human, I can like understand how that works more so than like,
you know, this class, this file, this whatever, I don't really need to understand that. I need to know that the webhook system is having an issue or this other piece isn't working or there's a bug over here where we process images and then I can kind of click down and dive in and dive in a little bit more. Because usually when you have a bug, even if you do understand the system, your goal is to figure out
what to ignore. You know, you're like, okay, well, it's not any of this stuff, it's got to be over here. And I do think that, similar to, you know, humans being good at describing a problem to the LLM, the LLM has to help us abstract up to a level where, you know,
I would draw on a whiteboard, you know, and then let me double click in and understand more deeply what's ultimately going on. Yeah, yeah, that's a great point. It reminds me of like the sort of peak microservices days and, you know, everything expanded into...
I was at a small company at the time, and I don't know how many microservices we had. And we had alerting set up, right? But then the alert would go off, and everything was dependent on everything else. So all the alerts would go off. It's either none of the alerts go off or all the alerts go off. And then you're like, well, I give up. Where do I even...
hop in here. Yeah, it seems like a big opportunity. I guess in terms of the, you know, and I want to talk about Copilot specifically here in a second, but just in terms of the IDE specifically and at a more general level,
How do you see kind of the IDE? You know, obviously people are trying various things with both what Copilot's doing and Cursor and Windsurf and all of these things, OpenHands and all of that.
How do you see that interface morphing over time? Do you see that kind of still being recognizable in a year and a half or two years, or being something kind of completely foreign, maybe, to certain people? I'm hoping that, you know, in the next, honestly, six months, a
startup, just because of the nature of how these things move, you know, can kind of show us a future state that is in some ways backwards compatible. So what I mean by that is, GitHub has had Workspace, we kind of demoed Spark. All of these are kind of, the code is stepping into the background to show me the prompts, the thinking, and like a preview of what ultimately is being built.
But right now in IDEs, all the ones you've mentioned, and generally all of them that aren't the sort of idea-to-app tools like Lovable, Bolt, v0, etc., they all...
are still staying code forward. And I think it's necessary, you know, in order to attract an audience right now. Otherwise, you kind of get pushed aside as, like, it's a fun toy, it's not really a tool that I'm going to use as a professional dev. I do think in the future, though, I'm working with the app, or the, you know, the web app, the actual, you know, iOS app or whatever. Every time I'm writing code, like, I'm writing code, I'm writing a test, and then I'm going to go and touch the app.
That last step is usually where I figure out if I'm right or not. And when something's wrong, why do I have to keep bouncing back and forth between the result, the thing I'm trying to actually build and code? And so there's a couple of tools out there now, right, that are kind of showing me the preview. And as I adapt that, like the code is changing.
And it gets to the most, maybe not the most, but one of the most interesting problems to me in this AI era, which is like the magic mirror problem. How do I continuously change a representation and have the code or the text or the readme or the spec match what I'm doing in the representation? So, yes, moving pixels is pretty easy, right? I'm going to change this position or whatever. But what if I ask it to do something completely different,
right? How do I make sure that the code always matches that? And I think there's a couple of really interesting attempts at that. But if and when models, tech specs, et cetera, get better there, then I think IDEs will broadly be, you know, the prompts, the preview, the thinking, so I can kind of correct and adapt. And then probably some way for me to, you know, click on a part of the app, and not go make it blue, which is
the demo that we all see, but instead be, well, no, no, I want this to be like a dynamic view that shows me this whole other, you know, basically another control or another view, another app or whatever, and it'll code it right there and show it to me. Then I think we'll be even faster than we think we are right now, because instead we're going and manipulating via prompting, you know, it's, okay, well, I'm going to convince the AI to go do this thing. But
It feels like we're still a couple clicks away because there's some actual hard problems to solve to let you go back and forth very, very easily because most companies are still working in code ultimately via CI, build systems, deploy, etc. So we want to make sure that everything matches up in the code base, not just in the app or the visual representation of what we're trying to build.
So, you know, as we've been talking about code assistants and where things are going and stuff, I want to get more specific for a moment because we got you here. Sure. Talk a bit about GitHub Copilot specifically, and kind of maybe as a starting point on this,
kind of talk a little bit about, you know, what the current state of GitHub Copilot is, kind of how the user experience is now and as a starting, you know, toward what tomorrow and the day after is going to look like and how you see that affecting, you know, IDEs, adoption of the technology, the whole thing going forward and kind of start a path into the future from here on that.
on that particular item. Yeah, yeah, for sure. I mean, you know, I feel like most folks are familiar with Copilot 1.0, we'll call it, right? Like everyone's like, okay, so it does code completions and cool. And, you know, in the last six months or so,
We went from the, yeah, it does code completions to, you know, now you can choose to use a variety of models, usually within a day, if not the same day of them coming out. There's chat, you know, the ability to ask these questions. Now there's agent mode available in VS Code Insiders, which allows you to have that experience of,
describing a problem, watching it do the work, asking it to do something else, working across multiple files with the context of your entire repository, not just the file that's open, and making these sort of much broader changes to your application, in the IDE still. As part of sort of the overall Copilot family, we continue to do these explorations like Workspace and Spark, where we're sort of going, like we were just talking about,
what does it mean for me to plan out what I want to build and then let Copilot, as an agent, go and figure out all the steps that need to be taken across multiple files, multiple repos, to ultimately kind of build that app? So the goal, instead of just saying, give me some lines of code or give me a whole method, is now...
starting with, well, what problem are you trying to solve? Most of our devs are working in major open source projects or big companies, or they're starting to learn, et cetera. And so we want to be able to let folks come from a problem. That could be a prompt in chat. That could be a GitHub issue. That could be a pull request that's already open and you think that there's a piece of it that's missing. We want you to be able to just
state what you're looking for, you know, and then kind of let Copilot take it from there. So we kind of shared a little bit of a, you know, a preview of that path forward, where, you know, we've all gotten bugs and we put them in our issue tracker, and it's like, not interesting, it's going to take a fair bit of time to solve, you know, or to resolve.
And kind of reposing the question, like, why not just assign that to Copilot and let it work just like a dev would work, you know, trying it out, running the test, the test failed, commenting what it thinks it got wrong, continuing to go, and then asking for a human review. That's something that, you know, again, we're trying to model after that experience of
anyone on your team versus treating it like this magical tool that's, you know, always going to get something perfectly right. Instead, just like you would explain with another dev friend, you can go in and help Copilot understand or just go, yep, that's totally right. Just change these two things and Copilot will do it and ultimately deploy. So when we're sort of looking at the code creation process, which generally happens in IDEs, I think that's a big part of it.
the part that's in some ways like more exciting for me as a dev is all the other pieces of being a dev. Like I kind of said, you know, like when I'm writing or when I'm reviewing code, um,
I'm a human being, and so I may not remember the exact, like, method signature of something, but this doesn't seem like the best way. And so to be able to work with Copilot in those moments, or to let Copilot kind of just tell me, yo, Kyle, this isn't quite right based on, you know, how I know you work, and it can show it to me and just let me accept the change. Or, in Actions and CI, why not let it
fix the failures that come through, or let me define my Actions workflow just by talking to AI versus having to go and build it myself? And so, you know, the real kind of magic, I think, of Copilot over the next year is, you know,
how can we find moments, both in creating code but also in reviewing it, building it, testing it, deploying it, and let Copilot, probably in a much more agentic fashion, you know, having a multitude of Copilot agents that can work together, use the context not just of your code but all the code in your organization, and also the tools that you use. If Copilot can reach out and get the information from them using MCP or a Copilot extension,
then suddenly it can take over the tasks that you probably didn't want to do in the first place, to be honest, you know, less so those sort of interesting, novel, I'm-building-my-business-around-this tasks. It'll help you do all those things, but at the very least, let's let it take away the kind of rote pain work that I think, you know, every dev kind of has in their backlog, but it's been sitting there for the last, you know, two years, three years or however long, it's, uh, artisanal now. And so Copilot's, you know, really, really
trying to allow you to just go from problem to app or problem to fix via these new experiences in the IDE and VS Code in particular. But also now in more IDEs, like we announced Xcode now has chat. A bunch of other editors also continue to have chat. So if you're in those environments, you can still use the power of Copilot.
And then on github.com, you'll see all those new experiences coming in, like code review, being able to use an agent to build an actual solution for you from an issue, and kind of fixing the other almost 80% of dev time inside the SDLC process that devs are working in, versus only focusing on that editor workflow.
How do you think, and I realize this is probably a complex question, but I get it posed to me a lot, so I figure you're probably the best one to answer or at least have an opinion. Oftentimes I get a lot of questions around
this side of, I mean, even in what you just described, kind of, here's an issue, generate a fix, you know, agents that can do this, especially around like the open source community and code generation. How does this kind of influence, you know, licensing and kind of the ecosystem of open source over time, from your perspective? Yeah. I mean, you know,
With Copilot and what it's doing, ultimately, that code that is being generated, whether that be generated for your business or for an open source project, we have tools in Copilot that you can basically say, hey, if this matches any public code, don't give me a match. And then it won't. It's not going to match anything from the public code base that it has access to. And so Copilot
In general, for folks that are most worried about, you know, well, where is this code coming from? Is it using code and generating code that looks like other public repos that I don't want to match on? It can do that just by setting a setting. And for some of our sort of
SKUs of Copilot, we require that to be on. You have to have that on in order to protect yourself if there's any concern around, yeah, where is this code coming from, what's the license, et cetera. I think as we continue to move forward more and more, and as we're looking at all the tools out in the market,
As developers, I think we can all kind of intuit that there's only so many novel ways to write the same exact thing. And so you'll sometimes hear, or I should say I'll sometimes hear, particularly from open source devs, going like, oh, well, Copilot won't write this for me, why won't it give me the answer? And the answer is because that loop that you're trying to build is complex enough that it triggers
us to look for a match. And because we have that blocking on, because you've turned it on or the business has, it won't give you a return. And so it really depends on the business's personal preference or the user's personal preference on whether they want that public matching to come back to you. But in general, especially as we get into agent mode and we get into the ability to create
close to an entire app, you know, or at least a very complex set of files. You know, Copilot's going to iterate and iterate and give you something that, again, doesn't match that public set if you have it turned off, but ultimately, you know, try to solve that problem for you. Every other tool has a different set of, you know, obligations like this or whether it's going to use the suggestions, et cetera. But I think at the end of the day now,
Our goal is really to make sure that everyone's empowered to use this tool. They can choose, you know,
how they want to use it and what kind of responses and suggestions they want back. And that's why we give Copilot for free to students and maintainers of very popular open source projects. And we're trying to find more ways to just make sure everyone can have the tool if they want to use it. Now, with Copilot Free, basically everyone can use at least a portion of Copilot.
And then kind of let them decide for themselves what they're most comfortable with as we keep going down this AI future of coding. As we start to wind up here, we often will ask guests kind of, you know, what we refer to as the future question kind of going forward now. But we have covered so much ground. I'm going to ask you that. And I will say that as you look into the future and you're kind of, you know, we've covered everything from
in terms of productivity with code, to the developer experience, to the GitHub Copilot product itself, and a bunch of tangential stuff.
You go wherever you want to go. Where do you think, as you are kind of finishing up for the day and you get through the crush and you have a glass of wine or maybe you're getting in bed for the night and your brain's kind of spinning in open mode, you know, where you're being creative, where does your brain go and where all this is going to go for us and what kinds of things?
might be next that we haven't already talked about? You know, what would you like to see aspirationally coming down the pike? Take us into your brain for this last question. Yeah, for sure. So, you know, if I were a good corporate citizen, I'd be pitching you on something from GitHub, but that's not the honest answer. And we're all developers in some way, and so people understand.
I think true ambient AI, one that understands me and has access to my information and what I choose to share, is amazing. It's the
thing I'm most interested in coming right now. You know, I think we've seen the power of the LLM, and I don't think we've honestly tapped into the vast majority of it. We're still broadly speaking in chat models, and that's incredibly boring to me. You know, I get it and why it's that way. But, like, I really think the next step is going to be more about
If you have all of my emails, my calendar, all the things that I'm currently sharing, that could be my purchases on Amazon. That could be, you know, access to sort of my doorbell camera and you see what I'm wearing on the way out, etc. There's all these experiences where we go to Google and we go, you know,
What's the weather today? Or we ask our assistant, like, you know, a tool at the house or whatever. Or more complex, you know, like, when's the last time, what was the last episode of Practical AI I listened to, and what was it about? Because I'm going into a podcast recording, and I want to remind them that Matt Collier is a friend of mine, and he did a great job with Sidekick, and kind of so on and so forth.
That ambient AI or that ambient intelligence where we're not like invoking an assistant, it's just telling me what I need to know when I need to know it because it has all that data about me.
I want it. I desperately, desperately want it. And I think there's a couple of really interesting attempts at this. There was Rewind AI that was a Mac app and they kind of pivoted into this Limitless tool, which is like a wearable plus all the apps that has the same idea. There's been a couple of, I won't name them, but memed versions of this thing. And that's not really kind of what I mean. I really mean the ability to finish my thought because
You have all the context that I need, and I didn't have to set up 55 integrations or IFTTT or Zapier to move all my data into a single place so that GPT-4.5 can answer it, or whatever. You know what I mean? And I don't think we're that far off. I think that
I find it incredibly interesting that, like, iOS and Apple Intelligence have been attempting to come up with what they're next up on. But I actually have some hope that they may solve this, because they haven't shipped their solutions, you know, and they kind of publicly are talking about how it may take longer than they thought. The biggest gap to this isn't LLMs. It isn't connecting all the data. It's privacy.
I don't want all of this data sitting in an arbitrary startup's cloud or wherever, you know, to do this. For as powerful as all of our laptops are, there's still limits, you know, on how much it can do and how much data it has and what models it can run, etc. I think someone that can take all the information, do it in a way that I'm personally comfortable with from a privacy perspective, both for me and for anyone that...
is inherently, you know, like getting data sent from them into this tool, you know, like if I was recording my screen right now, for example, to be able to have all that and actually help my day-to-day life in a real way, you know, and reminding me of what's coming up and helping me do those things without the personification of a, hey, Siri or hey, Alexa situation, just,
That's what I sit up thinking about at night and how to crack the privacy nut, because I think that'll be required for us to do this in a way that is both really powerful, but also I think morally correct and safe for all of us to benefit from versus accidentally slipping into an even worse dystopia by letting all this information kind of get out into the wild in a way that we don't want.
That's a great way to end it, Kyle. I also have hopes for similar things. We end on the same wavelength again. Really appreciate you joining. Thank you so much. Thank you so much for having me.
All right, that is our show for this week. If you haven't checked out our Changelog newsletter, head to changelog.com slash news. There you'll find 29 reasons, yes, 29 reasons why you should subscribe.
I'll tell you reason number 17, you might actually start looking forward to Mondays. Sounds like somebody's got a case of the Mondays. 28 more reasons are waiting for you at changelog.com slash news. Thanks again to our partners at Fly.io, to Breakmaster Cylinder for the beats, and to you for listening. That is all for now, but we'll talk to you again next time.