Welcome to another episode of the Console DevTools podcast. I'm David Mytton, CEO of Console.dev, a free weekly email digest of the best tools and beta releases for experienced developers. And I'm Jean Yang, CEO of Akita Software, the fastest and easiest way to understand your APIs.
In this episode, Jean and I speak with Steve Lee, Principal Software Engineering Manager on the PowerShell team at Microsoft. We start with what PowerShell is and why its object-based approach is interesting, then get into what it was like open sourcing a project at Microsoft back in 2016. We discuss the transition to using GitHub and what it's like managing an open source project at scale, balancing community with features, bugs, and requests from users alongside Microsoft goals.
We're keeping this to 30 minutes, so let's get started. We're here with Steve Lee. Steve, thanks for joining the Console Podcast. Of course, I'm very excited. Let's start with a brief background. Tell us a little bit about what you're currently doing and how you got here. Sure. Currently, I'm the Software Engineering Manager for PowerShell, as well as the OpenSSH port for Windows. I started at Microsoft more than 20 years ago. I actually joined in January 2000.
And I didn't join out of school. I was actually at Boeing doing Unix support for about a year. And it's actually not a super interesting story. It's kind of by chance and by luck that I ended up at Microsoft. Basically, Boeing has a history of doing layoffs, and they sent out an email saying layoffs were coming. So I didn't want to hang around. And I had some contacts at Microsoft doing the Internet Explorer for Unix project, which a lot of people probably don't know existed.
But back in those days, Internet Explorer was the dominant browser, and people still wanted to use it on... When I say Unix, I don't mean Linux. I mean stuff like AIX, Solaris, HP-UX, all the classic stuff. Wow. So I transitioned over to that team. And what I didn't know was that it was a very short-lived project. So after probably about a year, it was...
pretty much done. And so then I had to look for another team to join. And I ended up on the WMI team. So anyone who's familiar with Windows probably knows what WMI is. If you're not, you probably don't know what it is. But basically, it's an abstraction layer. So you can do all sorts of management of Windows stuff like storage, network, and compute in an object model based on CIM.
This comes out of the DMTF, the Distributed Management Task Force. So for a long time, I was actually part of that as well. One of my only formal publications is actually a DMTF specification on this thing called Physical Computer System View,
which, if you guys are interested, we can talk about. But that's all history, because CIM didn't take off as big as Microsoft thought it was going to. Then there was WS-Man, which is a remoting protocol, and WinRM, which is the Windows implementation of WS-Man. I was on those teams when those got conceived. So that was a long time ago. And then over time, there were some reorgs within Microsoft, within Windows, where the PowerShell team and the WMI team and the WinRM team all got combined, because they're all management platform pieces.
And so this probably would have been around, I want to say, 2016-ish. I'd have to actually look at my history to figure out the years.
So I was trying to think about what's the next thing for me, because I had been on WMI and WinRM and stuff like that for a while. And so there were some discussions about, hey, what's the next thing for PowerShell? Because I was not formally on the PowerShell team at that point in time. And the discussion was really, all right, if we really want PowerShell to become a bigger thing, we really need to target Linux. At this point in time, Azure was also trying to target Linux more. I think we must have had, or been in the process of having, a CEO change, right? This would have been from Ballmer to Satya Nadella.
And so in order to target Linux, you have to be open source, right? So that was the big discussion. All right, this is a Windows proprietary technology. How do we take it open source? How do we now support cross-platform? And for me, that seemed very interesting, because prior to that, I was working on completely Windows-proprietary technologies,
for the most part. And so I said, all right, that seems like a good opportunity for me. So then I became the engineering manager for that team. I was already a manager at that point in time, but this was a new thing for me, exciting. We were going to figure out how to do this.
Open source at Microsoft under Ballmer was not really something you talked about, right? So that was a big challenge. I think the only other team I'm aware of that really took the open source route within Microsoft at that point in time was .NET themselves. And of course, PowerShell is based on .NET, so we couldn't have done it if they had not been open source as well. So we were kind of charting new ground. Now, obviously, Microsoft today under Satya is much more open to open source. There's a lot more contribution. There's a lot more use of it. So things have changed a lot. But back in those days...
There was a lot of discussion with lawyers. There were a lot of code reviews of our code base to make sure what we were releasing didn't open up patents and all this stuff. So it was a very long process. But anyway, to shorten it, it was a very exciting time, because we were doing something, for me and my team, completely new.
You know, with Windows, if you want to do new development, you don't disclose it. You don't talk about it until it actually gets public. Whereas in open source, we talked about future plans before we even started, because we want to get that feedback earlier and say, hey, tell us, do you agree? Do you disagree? A lot of times people don't agree, but hopefully we still land on the right decisions. So anyway, that's kind of where I am today, doing the open source work. Of course, over time, I got more than just PowerShell. The PowerShell Gallery came to my team,
as well as the OpenSSH project. So I have a much bigger portfolio than just PowerShell today.
Cool. Yeah, thank you for that introduction, Steve. I would love to zoom out a little bit for our audience and have you just talk a little bit about what is PowerShell? How did it come to be? How is it different from the Windows command prompt? And just for a little bit of background, I actually used PowerShell during my Microsoft internships. And I was like, this is very cool. I would love to hear more about it. Absolutely. I want to clarify, I was not on the PowerShell team when it was first conceived, right? So
PowerShell has gone through multiple versions. And to be honest, I don't know the entire history of it. Although one of the original PMs, program managers, on PowerShell when it was conceived is still on my team today. He's working as a software engineer now.
So one of the big things they really wanted to address was, you know, CMD in Windows really came from the old DOS era. It was kind of there to do minimal things. And Windows, given the name, was very focused on GUIs. A large part of Windows' success is based entirely on GUIs, graphical interfaces.
And over time, people loved it. But there's only so much you can do with a GUI if you need to do a lot of fanning out to multiple servers, for example. A GUI is not necessarily tuned towards that. But also for developers, using Visual Studio is great, but sometimes you can be more productive in a console, because you can do directly what it is you need. So they needed a rethink of what the shell for Windows should look like. And if you look at classic shells like Bash and Zsh and all these things,
the pipeline primarily consists of just passing text or binary content from one command to the next. If it is text, then you have to do stuff like regular expressions using grep and whatever to piece out the parts you need. And you hope that the tool doesn't change the format of the text, because that may break your regular expression, right? So one of the fundamental differences with PowerShell is really this object pipeline, where you run what's called a cmdlet in PowerShell terms, which is an in-proc
command, and it's going to output some structured data, typically a .NET object. You can inspect it. You can use dot notation. So this should all be very familiar to anyone who does programming, which also means that there's kind of a higher learning curve for non-developers. But the idea is, you know, you would call something like Get-Service, and the service isn't returned as just text. I mean, on the console, it gets formatted as text,
but it's actually an object that comes out, and you can inspect it. You can pipeline it to another command. You can query against it. You can pipeline those objects directly into another command to enact an action, like stopping a service or something like that. So that makes it very powerful, but it does certainly add some level of complexity, because now you're dealing with objects instead of straight text. So that's fundamentally the value proposition of PowerShell. Now, there are other parts of it as well. PowerShell being built on top of .NET means you can call
basically any .NET API, right? To pick on CMD, for example, you're kind of limited by whatever executables are available to do whatever it is you need. Whereas in PowerShell, kind of like VBScript in that sense, or like Python or something like that, you can call APIs. Because it's on .NET, you can call native APIs. You can call exported C functions, for example, to get whatever it is you need. So if someone has not created a command for you, but you know that there's an API available, you can call that within your PowerShell script. That makes sense. And at what point...
do you recommend people going to, without being derogatory, a real programming language? Where does that barrier lie? If you've got almost the same kind of functionality as a programming language, where should you use scripting versus programming? So to be clear, PowerShell itself is built on what you would refer to as a real programming language. It's primarily written in C#, but there are C parts of it where we needed abstraction from the operating system. So I think the way we position PowerShell, it's really a glue language
and not intended for developing full applications. Now, I do know that there are folks in the community who have built very complex systems on PowerShell script, and we'll support them, by all means, right? But it's not intended for that purpose. The way we use it within our team is really, you know, you're trying to test out some new .NET API. It's actually much faster to write it in PowerShell script with a few lines of code than writing
C# that you would have to compile and do that work, right? So it makes it very easy to test out new things, prototyping before you commit to writing, quote unquote, proper development code. IT pros can also use scripting. I don't think it's necessarily black and white, whether you should use a script versus a compiled language; there are different cases where each makes sense. And for a lot of IT pros,
they don't want to use a compiler, because, you know, with PowerShell script, you don't have to worry about targeting different architectures or different operating systems, because we support Mac, Linux, and Windows, and different variants of Linux. And we support x86, x64, ARM64. You write your script once; it's kind of like the Java promise of write once, run anywhere, right? There are limitations, because some things just don't exist on a different system, but
it's really a trade-off of what you're trying to accomplish and how complex the piece of code is. You can certainly write complex code in PowerShell, but for our own team needs, we do write some PowerShell modules, which are collections of cmdlets. Some of those are written in PowerShell script because it's just faster to do it that way. In other cases, we say, hey, we really need to write this in C#, maybe for a performance consideration, or maybe there are other maintenance considerations because of the underlying APIs, and it's simpler to do it in C#.
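As a rough sketch of the object pipeline and glue-language use described above (using Get-Process rather than Get-Service so it runs cross-platform; the 50MB threshold is arbitrary):

```powershell
# Cmdlets emit .NET objects, so downstream commands filter and project
# with typed property names instead of regex over text.
Get-Process |
    Where-Object { $_.WorkingSet64 -gt 50MB } |    # filter on a typed property
    Sort-Object WorkingSet64 -Descending |
    Select-Object -First 5 Name, Id, WorkingSet64

# The same objects support dot notation, like any .NET object:
(Get-Process -Id $PID).ProcessName
```

On the console this still renders as a table, but what flows through the pipeline is the live System.Diagnostics.Process object, which is what makes chaining an action cmdlet further down the pipeline possible.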
Great. So Steve, I'd like to go back to the part of the conversation where you talked about open sourcing PowerShell and dig in a little bit. What was involved in that decision? Because as you mentioned, that was not the typical decision for a Microsoft product at the time. And what was that process like? That actually took a while. I don't remember the full timeline. I want to say it took us probably about a year of actual work
to get it ready to open source before we announced it. And when we announced it, it wasn't completely ready anyway. So there were a couple of things. Windows, and Microsoft generally, at that point in time were not using Git.
They were using a proprietary source control system. So one part of it was really moving everything into a Git repo, because we knew we were going to put it on GitHub. And also, we needed to educate all of our engineers on how to use Git properly, because the way Git works was very different from the proprietary system that we had within Microsoft at the time. So there's the mental aspect of how to do it right versus just trying to
stick a square peg in a round hole, that kind of situation. So it took some time to really understand the proper branching and all that, because, again, Windows did it very differently, because Windows is a massive project. So that's one aspect: just learning about the new tooling, moving off of proprietary stuff, moving to open, public stuff so that community contributors can actually contribute. If we had stayed on proprietary stuff, they couldn't use it.
So that's one aspect. Another big aspect was really the tests. So in Windows, the proprietary code base, we also had proprietary test frameworks. And we had no intention of open sourcing those things, for a couple of reasons. One is we didn't want to support them. Two, because PowerShell was a multi-year project, over time there were different frameworks developed by different people, different tests.
And so it was a tough decision at that point, but I think it was the right decision: we were going to write all new tests using an open source framework called Pester. That took a long time. And we did leverage some of the existing test knowledge and test data, but essentially, we just wrote all new tests. And I think we're better for it, because a lot of the people who write tests now are better at writing tests than the people who first wrote them, right? And it wasn't like a call that we just made. We did a whole bunch of code coverage analysis and said, all right,
we have tens of thousands, potentially hundreds of thousands, of test cases in Windows proprietary. And we wanted to see how many of these actually duplicate or overlap the same code paths. And it turned out that there was a lot of that, right? So even if we ported them, we wouldn't get a ton of value, because now we're running a lot more tests that are just covering the same area, which is going to take a lot more CPU and time and stuff like that.
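For reference, the Pester tests mentioned above look roughly like this (Pester 5 syntax; Get-Greeting is a made-up function for illustration, not from the PowerShell code base):

```powershell
# Requires the Pester module: Install-Module Pester -Scope CurrentUser
BeforeAll {
    # A trivial function under test (hypothetical):
    function Get-Greeting([string]$Name) { "Hello, $Name" }
}

Describe 'Get-Greeting' {
    It 'greets the caller by name' {
        Get-Greeting -Name 'PowerShell' | Should -Be 'Hello, PowerShell'
    }

    It 'still returns a string for an empty name' {
        Get-Greeting -Name '' | Should -BeOfType [string]
    }
}
```

Running `Invoke-Pester` against a file of such Describe/It blocks is what replaced the proprietary in-house frameworks.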
So that ended up being, I think, a worthwhile decision, but it was certainly one that gave management pause, right? It was a big effort. And of course, being proprietary code, we had to go through extensive review, not just with legal, but also within the team.
There may be some comments that developers added in there that need to be sanitized, let's say, okay? Because they didn't expect anyone outside of Microsoft to ever see it, right? Nothing terribly bad, but, you know, Microsoft wants to be compliant, and when we go open source, there are certain things that we shouldn't say. So I'll give a simple example:
stuff like master and slave were very classical computing terms. If you ever set up hard drives in the old days, you would have to set up one drive as the master drive and the other drive as the slave. It's just how things worked. In modern days, who would have thought that that was a good term to use? These kinds of things we had to reconsider. The PowerShell project is still on the master branch right now, because it's just been hard to change it to main. These are the kinds of things that we had to think about, and we spent a lot of time sanitizing that code base,
getting it ready. So that was another big effort. And then the other big effort was really that when you have people developing for Windows, they only think that this will ever run on Windows. So when we actually had to make it work on Linux and macOS,
there were a lot of decisions that were made that made sense at the time, and now we had to figure out how to make them work on non-Windows. The slashes are just an obvious example, right? On Windows, backslash is how you separate a path, and backslash is actually an escape character on non-Windows. So how do we make that work? Or, on Windows, you always have a drive letter, right? A, B, C, D, whatever the case may be. On Linux, or any Unix system, that doesn't exist. So how do we figure that out? So there were a lot of changes made
in the code to accommodate that. And some of those were actually very complex, because a lot of the fundamental decisions were based around Windows-isms that would never work elsewhere, right? So a lot of time was spent on just that as well, to make it work across platforms.
How did you do that? Do you have if-then branches for detection of the OS, or is there a translation layer? How is that actually implemented in PowerShell? That's a good question. So we actually have, I guess you could say, three different things that we use. One is there are certain APIs that only exist in Windows. So there are some P/Invokes that we do, which is a platform invoke in .NET terms,
because the API doesn't exist in C# and we have to call some underlying native API. In those cases, there's a separate PowerShell-Native repo project that does abstraction for those APIs, right? So there is a limited set of native code that we still use,
but we're not increasing it. We're just keeping it because it's needed. So that was one aspect: have that abstraction layer. We also have a bunch of #ifdefs where it doesn't make sense to compile a bunch of code that would fault or would just never run on Linux, for example, because it's Windows specific, right? And we have the opposite for Linux as well.
But there are some cases where we actually have runtime decisions, because it didn't make sense to #ifdef it out. And so it's like, hey, if I'm running on Linux, or if I'm running on macOS, I'm going to go down this other path. So that code may exist on Windows, Linux, and macOS, but it only gets run on a particular system. We try to keep the runtime checks minimal, though. Most of the time we have large #ifdefs, because there's no reason to compile code that's never going to get used on a different operating system. Yeah, that makes sense.
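At the script level, the same runtime branching is available through the built-in $IsWindows, $IsLinux, and $IsMacOS automatic variables (PowerShell 6 and later); the directory names below are illustrative:

```powershell
# Runtime platform check: the script-level analogue of the #ifdef /
# runtime-branch split described above for the engine code.
if ($IsWindows) {
    $configDir = Join-Path $env:LOCALAPPDATA 'MyTool'
}
elseif ($IsMacOS) {
    $configDir = Join-Path $HOME 'Library/Application Support/MyTool'
}
else {
    # Linux and other Unix-likes
    $configDir = Join-Path $HOME '.config/MyTool'
}

# Join-Path emits the right separator for the current platform,
# which sidesteps the backslash-vs-escape-character problem:
$settingsPath = Join-Path $configDir 'settings.json'
```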
Steve, are you able to talk at all about what the conversations in the room were like to convince people to let you open source PowerShell at Microsoft? Like, what were the pros that you gave? What were the cons they came back with? Yeah, I think the question was really, you know, if you open source PowerShell, does this put Microsoft at a competitive disadvantage or not? Right. And at that point in time, I mean, PowerShell was
used by pretty much everyone at Microsoft and by Microsoft customers, right? Because if you were doing Office stuff, you would use PowerShell to manage Exchange and things like that. If you were using Windows, you'd be using PowerShell, whether you knew it or not, right? Although CMD was still available. So I think the first decision point from a business point of view was: do we lose anything by making it more broadly available? And the answer was seemingly no, right?
And then the question was, for Azure to grow as a cloud offering, I think everyone recognized, hey, we've got to target Linux users. I don't know if anyone knows this, but at one point in time it was called Windows Azure, and they dropped the Windows from it because they didn't want to create the perception that it was only for Windows. The underlying platform was running Windows, at least at that point in time, right? So if you had a Linux VM, it was still Linux running on Windows Hyper-V. So that was the other decision. All right, if we really want to help grow Azure usage,
does PowerShell help with that or not? And if so, then how do we get Linux users to even look at PowerShell, right?
Coming from a Unix background (you remember where I came from at Boeing and stuff like that), I knew a lot of people have these classic religious wars in Unix land, right? KDE versus GNOME and things like that. Those are fun, but they're also not very productive. The learning for me there is that people stick with certain things because they know them, and they don't necessarily want to try something new. If it's better, that's fine, but it has to be significantly better for them to really consider shifting, right? So, value proposition: we were originally targeting people
we knew. Some Windows enterprises were starting to look at Linux; it makes sense for certain workloads. So how do we help Windows customers manage Linux using technology they already know, which is PowerShell, and how do we get them to do that in Azure? So that's the business proposition. It's like, all right, if we can be successful here, we could potentially bring customers who are classically on-premise, primarily Windows customers initially,
to consider, if they're looking at Linux at all, looking at Linux in Azure, right? So that was what we sold, and upper management agreed: hey, this makes sense. We're not going to lose anything by doing this other than time and effort. And so over time, you know, we did a lot of this work. We targeted stuff like Azure PowerShell so that, if you're comfortable with objects, you can use Azure PowerShell within PowerShell on Linux to manage your Azure assets effectively.
Early on, I don't know if everyone's aware of this, but before we actually made the public announcement about going cross-platform and open source, we actually talked with VMware, Google Cloud, and also AWS, and all of them were on board, because they also had Windows cmdlets, right? To get them to say, hey...
This is putting on my open source hat and not my Windows hat: hey, we want this to be supported on our platforms too. And they all came along for the ride. So our initial offering to Linux users was: you can use PowerShell, learn PowerShell, and you can manage different clouds. It's agnostic to that, and you get the benefit of objects. So over time, we continued to grow that. Obviously, we are Microsoft employees, so we do try to think Azure first, but not Azure only.
And the RFC process you have means the roadmap is kind of open as well? So yes and no. RFC meaning request for comments, right? We wanted a little bit more formal process for how the team
presents future work in terms of technical design and the scenarios that we're trying to target and stuff like that, but also to make the community, if they wanted to propose a feature, think through some of the implications of it, rather than just opening an issue and complaining about why we haven't done it yet, right? So the RFC process we have, we created a while ago. And just to be clear, a lot of the stuff that we do in open source, we're not inventing. We have no illusion
that we're the first people breaking new ground here. We always try to look at other teams. We look at .NET, for example. They've done this a lot more. They're a much bigger team. And we try to learn from them and avoid pitfalls and stuff like that. But one of the things we learned from the RFC process that we had early on was that it was too heavyweight.
It was heavy for the team to review this stuff. It was heavy for the community to write this stuff. So we actually modified it, I think, over the last two years. Now we just start with an issue or discussion on GitHub and say, hey, you have an idea. Post it there. Don't think about just what we call the rainbow, sunny, sunshine path where everything just works. Think about what happens when it doesn't work. How do you notify the user? What does that experience look like? And things like that. And then we also now have a concept of working groups,
where at least half or more of the members are community members. So now, instead of having a single committee, which I'm a member of, that makes all these high-level decisions, we have working groups that are more specialized. Say, hey, you're the working group for the engine. Here's something that came up from the community. Does it make sense? Is it consistent with what we want to do with PowerShell from an engine perspective? We also have one for cmdlets, for example. It's like, all right, is it following the best practices, or
do we need to push back and say maybe it doesn't belong as a command that's part of PowerShell, and should just be a separate module that someone publishes to the Gallery? We don't need the whole universe to be part of one project. There's a whole reason we have the Gallery: so that people can publish things and people can pick and choose.
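The Gallery workflow being described looks roughly like this (the module name and API key are placeholders, and these commands hit the network, so treat this as a sketch rather than a recipe):

```powershell
# Search the PowerShell Gallery for a module (name is illustrative):
Find-Module -Name 'SomeCommunityModule'

# Install for the current user only, then load it into the session:
Install-Module -Name 'SomeCommunityModule' -Scope CurrentUser
Import-Module -Name 'SomeCommunityModule'

# Module authors publish with an API key from powershellgallery.com:
Publish-Module -Path './SomeCommunityModule' -NuGetApiKey $myApiKey
```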
So that's kind of where we are today. And I think it works reasonably well. The thing to keep in mind is, I don't have a huge team managing this pretty large project. So we kind of have to pick and choose our battles and always evolve and see how we can be efficient and effective, not just within the team, but also with the community. And I'll give you another example. As my portfolio has grown and certain things have come up that take priority over just the pure PowerShell project, I've had to shift resources around.
One of the things that happened as a side effect of that is that the community didn't see us as active on our repo as we used to be when the project first started. And they got kind of upset by that, reasonably so. So one of the things we did is we started Mondays on my team as community days. So for the OpenSSH project, the folks on that project will be focusing on community stuff. Community stuff meaning responding to issues, looking at community PRs, even fixing issues and stuff like that.
There are two main benefits that I see from this model. One is everyone on the team knows, you know, we're going to dedicate Mondays to this. So everyone sees each other working on this stuff, right? And the community now has a regular cadence. They can expect to say, hey, you know, I opened this issue and you didn't respond for a long time. What does that mean? Now at least they know there's a chance they can get a response on Mondays, right? And it's worked reasonably well. That doesn't mean that team members can't look at community stuff throughout the rest of the week. It just means that there's a focus on Mondays.
Obviously, there is other stuff that we do that is not public, that eventually becomes public, but may not be open source. There's other stuff that we do within Azure, with Azure partners, where we're not going to disclose their features because we're partnering with them. We're leaving it up to them. And so other days of the week take up this kind of work, right? So...
What does the future look like? Because PowerShell does a lot of things. You can use it on remote machines. There's this hosting API for embedding it. There's even config as code. Where does that all fit in? What's the future vision? Where are things going?
So this is where I'm going to plug: if you go to aka.ms/psblog, one thing I do try to do as part of the public roadmap stuff is publish a blog post early in the year outlining our investments for the whole calendar year. And I think I did it in January this year, so that was good. Last year, I think it took longer. Obviously, stuff I put in that blog is not a promise. It is stuff that we're trying to get to, and not everything gets done,
for various reasons. And some stuff turns out to be much more complicated, whatever. So anyway, I do have that out there, but I'm going to just summarize it. One of the things that we've been looking at for PowerShell is the recognition that not everyone is going to write a cmdlet, right? The quote unquote native way to interact with PowerShell is to write a cmdlet, so you get consistency, discoverability, all this good stuff. But there are a lot of native commands, what we call native commands, like kubectl, Docker, stuff that people use, Git as an example.
Some of these projects do have teams or communities building cmdlets around them. But I would also say there are a lot of tools where no one's ever going to do it, because it doesn't make sense, or they're so actively being developed that you're always going to be playing catch-up, right? And a cmdlet equivalent's underlying code is just going to call the native command anyway.
So a lot of the stuff we've been looking at is how do we just make PowerShell a great shell for non-cmdlets, right? I think the cmdlet case is quote-unquote solved. So the question is, how do we make sure that for anyone just using it as a shell, it just works the way they expect? Like if they go to Stack Overflow and they find an example kubectl command, if they paste it into PowerShell, is it going to work or not? And there are some cases where it's not, because the parsing works differently, and we're not going to change that, because then it breaks other scenarios.
But some of the stuff we're looking at now is, for example, how do we make it easier for non-PowerShell commands to register argument completers? Azure CLI is an example. It's also owned by Microsoft, but there's actually a large number of folks who want to use Azure CLI as their choice for interactive use of Azure within PowerShell.
Whereas a lot of Azure PowerShell users use it for writing scripts and doing automation. Although it seems like it's one or the other, a lot of people actually use both. One of the things they've been working on is an argument completer for Azure CLI within PowerShell. One of the questions they asked is, how do we register this thing? We've been working on some of that stuff.
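The existing mechanism for this is Register-ArgumentCompleter with the -Native flag; a sketch for a hypothetical native tool called mytool (the subcommand list is made up):

```powershell
# Tab completion for a native (non-PowerShell) command. PowerShell calls
# this script block when the user presses Tab after 'mytool ...'.
Register-ArgumentCompleter -Native -CommandName 'mytool' -ScriptBlock {
    param($wordToComplete, $commandAst, $cursorPosition)

    'build', 'deploy', 'status' |
        Where-Object { $_ -like "$wordToComplete*" } |
        ForEach-Object {
            [System.Management.Automation.CompletionResult]::new(
                $_,               # text inserted at the cursor
                $_,               # text shown in the completion menu
                'ParameterValue', # completion result type
                $_)               # tooltip
        }
}
```

The Azure CLI completer discussed above would plug in the same way; the open question is making this registration easier and more discoverable for tools that aren't PowerShell-aware.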
Another example that I mentioned in the blog post is that there are a lot of commands, including Docker, kubectl, things like that, that actually output JSON or have a way to output JSON. And again, JSON isn't a live object, but it is structured data. So in PowerShell today, if you are an
intermediate user, you would know, like, I'm going to run this complex command, it's going to output JSON. I can pipe this to convert from JSON and I can get an object that looks like another object in PowerShell, right? So one of the things we're looking at is, you know, how can we kind of, in some use cases where it makes sense, how do we automate that process? Like if we know the command's going to output JSON, does it make sense to just convert it automatically on behalf of the user so now they don't have to worry about inserting convert from JSON in the middle, stuff like that. So I'm hoping to have a RFC spec
It's probably not going to hit February, because we're almost done with that, but maybe next month, to outline exactly how that's intended to work, right? And the idea there is really also not to have ideas that are proprietary to PowerShell, but stuff that could be adopted by other shells, right? So that we can encourage the whole ecosystem to say: guess what, JSON makes sense in the shell. I think that battle has already been fought and won, right? I'm not a huge fan of JSON, but I can accept that.
There's tooling for JSON in every language. Let's just agree to stick with that, and we can make the whole ecosystem better by having tools emit it and having shells be able to understand it, right? So that's one of the big areas we're looking at for the 7.4 release, which is this calendar year: how to make that more natural, so that you can have commands that participate in the object aspect and not just text.
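The intermediate-user pattern Steve describes already works today and can be sketched like this (the kubectl line assumes kubectl is installed and configured; the second example needs nothing external):

```powershell
# A native command emits JSON; ConvertFrom-Json turns it into a live object.
$pods = kubectl get pods -o json | ConvertFrom-Json   # assumes kubectl is available
$pods.items | ForEach-Object { $_.metadata.name }

# The same idea with literal JSON, no external tool required:
$obj = '{"name":"web","replicas":3}' | ConvertFrom-Json
$obj.replicas   # → 3
```

The proposal Steve outlines would, in effect, make the explicit `| ConvertFrom-Json` step implicit for commands known to emit JSON.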
There's a lot of other things as well, but that's one of the big things I've been looking at. Actually, I'll mention the other big thing: everyone probably saw how Bing has ChatGPT integration. So definitely AI is top of everyone's mind, and that's something we've actually been looking at for a while. I'm not sure if everyone is aware, but even before ChatGPT, even before some other popular ones came out, like Stable Diffusion and stuff like that, we were looking at AI several years ago, before things were ready. And we actually have a plugin model. So PSReadLine is the module that we use to present the interactive experience to PowerShell users. And one thing that we did back in, I think, 7.1, which would have been probably two, three years ago, is we added a predictor plugin interface. So someone could actually build a predictor in C# and present it through PSReadLine to the user, right? And the big partner that we worked with there is Azure PowerShell, because they actually had
a team that did neural nets, model training, and stuff like that. The idea there was, as you're typing, and this is similar to what you might see in GitHub Copilot, as you're typing an Az PowerShell command and we know that you're trying to create a VM, for example, then we can predict and fill in some of the parameter values. Maybe we know that you're on the East Coast, East US, whatever, or maybe in previous lines you had defined a variable called XYZ and you want to use that. That already existed, but it was very limited.
But now we're working with some other partners to say, "Hey, what are some other interesting scenarios in the console for AI to help people be more productive?" And I know that the community... We had a demo of this at the last PowerShell community call, which we hold on the third Thursday of every month. Doug Finke is one of the community members who presented on this. Basically, he wrote a PowerShell module that works directly with ChatGPT, so within the PowerShell console, you can ask it natural language questions.
It will return results and you can accept them or not. And by the way, I'll just say: anything that comes out of AI today, you should review before you accept it and hit enter, because who knows what's going to happen. The same goes for Copilot. Copilot is going to predict code; you should always review what comes out and not just check it in. But these are nice accelerators, right? They can tell you things you didn't think about or weren't aware of, and you can decide how best to use them yourself. So these are some of the scenarios we're looking at.
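A module like the one described could be sketched with nothing more than PowerShell's built-in `Invoke-RestMethod`. The endpoint, model name, and response shape below are assumptions based on OpenAI's public chat API, not the actual module that was demoed, and running it requires your own API key in `$env:OPENAI_API_KEY`.

```powershell
# Hypothetical sketch: asking a ChatGPT-style API a question from the console.
$body = @{
    model    = 'gpt-3.5-turbo'
    messages = @(@{ role = 'user'; content = 'List files modified in the last hour' })
} | ConvertTo-Json -Depth 5

$response = Invoke-RestMethod -Uri 'https://api.openai.com/v1/chat/completions' `
    -Method Post `
    -Headers @{ Authorization = "Bearer $env:OPENAI_API_KEY" } `
    -ContentType 'application/json' `
    -Body $body

# Review the suggestion before running anything it proposes.
$response.choices[0].message.content
```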
One is how best to not just integrate AI ourselves, but how we enable partners and other third parties to participate. So we're making these plugin interfaces public, and then hopefully other companies build plugins that work within the PowerShell ecosystem. Before we wrap up then, I have two lightning questions for you. Sure. What interesting tools, could be a dev tool, are you playing around with at the moment?
Off the top of my head, I have to say, going back to ChatGPT, I've been doing experiments with that, again, just to see how it can be useful and how to integrate it so there's a more natural experience. Makes sense. And then secondly, what's your current tech setup? What hardware and software do you use every day? So my primary development machine is actually an M1 MacBook Pro.
Unfortunately, Microsoft doesn't have the budget to upgrade me to the new M2 system. And one of the reasons I have a MacBook... Actually, my prior development system was primarily an x64 MacBook Pro. And when we first started the open source, cross-platform project,
you could get Linux VMs and containers, but you couldn't virtualize macOS, at least not legally. So for us to cover that testing, it made sense for some team members to have MacBooks, because then we could actually develop on it, test it, and stuff like that. So I ended up with one of those MacBooks and I've been using it. And for the record, I still have a Windows Surface Pro X I also use at home, so I can do development on both, and I can use Windows Subsystem for Linux to do the Linux stuff, right? So I do have all these machines available to myself, which I benefit from, but...
My main machine is a MacBook Pro that I do most of my work on, and then I just RDP into my Windows machine when I need to do Windows-specific stuff. And I use Visual Studio Code on both systems as my primary IDE. Cool. Excellent. Well, unfortunately, that's all we've got time for. Thanks for joining us, Steve.
Absolutely. This has been fun. I've done these kinds of things for other PowerShell-specific stuff, so it's very interesting for me to branch out and not have a very PowerShell-specific discussion. Although my part was PowerShell-focused, it was with a group that isn't.
Thanks for listening to the Console DevTools podcast. Please let us know what you think on Twitter. I'm at David Mitton, and you can follow at console.dev. Don't forget to subscribe and rate us in your podcast player. And if you're playing around with or building any interesting DevTools, please get in touch. Our email's in the show notes. See you next time.