Welcome back to the freeCodeCamp Podcast, your source for raw, unedited interviews with developers. Today, we're talking with Ty Groot. He's a backend software engineer, and he runs an open source project used by companies like Google. For the first half of the interview, we talk about backend programming languages like Go, Rust, and TypeScript. Then Ty shares tips for learning backend development and running your own developer consultancy. Ty Groot, welcome to the freeCodeCamp Podcast. Hi.
Thanks for having me. Yeah, and it's a sunny day there in Northern California, where you're based, near the software development mecca, somewhere between San Jose and San Francisco. And we're thrilled to have you here on the podcast. Yeah, thanks for having me. It's exciting. Yeah, well...
I want to start with a question. You're a backend engineer primarily. I mean, you can do everything, but like that's where you've chosen to specialize for the past few years. And a lot of people when they're doing backend development, they may be thinking, Oh, I'll just use Python. I mean, you can use Python for everything, right? Uh, why not just use Python for everything? Yeah.
Yeah, that's a great question. So I did just publish a course that talks about the relative strengths of Go and TypeScript versus Rust. Python's another one of those options, right? Python was actually one of my first backend languages, and it was...
It was a good time. I like Python better now than I did back when I used it. But yeah, I mean, there's plenty of reasons. Let's get into it. So Python is fine as like a glue language, I think. I would recommend it to a lot of people, depending on the situation. Yeah.
But Python tends to struggle with loops. So people that are learning Python for the first time, or people that are evaluating Python as a main language,
may not realize just how much slower Python is compared to a compiled language. And sometimes that trade-off is fine, right? Sometimes the ability to run code through something like a Jupyter notebook kernel, executing it top to bottom, pausing it, forking your state, replacing things as you go, making small tweaks while everything stays frozen in memory so you're working off a snapshot of the heap.
Sometimes that can be way more important than speed. So an example where that comes into mind is machine learning, right? When you're doing machine learning and you're testing something out for the first time and maybe it took you two or three weeks of crunching numbers to get to the point where you are now and you...
need to do an experiment. And in Python, you can literally set a breakpoint and pause the execution of the entire interpreter. You can dump everything to disk and restore it later using something like pickle; pickle and shelve, I think, are the two that most people use.
You can do all sorts of really nice things. The developer ergonomics on Python have gotten a lot better than they have been back when I was using it. Things like type annotations. Python's come a long way. I started using Python, I believe, back on Python 2.7.
And I only ever ended up moving away from Python because the roles that I had were primarily Go roles. Go was the new kid on the block. I'd been hearing about it for a while. I mean, relatively speaking, it wasn't that new, but, you know...
Yeah, Go is a programming language, I think, developed by Google for high concurrency and just being really performant, is my understanding. That was the primary thing that they optimized for. Is that accurate? Yeah, yeah. So Go is a compiled language. So the runtime overhead is relatively low. Now, it still is multi-threaded and...
It has a garbage collector. And when I say multi-threaded, there are both OS threads and something they have internally called goroutines, which are more like green threads. So your program may only be, quote unquote, single-threaded, but you'll still have a secondary operating system thread for running garbage collection and other things like that.
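To make that goroutine point a bit more concrete, here's a minimal Go sketch (my own illustration, not code from the interview): a program that looks single-threaded but fans work out across goroutines, while the runtime schedules them onto OS threads and runs the garbage collector alongside.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	// Each goroutine is a lightweight "green thread" scheduled by the
	// Go runtime onto OS threads; spawning thousands of them is cheap.
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			fmt.Println("hello from goroutine", id)
		}(i)
	}

	// Wait for all goroutines to finish before main exits.
	wg.Wait()
}
```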
At least, you know, the official statement is that Go was initially developed while a C++ code base was compiling. Like, the original 1.0, or the early tagged releases of Go, were developed while the developers at Google were waiting on downtime for a C++ code base to compile. I think it was Chromium or something. Okay, so let's put that in context.
So for people who have just been working with high-level scripting languages and taking for granted that they don't need a compile step before they can see whether their code works, right, they can just run it real quick. There's that XKCD comic where they're
sword fighting on office chairs. Code's compiling. Their boss comes by: why are you goofing off? Oh, the code's compiling. We can't do anything. I guess I'll just have to sword fight because the code's compiling. We're blocked. But so in the time that their code base was compiling, they were like, okay, let's switch contexts and let's go develop an entirely different programming language, because we have time while this slow compilation process unfolds. Yeah. That's what you're telling me.
Yeah, so let's talk a little bit more about that if you want. We can talk about the just-in-time compilation. Yeah. So all code is compiled before it is run, right? So Python being an interpreted language...
There's no traditional compile step where you take Python code and turn it into an executable, a .exe on Windows or a binary you'd run on macOS or Linux. But every line is being compiled, and then there's some caching and other dynamics you can get into. But every line is being compiled...
basically as it's being run, like compile and then run, compile and run, compile and run. So it seems almost faster to just run Python...
you know, script.py and have it just execute almost immediately instead of having a compile step ahead of time. But that slowness of the compilation is happening line by line as it's running. So in some sense, you could kind of say like for a small enough Python script, it's actually faster to run Python than to build like a massive Go program or C++ program. Because funny enough, you're kind of taking advantage of multi-threading almost where you've got like
a core that's busy running the Python interpreter, which is doing the just-in-time compilation step, and then another one that's actually running whatever C++ modules Python is gluing together. Because that's another big point: most of the time when you have useful Python, you're not actually just running Python. You're running some sort of precompiled C++ object that's already been built for you.
Whether that's a web browser controller using something like, I think it's Puppeteer, or Beautiful Soup, all these different libraries. Especially anything that involves encryption or TLS, anything that's hitting a web server, you're always going to be running some sort of precompiled C++ object under the hood.
Yeah. Support for this podcast comes from a grant from Wix Studio. Wix Studio provides developers tools to rapidly build websites with everything out of the box, then extend, replace, and break boundaries with code. Learn more at wixstudio.com. Support also comes from the 11,189 kind folks who support Free Code Camp through a monthly donation. You can join these chill human beings and help our charity's mission by going to donate.freecodecamp.org.
And it's worth noting that Python itself and most scripting languages are, if you go down far enough, C++ and below that, you know, C. Yep. And so what you're saying, if I can recap, this is actually new to me. I didn't realize this. So with an interpreted programming language like Python...
It's essentially going in and compiling line by line so that you can skip that big upfront compilation step. It's just-in-time compiling. So if you have a small Python script that's only a few lines long, well, only those lines need to get compiled, and you're able to take advantage of the multi-threaded nature of some of the underlying layers of abstraction.
it can actually be faster for a small Python program. Provided that it stays small. Yeah. Yeah. But the moment that your Python project gets bigger than maybe a few little files, a few little scripts that you're running, you do start to pay that penalty for just-in-time compilation, as opposed to having a compile step like you get with something like Go or something like Rust.
Yeah, and you'll see something very similar to that in JavaScript as well. So JavaScript is another interpreted language, famously so, right? It's the one that everyone uses for the browser. We've got WASM and a couple other things that are kind of emerging. Yeah, WebAssembly. Yeah, WebAssembly, sorry. Sorry, it's okay. I didn't mean to interrupt you. I just love to share, you know, whenever somebody hears a term or an acronym, they may be familiar with WebAssembly and not know what WASM is.
Yeah. I mean, WebAssembly is emerging, but even WebAssembly requires a JavaScript shim to get running. You have to inject it and start the runtime and pass data to and from the WASM, or WebAssembly, runtime. So we're never going to get away from JavaScript there, but...
So, yeah, I mean, JavaScript being another interpreted language has something similar going on, right? So V8 is the most popular JavaScript engine developed by Google. Yeah, it's the Google Chrome kind of... Yeah. How would you describe V8? Yeah, it's the JavaScript engine and runtime that most people will see it bundled into Chrome, but it's also what powers Node.js. Right. And a couple other things.
So JavaScript has a couple of different modes of running, I guess. And so when you start interpreting a large JavaScript code base, it will basically get JIT compiled, or just-in-time compiled, just like Python, top to bottom. It doesn't have any optimization passes or anything like that. But V8 is intelligent enough to notice when certain
functions or certain loops start to get called repeatedly, very quickly next to each other, like high repetition. There are some heuristics built into V8 where it realizes, hey, I've JIT compiled this three times now, five times now, fifteen times now.
And it will actually cache the compilation output of those lines. So when it finds those lines of JavaScript in the future, it will just jump directly to the pre-compiled set. So,
Going back to what I was talking about with Python, it's one of those trade-offs where for a short enough program or a program that you're only going to use once or twice, doing something that's interpreted or just-in-time compilation can actually be faster than sitting there and pre-compiling the entire thing. And the proof of that is every time you load a web page, the interactivity, if the web page is built correctly, the interactivity loads basically instantly. Right.
and it would take much longer if you were waiting for a pre-compilation step. Yeah. Okay, so I'm going to recap some of what you said. First of all, JIT compile just in time, just like just-in-time production, if you're familiar with manufacturing or something like that. You order a part, and they custom make it
just as a one-time part, as opposed to an entire production run creating hundreds of copies of that part. It's like, here's that one part that you needed. So that's where the term just-in-time was imported into software development from: manufacturing. So, just-in-time compilation.
V8, this JavaScript engine, again, maintained by Google and used in Chrome and, of course, notably used in Node.js, which the freeCodeCamp curriculum uses as the primary way to do full-stack JavaScript. And, of course, the creator of Node.js also has
Deno, which is basically a sequel project that addresses some of the shortcomings of Node. But if you've heard of either of these projects, what's really powering them is V8. And the true power of V8 is, as you said, the ability to notice repetition within a code base and just cache the output of
that function or of that loop so that it doesn't need to keep running it over and over. And that's how it can be so high performance, even with an interpreted language like JavaScript. And Node.js is famously extremely high performance, which is surprising considering that it is based on an interpreted programming language. So let's talk a little bit about
Go and Rust. And we can talk about TypeScript too, which is basically just JavaScript with types. It's type-safe JavaScript, so it dramatically reduces the likelihood of
type-based bugs, which, my understanding is, are extremely common. It's one of the most common kinds of things that can go wrong. So somebody threw up their hands and said, JavaScript should just have static types, what the heck? And so they created TypeScript. But if you know JavaScript, you can very quickly learn TypeScript. They're practically the same thing. My understanding is it's just a little bit of additional functionality built on top of JavaScript. Does that jibe with your understanding? Yeah. Yeah. So...
There's kind of a straight line that goes from your Python and your JavaScript languages, through your Go, all the way to Rust. And it's kind of a spectrum in terms of how easy is it to write, versus how performant is it going to be, and also how long is it going to take to compile? Yeah.
So one of the through lines you'll notice is that the more performant a language is at runtime, usually the higher cost it has at compilation time. So C, C++, and Rust are all going to be about the same. Now, C++...
has just about the worst compilation time for a programming language that can build a program of sufficient difficulty or sufficient complexity. And that's because C++ really likes to import a lot of header files almost recursively, like over and over and over again. So
Really, the problem with C++ compilation time there is not the code itself. It's more the header language, like the meta language, the macros that you can use up above that change the behavior of your code based on different compiler targets.
Because you'll have library A that imports library B that imports library C that also imports library A, and you have all these different import cycles. And the compiler needs to be able to handle that, just because of so many years of legacy. And so that was part of why Go came into being.
Again, the compilation time of C++ is so slow that they were like, we need something that's at least faster to compile than this. And while we're building that, why don't we try to come up with a language specification that gives us a language that's really easy to write and really fast to onboard new developers to, while maintaining that it has to be high performance at runtime. So we can't have another interpreted language.
So Go fits much closer to Rust than to TypeScript on that through line. And certainly much closer than Python.
So starting back over at TypeScript and JavaScript, like I said earlier, being interpreted languages, there's always going to be some slowdown compared to a precompiled language. Now, like I said, there's a price you pay in compilation. That's a one-time cost that you pay for execution. So if you're going to execute this program twice, three times, four times, you come out ahead.
And is this running in production? Is it a long-running service, not just a shell script that runs top to bottom and then exits? Keep in mind that that cost is going to be accruing over the runtime of the program, not just a one-shot execution through func main, the main function. So a lot of the gains that you get are actually going to be
toward server-driven applications more than command-line tools. There's a huge number of command-line tools written in Python and TypeScript, and they're all perfectly fine, right? They'll run through, and as a developer you might be running one a couple dozen times a day. I think there are some command-line utilities for accessing AWS resources, the AWS CLI; I think that was originally written in JavaScript. It might be in another language now, but...
Yeah, a lot of command-line utility programs, or CLIs, will be written in Python and JavaScript just because the speed of execution isn't anywhere near as important as the developer's ability to make quick and easy changes to the code, or to test things using, like I said earlier with the Jupyter notebook kernels, the ability to pause execution and do some more advanced debugging at runtime.
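As a rough sketch of the long-running-service case where a compiled language keeps paying off (this is an illustration of my own, not code from the interview), here's a minimal Go HTTP server whose one-time compile cost gets amortized over every request it serves:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// A long-running service: the compile cost is paid once up front,
	// then every request benefits from native-code execution speed.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a long-running Go service")
	})

	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```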
Types added something really interesting to both Python and JavaScript as interpreted languages, because neither of them actually uses types at runtime. What do I mean by that? So in a language like C++ or Go...
you've got, or especially Rust, you've got a type associated with every piece of data. And so every piece of data that you have has a shape to it that's part of that type. So if it's an int, an integer type, you know exactly how many bits or bytes in memory are going to be set aside for it. And if you define the equivalent of a struct in Python, it would be a class.
If you define a struct in Go or in Rust, you're going to be setting aside exactly the amount of space required for all the different properties within that class or within that struct. So if you've got two different numbers and then a string, you know the size of that struct, you know how big it's going to be, you know what it requires to fit it into memory, especially when you're fitting an array of them, or a slice of them in Go, into memory.
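To illustrate that in Go (a small sketch of my own, using a hypothetical Reading struct), the compiler knows the exact size of a struct at compile time, so a slice of them can be laid out as one contiguous block:

```go
package main

import (
	"fmt"
	"unsafe"
)

// Reading is a hypothetical struct: two numbers and a string header.
// Its size is fixed and known at compile time.
type Reading struct {
	Temperature float64
	Humidity    float64
	Station     string // string header: pointer + length
}

func main() {
	var r Reading
	fmt.Println("size of one Reading:", unsafe.Sizeof(r), "bytes")

	// A slice of 1000 Readings is a single contiguous block of
	// 1000 * Sizeof(Reading) bytes, laid out ahead of time.
	readings := make([]Reading, 1000)
	fmt.Println("elements:", len(readings))
}
```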
So if you've got a bunch of them, you know exactly how to lay out your memory ahead of time. Now, in JavaScript and Python, you can't really do that, right? Yeah, because of the duck typing. So, you know, a Boolean takes up the same amount of space in memory as, I don't know, some large string. Yeah. Yeah.
I mean, what would be the biggest... I just want to make sure I'm understanding you correctly, because I didn't realize it was this memory inefficient having a duck-typed language. And the reason it's called duck typing is: if it looks like a duck and quacks like a duck, then it's probably a duck. Basically, it's...
dynamic typing where you just say var equals xyz and then it stores whatever that is and it doesn't care what type it is, whether it's a Boolean or whether it's a string or an int. It just
sets that aside and then it can be changed. It can change from one data type to another when you reassign it later. That's something that you can do with dynamic typing that you can't do with static typing. And that's what I said leads to a lot of type errors, but some creative programmers might be able to use that to do things efficiently, just to have this generic container that they can put whatever data they need into for the time being and then they can overwrite it later. But that comes at a cost. Yeah.
In terms of memory, is what you're saying. Yeah. Yeah. To your point, DHH loves that about Ruby. He said, yeah, there are some projects trying to add types to Ruby, and: don't. You've got a good thing. Don't ruin it. He loves his dynamic typing. But yeah, I mean, focusing on JavaScript for a moment, essentially every variable you ever create in JavaScript is going to be a pointer, right?
It doesn't feel like a pointer, but it is. And it's a pointer to the piece of memory that's been allocated. But it's not just that. It's going to be like a struct. So you've got inside of your memory, you've got a pointer to the memory. And then you've got tags on it for when it was last used for the mark and sweep garbage collector. Because it's not just dynamic. It's also garbage collected. And then you've got...
the actual value or potentially a pointer to the value if it's another dynamic, if the value inside is also dynamically sized like an object or an array. So you end up almost like
You end up ballooning the memory that's required. Now, there's been an optimization made, I don't know how many years ago, for small integers. Small integers and booleans use Smis, which is V8's small integer representation. And so some variables don't actually take up all the...
memory that's required by a full object. Where you really get into trouble in JavaScript is when you start getting into functional programming and you're doing .map, .filter, .map, .filter. Every time you're calling one of those methods on an object in JavaScript, you're allocating more memory.
And that's something that you do all the time in JavaScript because functional programming is the ideal way to use JavaScript. Yeah, it's considered good practice to create tons and tons and tons of memory, especially memory that you're only ever going to use once in the middle of a function call chain and then never again. And that's kind of wild, actually. But, you know, the V8 team is really good at optimizing for that. Yeah.
Basically, I can almost feel a sigh of desperation from the V8 team every time a new
programming fad comes along in the JavaScript community. It's like, okay, now we want to chain a bunch of operators. Now we want... there's a proposal that came through for adding the types from the TypeScript type system into the language itself, right? And it's just like, man...
I'm very impressed by how well they've managed. I also feel really sorry for the developers, for all of the different things they have to keep in their heads in order to not just maintain specification compliance, but also be performant enough that everyone can use all these crazy patterns and idioms, like in a mobile browser on a cell phone. Yeah. Right. V8 is actually probably one of the most impressive
projects, next to the Linux kernel and, you know, Minix and a couple of the other big ones, right? It's certainly up there. Yeah. Well, I'd like to talk more about maintaining open source, because that is one of the things you do. You're an open source maintainer, and we're going to get to that in a moment. But I do want to just kind of cap off our discussion of
these different programming languages, specifically Rust, Go, and TypeScript. And maybe you could give an ideal kind of use case for each of these ones and when it would make sense to use each. So I just want to kind of step back and give you a second to think of an example project where like, oh, this is definitely a place where Rust would shine. This is a place where Go would shine. This is a place where TypeScript would shine. Just so everybody listening can walk away with kind of just like some kind of general vibe.
for where to lean when they're trying to decide which language to use for a new project. Yeah. TypeScript is a good language for large teams that are potentially not all in the same time zone, not all working in the same office.
The speed of development of JavaScript, plus the added safety of types, which will help you with autocomplete in your code editor and help you make sure that you're not introducing new bugs and that you're using function calls and signatures correctly.
Yeah.
So JavaScript is just a really easy language to write code in, comparatively to Go and Rust. Really, the power of TypeScript comes from JavaScript being easy, but then you tack on just a tiny little bit of extra difficulty on top to maintain safety, so that you don't run into runtime bugs.
That's where I would say TypeScript is really good: for large teams, and especially for teams that are writing a full-stack application that's running on both the back end and the front end, something like a web server with a browser client. So, for example, in the React world, the Next.js code base is written in TypeScript.
And the reason that's beneficial is, you're writing a React app using Next.js and you need to figure out, how does this router actually work? I'm importing next/router or next/navigation. What is that code actually doing under the hood? You can go to definition, and using the imports and the type system, you can jump into the actual definition of that function and read it, and it's all in one language.
So that's really helpful in terms of JavaScript and TypeScript. And I'm going to skip over Go for a second and go all the way to Rust. And before we do, I have a very quick question. Would you consider creating a new project just in vanilla JavaScript or would every JavaScript-related project you create going forward involve TypeScript? Sure.
If it was small enough, I might use JavaScript. So I have one or two Chrome extensions that I published in the past, and they're less than 80 lines of code. In fact, in the most recent version, I think using the .map and .filter that I was harping against ten minutes ago, I got them down to about thirteen lines of code, or eight lines of code. Wow. I'm not going to use TypeScript for something that small.
It would take longer for me to configure the build step than it would for me to just delete and rewrite the code. So in that case, I would use vanilla JavaScript. But for any sufficiently large project, especially any project that's running on the front end, I'm almost always going to be using TypeScript. Now, am I using TypeScript directly, or am I using, what's the other one, where you can define your types in the comments?
I'm forgetting the name right now. I can't remember the name either. Oh, I'm sorry. Go ahead. I was going to say, I do need to do a quick disclosure: freeCodeCamp used to use just vanilla JavaScript, and we've since migrated fully to TypeScript. And the freeCodeCamp core curriculum does have a section on TypeScript. We recommend learning it after you learn JavaScript and then just never going back. Yeah.
The word I was forgetting was JSDoc. So there's a version of TypeScript, and that's going to make some people mad that I described it that way, but there's a flavor supported by TypeScript where you define your types in the comments above and around a function instead of inline, which means you don't have to strip the types out before you run it.
Because the JavaScript runtime engines don't actually support TypeScript; they only support JavaScript. So all the types have to be stripped away, basically deleting everything from the colons onward. Syntactically, using the grammar, you can remove all the different type annotations and then just run the code as is.
So, yeah, so skipping over Go and going from TypeScript to Rust, I recommend Rust for a couple of different reasons. So...
When you have something that is of high criticality in nature, so we're talking a space shuttle launch, we're talking automotive. Safety in cars doesn't seem that serious until you realize... Yeah, physical human safety problems, or hundreds of millions of dollars in the case of sending up a satellite or something like that. Yeah, yeah, exactly. And embedded as well.
With a small exception for TinyGo, and I'll give TinyGo a shout-out later, you can't really run JavaScript or Go on embedded hardware, and Rust has support for that. The Rust ecosystem is very interesting because, if I were to draw a parallel to operating system kernels, and that might be a little too in the weeds...
macOS famously has a microkernel-style architecture, where a lot of the things that were considered core system utilities on other systems, Unix and Linux and Windows when it was first released, actually get moved out into system binaries that aren't running in kernel space; they're running in user space. And that's kind of how I view Rust a little bit. The core
runtime for Rust is extremely small. There's almost nothing, and I'm going to get in trouble so many times for saying this, there's almost nothing you can do with just the standard library in Rust. Even for async operations, you need to pull in a library. And I think the most famous one is called Tokio.
If you want async functionality in Rust, and you actually want it to work, you've got to pull in a third-party package, right? So Rust has these things called crates, which are its version of a package.
There are a lot of crates. In the course that I put together, I believe we built a command-line application that checks a weather API: it uses your location and pulls down the current temperature, humidity, and a couple of other weather-related statistics.
I think I had to pull in about seven different crates, because there's no JSON parsing built in. Just things that you take for granted: JavaScript's got a way to parse JSON, it's JavaScript Object Notation; Go's got encoding/json, which is built into the standard library. With Rust, you need to pull in a package. And so that's a double-edged sword, because you're not going to have this large
overhead from a standard library that gets built into every Rust binary.
But you do have to pull down and depend on a lot of third-party libraries. And that feels really weird coming from the Go standpoint. Having to rely on a third-party library feels a little bit strange, because in Go a lot of these core things are built into the standard library, and they feel almost blessed by the Go developers, right? So...
There are third-party JSON encoders and decoders in Go. They aren't part of the standard library. There is a standard library decoder, and that's considered the canonical way to encode and decode JSON. Now, I think Uber has an open-source project for encoding and decoding JSON very quickly.
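For reference, this is roughly what that blessed standard-library path looks like in Go; a minimal sketch of my own with a made-up Weather type, using nothing beyond encoding/json:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Weather is a hypothetical payload type for illustration.
type Weather struct {
	City     string  `json:"city"`
	TempC    float64 `json:"temp_c"`
	Humidity int     `json:"humidity"`
}

func main() {
	raw := []byte(`{"city":"San Jose","temp_c":21.5,"humidity":40}`)

	// Decode JSON using only the standard library.
	var w Weather
	if err := json.Unmarshal(raw, &w); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", w)

	// Encode it back out the same way.
	out, err := json.Marshal(w)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```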
Now, I believe Uber's library is still completely standards compliant, but the code they wrote is doing some weird hacks, and it's not considered idiomatic. So people will typically end up using the standard library version, even knowing that it's a little bit slower, at least right now. That's another thing about Go: the Go team basically has this, I don't want to call it a guarantee,
and it has been spoken, so I can't call it an unspoken promise either. It's not really a promise, but every new Go release that comes out is usually slightly faster than the one right before it. And if there's some sort of trade-off, they'll usually trade using a little bit of extra memory for higher speed. Because...
Generally, people care about speed. When you look at a data center or a cloud, Go is very data center and server driven, and data centers care more about their compute costs than their memory costs in a lot of cases. Memory has gotten very cheap. Compute has gotten more and more expensive, especially as we're competing with the NVIDIAs and the OpenAIs of the world.
I know that those are primarily GPU companies, but they're using up the same fabrication capacity. Yeah, so just to be clear, there are only so many
silicon foundries where they actually print the chips, essentially. And if most of those are being used to print ASICs and GPUs and things like that, it does ultimately reduce the total amount of compute being introduced into all these data centers around the world that is not dedicated toward massively parallel operations, like...
Like powering an LLM and doing inference and stuff like that or training or inference. So what you're saying is the Go team will usually just say, oh, memory is not the bottleneck here. It's actual performance in terms of how quickly an actual program gets executed. Am I accurately...
Yeah, and I hope... I mean, that's what I said, I hope I'm not mischaracterizing the goals of the Go team here. And I want to say they've been really good about memory as well. It's just that when there is a very clear optimization win for compute, compute will usually win over memory, is what I've noticed, at least. I could be incorrect on that, but that's what I've noticed over time.
And so that's like an organizational philosophy. They always want Go to be getting faster, but naturally, as they add more features, that's going to slow it down. So they have to figure out ways to further optimize what they have, to compensate for the additional bulk that they're adding.
Kind of like if you're trying to run a really long race and you have to keep adding additional limbs or something like muscle mass, you need to be sure that you're able to still move that muscle mass, even though you're bigger, at least the same rate as you were doing before, ideally faster. So maybe you have to compensate. Or maybe a rocket would be a better analogy. Like you're adding additional stuff to the – like you add more fuel to the rocket, but that also increases the weight, which means you have to add even more fuel, which adds even more weight.
So you have to make sure that the fuel you're adding is actually... The rocket's still able to escape. Yeah, yeah. Exactly. I mean, Go binaries are very large compared to binaries in other languages because that standard library is so big.
Because every time you build a Go binary, even if you're writing a very simple command-line application, single-threaded, top to bottom, 20 lines of code, you're still bundling the garbage collector. You're still bundling the runtime. There's this fixed-cost overhead that you have when building with Go.
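As a quick illustration of that fixed overhead (my own example, with an approximate rather than measured size): even a program this small statically links the Go runtime, scheduler, and garbage collector, so the binary `go build` produces is typically a couple of megabytes, versus tens of kilobytes for a comparable dynamically linked C program.

```go
// main.go: a few lines of code whose compiled binary still carries
// the Go runtime, scheduler, and garbage collector.
package main

import "fmt"

func main() {
	fmt.Println("hello")
}
```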
So to wrap up why I think someone might pick Rust over Go or over JavaScript: again, the biggest reason I would recommend it would be for something that's human-safety critical, or just safety critical, I suppose. Or embedded. The other good use for Rust is when you're doing a rewrite. I personally will probably never, or at least I'll try to never, pick Rust as the language that I write a program in for the first time. Because a lot of the time when you're writing code in
any language, you are learning new things about the problem space as you're writing the code. And it becomes kind of an evolution of your code base over time, as you meet your initial requirements and then realize some of them were wrong and throw them away and come up with new requirements. Or maybe you realize that two of your requirements are in conflict and you have to decide which one wins out.
Code is, is, is rarely, if ever written exactly as initially designed, there's almost always going to be some sort of change. Um, and it doesn't matter how good your specification is. Um,
Unless your specification is the code, because we have a word for a specification that is exactly, you know, word for word exactly what you want. It's called code. Yes. And people misunderstand this all the time. I'm just going to take a moment to get on my soapbox. People think like, oh, I can just tell GPT what I need and it's going to write the code. Like if you tell GPT with the sufficient specificity, you are essentially writing that code yourself.
You know, a lot of non-technical people don't understand that if you want to sufficiently articulate what should happen... I mean, code is essentially just the tersest way of articulating what should happen. It's communicating to the computer. So, sorry, I didn't mean to cut you off, I just wanted to clarify. No, no, that's exactly where I was headed. Yeah, exactly where I was headed on that. And so we have...
a history with all these different languages of putting all of the behavior, or at least most of the behavior, into the code itself, into the
top-down control flow of the code. Let me rephrase that: we have a history of having a problem that we need to solve and then encoding that, top-down, left-to-right, into a code base in terms of statements that execute. And then...
There's exceptions to that with things like Lisp and other languages that aren't top-down statements. They're whatever. But you get the point. You can still kind of read it left to right or right to left. There's exceptions to everything. But with Rust, it's kind of coming along and saying, okay, we're going to actually encode a significant chunk of the behavior of this code base into the type system. And...
When you have your behavior predefined, that's a really powerful thing because you know you can't do X or Y with this struct and you should never be able to. Well, the compiler will now prevent you from being able to do that. Yeah.
For example, let's say you're writing a password manager and you want to make sure that a particular piece of memory is never accessed more than once. That's something that you can encode into the type system, and it will never be accessed more than once. The compiler won't let you build a program that tries to access it more than once.
And then your requirements change, and you realize, oh, well, actually we need to access it once to display it and then another time to copy it to the user's clipboard. And now you have a decision. In Rust they'll call them traits, right, not types, so sorry if I'm offending Rust developers out there, but you have a decision: am I going to rewrite my trait, or my type, or am I going to
make an exception and create a brand new type just for this one use case and apply it just to this one piece of data? So you end up with a lot of churn in a Rust code base when it is your first write-through of the code base. And oftentimes it means re-architecting or rewriting all of your code, as opposed to a small piece of it.
If you write the code base through and it's functional in another language first, you already have a clear roadmap and a clear path on what it is you have to do. A subsequent rewrite in Rust can almost always give you benefits in terms of speed and clarity and even cut down on verbosity a little bit. Okay.
So I would recommend Rust for rewrites as well. Okay, so I just want to step back real quick and share. First of all, I'm learning a tremendous amount. This has been awesome. So pretty much everything between Go and Rust, and even TypeScript and Python, comes down to a developer experience trade-off versus an actual runtime performance
trade-off when you're actually running the application a whole lot of times. So if it's going to run a whole lot, then the actual development process, the time that goes into writing it, is just a fraction of the consideration. But if you're the only user and you're just writing a script that's going to be run periodically and probably not use even a fraction of your system's memory when it runs in the background, just a command-line script, and I have
tons of those that are run throughout the day to get different things done, then it doesn't really matter. And developer experience is the most important thing because you may need to go there and customize it and stuff like that. And then moving from, you know, something like Python to Golang or Go, as we've been calling it, but you'll also hear people say Golang. This is the same thing. Essentially,
you're kind of moving along that spectrum: the developer experience is slightly more constrained, but it's also way faster. And then if you move way off to the other side of the spectrum, that's where you have Rust, where everything has to be deliberated about beforehand. And that's where you said having a sufficiently written spec
from your first version of the application that basically serves as your map for how you're going to actually write this code. And you've already maybe been running it for a while in production and you already have a good idea of like, this is more or less how it's going to be for some time. So you just want to rewrite it in Rust and make it a little bit faster, essentially, like use fewer resources, maybe make it like slightly safer and stuff like that. So,
Yeah, yeah, exactly.
Yeah. And of course, with developer experience there's always going to be a subjective element. For me, the developer experience of Go is going to be easier than Python, because that's the main language that I write in. And it's going to be different for somebody else. Someone
who does nothing but write Zig all day long is going to have an easier time writing Zig than they're going to have writing a sufficiently complicated Python program. So yeah, I would say that perfectly characterizes what I was saying. Awesome. All right. So just to recap what you're saying then, for ideal projects: let's say you have
a script with an audience of one, like you're just writing something only you are going to use, or you're making some random open source library that you're going to put onto GitHub and maybe somebody will use it themselves, but who cares, you're just moving on with your life, you're not planning to actually maintain it. Python might be a great choice, or whatever tool you're most familiar with; you might write such a script in Go. And then...
Rust is really just for doing rewrites; that's the primary thing that you said. I know that the Linux kernel project,
one of the most important and most sophisticated open source projects on Earth, recently started allowing Rust. I don't know the extent to which it was allowed in, but do you know much about that? Can you talk about that? I know an awful lot about that, unfortunately. Give us the skinny. It's full of drama. That's the short version. It's full of a lot of drama. Rust
has been allowed into one subtree of the kernel for device drivers. And it's a situation where there have been... First of all, there's a very successful project called Asahi Linux. The goal of it was to get the Linux kernel ported to the M-series MacBooks. And so there's...
There's a long saga of all the drivers and everything that were written for that project were all written in Rust. I think it was just after the announcement that Rust was under consideration for a language that would be allowed into the kernel. There has been a lot of drama over actually getting the patches that have already been written and are working accepted into the kernel. And...
Not to both sides this too much, but there has been a lot of drama and unnecessary grief over the whole situation. There's maintainers that say, I don't want to maintain a code base that has two languages. I read and write both of these languages. And just the fact that I'm moving from the responsibilities of maintaining and authoring a one-language code base to a two-language code base...
sufficiently increases the difficulty of my role enough to the point at which it's not worth doing for me. So I quit. Like, I'm out. Wow. That's it? And then, yeah. I mean, lots of codebases have multiple languages. I would say it's probably pretty weird. Yeah. Sure. But a compiled codebase that has as many eyes on it as the Linux kernel with as many...
paying corporations that are using it for everything and sponsoring it and requiring high reliability. And very recently we've had a couple of things slip through all the way to release candidates that really shouldn't have. Just bad code? Or are you talking about attackers? Yeah,
bad code in the Linux kernel. I mean, there's also, very famously, the XZ Utils stuff; that's not really part of the Linux kernel, so that's not really what I'm talking about. But there's been stuff where Torvalds will receive a patch, or Greg Kroah-Hartman will receive a patch. Greg K-H is like the number two on the Linux kernel right now in terms of approving and releasing patches. And
they'll receive a patch and they'll say this should never have even gotten close to making it to me. Like it should never have gotten this far. How come it slipped past so many layers of review? Um, now to be totally fair, um,
companies like, I believe, MediaTek and Broadcom and Intel and AMD, a lot of chip makers, will submit large patches to the Linux kernel even before their CPUs are released, just to make sure that they run day one in data centers and for consumers and everyone. And a lot of the time that's very obscure and obtusely written code that
Torvalds has admitted, yeah, I merged it and I have no idea what it does. I hope it works, but it's your branch of the tree, guys. It's your driver and it's your hardware, so it had better work. So it's not as if
having obscure code merged into the kernel is something that never happens. I think a lot of people are complaining about it because they don't understand the Rust code that's being written, or they disagree with the architecture of it, and they don't want their neck to be on the line for merging a bad pull request.
Yeah. And I can imagine that. There's a lot of drama over that. I mean, device drivers, just for clarity, are for the things that you plug into your computer, essentially, whether that's a new graphics card or a mouse. Everything has to interact somehow, at least indirectly, with the kernel to actually work on a computer, right? So I can't imagine... you know, you and I are open source maintainers. I just can't imagine the
complexity that Linus Torvalds and Greg Kroah-Hartman and those people are dealing with. And it's got to be terrifying getting a PR for, I don't know, thousands of lines of C++, just like, oh yeah, I guess... Well, no, no C++ in the Linux kernel. That one's not allowed yet. Oh, it isn't? Okay, what is? No, it's just C. It's just C. Just C, no C++?
No, no, no. Linus is ideologically opposed to C++ for many reasons. Okay. What are some of the reasons? Is it memory safety, or...
I think it's more the style of the language because you have every possible feature in C++. Like C++ has every feature from every other language ever made. That's a joke. There's no borrow checker from Rust in there or anything like that, except there's a new proposal out for something called Safe C++. So even if I try to make a joke about, oh, C++ doesn't have X, it's like, well, actually, hang on. There is a way that it does implement that. You can... With C...
You basically end up arguing over variable names and indentation and maybe where you put your curly braces. There's not a whole lot of options for writing, you know,
C can only be written in a few different ways. In C++, you've got objects, you've got templates, you've got all sorts of different patterns for writing code. I probably don't even know a third of them. Yeah, I know people that are C++ devs who feel like they still have a...
fleeting grasp of the language. Yeah. Yeah. So yeah, he's very, very opposed to that in the kernel. So it's just Rust and C. And then I think there's some inline assembly in a few places as well, especially for the cryptography libraries. But, yeah.
So that's the main source of the drama: people don't want to have to learn a second language or maintain it. So essentially, my understanding is they were like, okay, we're accepting Rust contributions to the kernel, and then they realized, oh, damn... So there's a lot of work that's already been done that's waiting to get merged, and people are feeling like they got bait-and-switched. Yeah.
Yeah, a lot of it is... I wouldn't say it's just not wanting to learn the language. A lot of people are afraid of how it's going to cause the Linux kernel to change, because people will depend on it. There's also a parallel tree for the Linux kernel that has all the Rust stuff implemented in C.
Because some people may not want to compile the Rust version of the kernel. They may say, I don't have Cargo, I don't have the toolchain on my system, I'm not going to build that one. I want to stick to the toolchain that's always worked; I've got build pipelines and CI that run. So they maintain a parallel version of the kernel that does not have any Rust. Now, I'm not saying they're one-to-one on all the features, but they're supposed to be, from what I understand.
And so that has also drawn some ire because the people that are writing the Rust code will use a function or some functionality that can't be done in C. Or the people that are writing C will use something that would have to go into an unsafe block in Rust, which basically strips away all the memory protections there.
from the borrow checker in Rust, which is something you have to do sometimes and probably more often than you'd want in Rust. But ideally, you don't do that. And so there's just arguments on how things should be architected because if you're going to write the same code base twice...
Maybe you should also architect it similarly, so you don't have to context switch. You already have to context switch between languages; now you've got to context switch between the architecture used in one language versus another. But if you don't do that, then you can't use the features of one language versus the other. So I think that's more of what the argument is about:
the C maintainers and the Rust maintainers not agreeing on the way something should be done rather than just it being two languages. But there was absolutely on the mailing list, there was a fight that happened where a C maintainer left the project over not being able to
I don't want to say not being able to handle it, because that's a little bit rude. Basically, not being able to deal with having more than one language. And many of these contributors are just completely unpaid. Well, a lot of people probably do get paid by a company; a lot of open source is actually funded by big technology companies who have devs who help maintain these projects. There are also a lot of volunteers out there. But...
this person, were they just like a pure volunteer when they left? I'm not sure, actually. And they didn't leave Linux kernel development entirely. They just got removed from that group. So they moved. I don't even know if they moved to another group. I think they were maintaining multiple different groups within the Linux kernel, and then they left the device driver's tree. Okay. Yeah. Interesting. Well, we could talk about this for a while. I've learned a ton of stuff so far. I want to get into more stuff.
As I promised in the intro, which I haven't yet recorded, but I'm going to promise this in the intro that I'm recording: we'll talk a whole bunch of technical stuff, and then we'll get into more general, actionable tips for people who essentially want to get into the position that you're in. So let's start with the autobiographical background of you, Ty Groot. And yes, your last name is pronounced the same as the Guardians of the Galaxy tree-based life form, spelled the same way too. Yeah.
Well, I mean, how far back do you want to go? So, I mean, you've been writing code since... 13, I think, is when you started coding? Well, I guess my story would start when I was eight years old. When I was eight, I was really into creative writing, and I really wanted to write a story. I wanted to write a book. But it just didn't seem right to me that I had to write it down on paper, and paper could be lost, and I couldn't make a copy of it without handwriting it over and over again. Surely there must be a better way.
And my parents had an extra computer in a spare room that we had. And I basically just asked them, can I type my story into the computer? And then I can print it out on two pieces of paper instead of one so that I don't lose it. I wasn't really thinking of files or anything; this was before flash drives were as ubiquitous as they are now. It was just, I can connect this to a printer and I can print two copies of it, but I only have to write it once.
So I was writing a story. And then as children sometimes do, I decided, oh, you know what? I don't actually want this story to be about pirates. Now I want it to be about monsters or whatever. I was changing the story. And because it changed the story, I wanted to change the title because the title didn't fit anymore. So I had seen my dad do this many times. I hit the minimize button. I go to the desktop and I rename the file and I hit enter.
And then my file doesn't open anymore. And I don't know why. And Microsoft Word, what, 2003? Microsoft Word 90? I don't remember what version of Word it was. It just crashes. It doesn't open anymore. I can't open the file. I would always open my story by double-clicking on the icon, and it didn't work anymore. So what I actually did was I didn't add .doc to the end. This was before .docx. Oh, okay. I didn't add .doc to the end. And back then...
you know, Windows wasn't as handholdy as it is now, and it actually let you rename the file extension without prompting you. So the association between that file and Word was broken.
And if I tried to open it, it would open with Notepad, and you'd just see a bunch of XML and, not binary, like ones and zeros, but jumbled, high-entropy characters, like base64-encoded text, all over the screen. And I'm eight. I don't know what I'm looking at. But that's kind of what hinted to me, like, oh, there might be more to a computer than just typing letters in and seeing stuff on the screen.
Fast forward years later, I had a friend in middle school through high school. His name's Jonathan, or John, as I called him. His dad worked for a very highly technical company. I'll leave it at that. And he was the one who introduced me to things like fork bombs. Yeah, and a fork bomb is essentially a program that just keeps copying itself over and over.
Yeah, it's a program that will call itself twice. Okay. Yeah. It'll call itself twice, usually in a separate thread, so you have one, then you've got two, then four, then eight. And eventually it'll consume enough system resources that you'll run out of memory. There are ways to protect against that on modern systems now, but...
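For readers who want to see the doubling Ty describes in code, here is a deliberately capped sketch in Go. The classic form forks whole processes rather than threads; the DEPTH environment variable and the generation cap are my additions so the example stays harmless, and are not part of any real fork bomb, which has no such cap and keeps going until the OS's process and memory limits (ulimits, cgroups) stop it.

```go
// Capped illustration of a fork bomb's doubling behavior: each run starts two
// copies of itself, so the process count goes 1 -> 2 -> 4 -> 8 -> ...
// The DEPTH guard is a safety cap added for this sketch only.
package main

import (
	"os"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	depth, _ := strconv.Atoi(os.Getenv("DEPTH"))
	if depth >= 3 {
		return // safety cap: stop after three generations
	}

	// Pass the incremented depth to the children, replacing any existing value.
	env := []string{"DEPTH=" + strconv.Itoa(depth+1)}
	for _, kv := range os.Environ() {
		if !strings.HasPrefix(kv, "DEPTH=") {
			env = append(env, kv)
		}
	}

	for i := 0; i < 2; i++ {
		cmd := exec.Command(os.Args[0]) // re-execute this same binary
		cmd.Env = env
		_ = cmd.Start() // fire and forget; each child repeats the doubling
	}
}
```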
back then it was enough to crash a computer. And so that was fun. And then in high school, I had a teacher, Jason Parker. I went to Valencia High School for the first two years of high school, and he was teaching a Java class there, AP Computer Science.
That was a really great class. He was a really great teacher. He had answers to everything and he was always right. He'd answered every question we asked a hundred times before. And there was always something that you could do to finish the assignment, and then something you could do past that for extra credit. But it wasn't just extra credit. He would be really invested in how you solved the problem, as opposed to just solving the extra credit problem. And he really got us all thinking about how computers really work on the inside. And keep in mind, this is in Java, even, where you're a Java virtual machine away from bare metal. So really thinking about how objects are stored in memory and all that sort of stuff. And that was really helpful.
I ended up actually getting a job right around then as well. I had a contracting job that I had been doing since I was around 12 or 13, somewhere around there. So you were making money as a dev, essentially. $8 an hour, but yes, I was making money. Well, $8 an hour is not bad for a 13-year-old. I grew up in Oklahoma City. I didn't grow up in California. So like...
I don't think I had a job that paid more than $8 an hour until I was like 22 or something like that. All my jobs were less than $8 an hour. But cost of living was obviously a lot lower. But please carry on. Yeah. So there was a family friend who needed help with something. They were running a small contracting company, and they were just really busy, right? It was two dads that both have large families, we're talking nine to twelve kids each, and they just didn't... Sorry? Yeah. Cheaper by the Dozen. The Steve Martin movie. Yeah, yeah. They just didn't have time to do some of the more boring things. But for me, it was exciting. So one of those was, they wanted to figure out, how can we triple boot Windows, Mac, and Linux on a Mac mini?
Right. So I remember I pulled a TV into my garage and had the Mac and the keyboard and all my different flash drives, my different boot ISOs. And I was like trying to figure out, okay, so if I boot and I install Windows first and then Linux into the dual boot partitions for this Mac, that works. But if I do Linux first and then Windows, Windows stomps on top of Linux. So then I have to redo Linux again. And I think by the time I ended up handing it to them, it was like quadruple booted between –
It had Ubuntu and then it had Debian and then it had Linux Mint. And then it also had, I think, Windows 7 at the time and then macOS, whatever version it would have been at the time. Yeah.
And then they also had these small Python scripts that they would throw over to me and say, hey, can you fix this or can you make this work or whatever? So I did some of that work with them. A lot of soldering and a lot of circuit board assembly as well. And I ended up actually getting my first full-time job from them as well. So when I was a little bit older, their...
contracting company ended up getting effectively acquired by a small elevator and equipment company that was trying to do a Skunk Works, back-of-the-building IoT program before the word IoT was even coined. Skunk Works is a reference to... which contractor was it? It was...
Was it Northrop Grumman? It was one of the big contractors, but they built this tent out in their parking lot and they had people work on... I think that's where they developed the supersonic jets. Yeah, yeah. Yeah, but basically, when you say Skunk Works, just for anybody who doesn't know what that means: it's essentially when you have your normal company, where it's still business as usual, but you've got this separate thing out back where you're kind of doing your moonshot type thing.
Google had the X project, or I can't remember what it was called. X Labs, yeah. X Labs, yeah. So this is where you're doing completely experimental stuff that might come to something or might not. Yeah. Elevator company.
Yeah. So by day they're doing elevator maintenance, and then there's this back room that we had. They hired a whole bunch of college students, and then me, and then the two people that ran the contracting company I was with earlier. They hired all of us to build Internet of Things, or IoT, devices: small embedded circuit boards with onboard Wi-Fi that could connect to, you know, the Wi-Fi at a CVS or at a Walmart. We also did personal and private elevators for, like, retired basketball stars with bad knees and stuff, so maybe their home Wi-Fi.
It would track how many times the elevator goes up and down. It would track the vibrations. We'd plug directly into the PLC, or I don't know what it actually stands for anymore. We always just called it a PLC. Like the main board for the elevator. Yeah. The controller. And we would track...
Every little interaction, every metric that we possibly could, collected over time. And this is before machine learning was in the popular landscape that it is today with OpenAI and everything. We were trying to build a model to determine: when can we predict with high certainty that your elevator, or your escalator, or your loading dock, or your lift is going to break, before it does?
so that we can send a technician out for preventative maintenance and then you have zero downtime or you have 15 minutes of downtime as opposed to waiting for us to get there.
And so that was a massive undertaking. It was a four or five year project. They ended up... I don't know if I'm allowed to tell how that story ends, so I'll probably leave it there. But that was my main employment for three or four, maybe five years. And then fast forward through... While you're fast forwarding, just a couple of notes. I think he said PLC; that's programmable logic controller.
Probably. That makes sense.
And then the other thing I was going to say is it was not Northrop Grumman. It was Lockheed Martin, the other defense contractor. That's where they had the Skunk Works. Yeah, that kind of works. I just like to not have misinformation on the podcast, and if I say something I'm not sure about, I'm going to look it up and correct myself. So please carry on. So how old are you at this point?
Oh, not sure. I guess it would have been in high school. No, that job carried me through the first couple years of college. So it would have been about 20, I guess. Yeah. And keep in mind, the first few years of college, so you did college concurrently with high school. The last two years of high school, you were able to also attend community college? Yeah.
Yeah, so I guess I effectively did five and a half years of college, because the second half of my junior year and my whole senior year, I also went to community college. I went to Fullerton Community College. And I actually graduated early. I believe at the beginning of my junior year, I tested out; there's an exam you can take in California to get a diploma-equivalent degree.
The GED? Is it that? I don't think it was the GED. I think you get a GED, but the test that you take is different. I think I do have a GED from that. I have a GED too. I left school and then took the test. It was a lot easier than finishing high school. But I'm not encouraging people to do that. You can't get scholarships and stuff like that if you have a GED generally. But
Well, I didn't have that issue. I actually got the presidential scholarship at Santa Clara University. So my tuition was covered with the GED. That's great. So I would advise people to do a little bit more looking into that and make sure – because I'm sure it does disqualify you from a lot of them. That wasn't a problem for me specifically. But I also didn't just drop out of high school and then go take two gap years or something. I didn't.
Oh, okay. If you want to call living in your car and working at Taco Bell a gap year, sure. It's not like I was traveling around Europe. But yeah. No.
No, point being, I went to community college for the remaining time. So I still did a four-year education during high school; I just spent the second half of junior year and all of senior year at a community college. Which apparently was the wrong way to do it for other reasons as well, because in California, with some community colleges, they'll have a program, and my brother and sister did this, so I know from personal experience.
If you're still in high school, you're allowed to attend a community college for free, or maybe it's like $20 or something. It's really cheap or free, I can't remember. And I can't even remember if the $20 was for meal points or something. And by getting my GED and graduating, I had to pay full price, which, you know, for a California community college is all highly subsidized. It's actually still relatively affordable. I think it's still, you know...
It's a few hundred dollars. Yeah. It's not the... Which is nothing in the grand scheme of things, if you're unfamiliar with the cost of US higher education. Yes. California has something called the Master Plan, I believe is what it's called. They essentially created all these community colleges, and they're basically the best community colleges on earth. And they're also incredibly affordable. If you're looking for affordable education, my humble advice would be to go get California residency.
And send your kids to California community college, and then see if they can transition from community college to a good research institution like you attended, or a California State University school. Those are great, too. But California's got, just, chef's kiss, the best higher ed you can get, basically. And a lot of it's public.
Yeah, the community college that I went to was great. Yeah, again, especially considering the price. It was great. And they also had all sorts of community events. So one other thing that I went to is before I even started my computer science class, even in high school, there was an event that they were offering. It was led by Llewellyn Falco, if you know that name. He was one of the...
He was an engineer at Google for a while. He was one of the Google Glass ambassadors back when those first came out, their augmented reality glasses. And he did an intro class to computer science. I was actually in between chemistry and computer science at that point. I wasn't sure which path I was going to take, and that helped kind of solidify and push me towards computer science. And then high school kind of crystallized it completely.
And that was all offered through the community college. And I think that was like $5 to join; it was like a registration fee or something like that, for $5. So yeah. And then in college, after the work with that elevator company kind of shut down for me, I...
ended up getting a job with a company called CellPoint Systems, which is actually my current employer right now. And that might be confusing because we spoke ahead of the call and I mentioned a few different things. I have a lot of irons in the fire, so I've
My primary employment is with a company called CellPoint Systems. And we had a large contract with a FANG company that has a lot of love for the Go programming language. So in order to help kind of secure that contract, we promised them we would write everything in Go. That ended up being how I got introduced to Go. So I guess I'm more of a relatively recent convert, only about six or seven years, maybe eight years of Go. But...
Yeah, so we started using Go. We started using Go before there was a Go package manager. The Go module system didn't exist yet. So it was really rough when we first started. But it came out almost immediately right after I started writing Go. So that was very nice. Probably the best thing that they've added to Go is the package management is really...
really, really good. Especially because you don't have to have a registry. You don't have to have anything. You don't have to use pip, or have the npm registry, or JSR or any of the others; that's the JavaScript registry that's tied in with Deno.
You just have a git repo with a public URL. Congratulations: if you tag your git commit with a semver release like v1.1.1 or v1.1.2, you have a public Go package that can be used and imported by anybody. That, I think, is one of the strongest points of Go's ecosystem right now. So it's gotten a lot better since I started working there, but that was kind of my introduction, when I started work at CellPoint Systems.
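To make that concrete, here is a minimal sketch of the registry-free flow Ty describes. The module path, tag, and exported function are hypothetical placeholders, not a real published package; the commands in the comments are the standard Go toolchain and git ones.

```go
// A registry-free Go "release", as described above. The module path is a
// hypothetical placeholder; swap in your own public repo URL.
//
// go.mod for the published repo:
//
//	module github.com/example/fleetutil
//	go 1.22
//
// Publishing is just pushing an annotated semver tag:
//
//	git tag v1.1.2
//	git push origin v1.1.2
//
// Any other project can then pull that exact version straight from the git
// URL, with no central registry involved:
//
//	go get github.com/example/fleetutil@v1.1.2
package fleetutil

// Version reports the library release; purely illustrative.
func Version() string { return "v1.1.2" }
```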
Over time, that contract grew and expanded and grew and expanded. We grew the team. It was initially just a couple of us, and it blew out to maybe eight or ten of us at the largest point, including all the hardware and the software engineers. And then I ended up leading that team over time. The head of that team started becoming busy with other things, and I ended up taking over the team, and...
I ran that project for several years. The contract finally came to an end. We had renewed it like three times and then it's finally like, okay, there's no need for renewal anymore. That project kind of came to an end. And then the president of the company, it's a very, very small company. First of all, we're at most 20 people and usually around six or seven, I think.
The president of the company, his name is Rich, is – because it's such a small company, he's my boss. He's the owner of the company. He's my boss. But he's also kind of like my friend. And so he and I got to talking. We started having some ideas about some of the pain points that we had during this contract with the FANG company. And a large part of that was due to SaltStack. Yeah. SaltStack is a great program. And you've written an article basically like breaking up with SaltStack. Yes.
Yeah, exactly. No longer best friends with me. Yeah. Yeah. So I won't go into full details there. If you want to read about the beginnings of Garlic, he'll put a link to my blog.
I'd used Salt before. I was first introduced to SaltStack at that equipment and elevator company and kind of brought it with me all the way through to the role that I had at CellPoint. And there was just one thing after another. There was a major critical vulnerability that, in my opinion, was not addressed properly. And then we were getting crashes at runtime due to high memory usage, and a bunch of other things.
I was talking to Rich and we're like, well, it would be really nice if we could solve some of these problems, because I'm sure we're not the only ones having them. You know, SaltStack is used everywhere. I think all four of the FANG companies were using SaltStack, and then a bunch of other, you know...
Fortune 100 companies are all using it. Fortune 500 companies are all using SaltStack. So we're like, well, we can probably address this problem and, to drag it back to the beginning of our conversation, bring it from an interpreted language like Python, which is what SaltStack was written in, to a compiled language like Go. And it'll take a little longer to write it initially. It'll take a little longer to compile it at build time. Fine. But the end result is we get a single binary that can just be dropped onto a system without having to use apt
or RPM to install a whole library of packages and dependencies at runtime. That matters when you're dealing with something as important as rolling out servers and making sure that all of your developer operations are solid, so that when I hit deploy, it actually deploys, it goes out, and we have a server spun up.
The fact that you have to install 30-some-odd dependencies alongside SaltStack to get SaltStack up and running, and then do configuration of it, as opposed to just dropping a binary, dropping a systemd service file, saying systemctl start, and having it run... there's a clear advantage to using a precompiled static binary. Yeah, so it's just getting things live faster and not having to worry about those dependencies. Dependencies do break sometimes.
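As an illustration of that "drop a binary, drop a unit file" flow, here is a generic sketch. The binary name, host, and unit file contents are hypothetical placeholders, not Garlic's actual install procedure; the point is just that a static binary plus one systemd unit is the whole deployment.

```sh
# Hypothetical deployment of a static Go binary; names and paths are
# illustrative, not grlx's real install steps.
scp ./agent deploy-host:/usr/local/bin/fleet-agent

ssh deploy-host 'sudo tee /etc/systemd/system/fleet-agent.service' <<'EOF'
[Unit]
Description=Fleet configuration agent
After=network-online.target

[Service]
ExecStart=/usr/local/bin/fleet-agent
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

ssh deploy-host 'sudo systemctl daemon-reload && sudo systemctl enable --now fleet-agent'
```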
And if you're like in our experience, they broke all the time in production. And a lot of these devices that we're dealing with are on cellular connections in a remote desert. And now we're just hosed.
So we can't deal with that. We need something that's going to be stable. And you have a trade-off between, am I going to turn auto-update on on all these remote systems so that we get security updates from Ubuntu or from Canonical? Because that can bring it down too. And lose the stability? Or do we just stay vulnerable and then we only do patched updates when we know that they're tested? Yeah.
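For reference, on a stock Ubuntu or Debian image the auto-update knob being weighed here is usually the unattended-upgrades mechanism. This is a minimal sketch of the standard config that toggles it, shown only to make the trade-off concrete; treat the values as an illustration rather than a fleet-ready policy.

```conf
# /etc/apt/apt.conf.d/20auto-upgrades
# "1" enables the daily apt metadata refresh and unattended security upgrades;
# setting Unattended-Upgrade to "0" keeps you on the "stable but manually
# patched" side of the trade-off discussed above.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```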
And the problem is when you have a lot of devices that have to operate in the field and they remain connected to the network, but that network may lose connection to the internet. For example, like in a retail store environment or something like that, you're still susceptible to incoming attacks on that store network. You've got customers connecting to the wifi, but you don't have the ability to auto update or push controlled updates at a set time. So your best bet is going to be auto update then to maintain security and
but then you lose stability. So it's all these different trade-offs and stuff, and I really wanted to avoid that. So that's part of the main thing behind Garlic. Okay, so Garlic, spelled G-R-L-X, um...
Basically, the reason for it to be is you can have a binary that you can just drop onto a system. And then when you need to do some sort of security patch update or something like that, you're not actually going in and running a bunch of procedures. You're just essentially flashing that with the latest dump of the binary.
Yeah, you just have a recipe file, and the commands are streamed down from a control server to the target device or devices. It's meant to be used on a fleet, fleet being a word that's kind of taken from, you know, fleet vehicles, like cars, right? You've got a fleet of devices and you are pushing out a change to either all of them or a subset, like an A/B test group. You push out a recipe change to some set of devices, and that set can be all of them, or half of them, or just a couple of them.
Okay. That makes a ton of sense. So I can, like... I think you've created a very cogent argument for why what you're doing is so much better than the old SaltStack approach. First of all, the fact that it was written in Python: we already talked about the performance trade-offs of having it written in Python. If it's written in Go, and you're actually doing everything, then it doesn't even really matter what happens
prior to preparing that binary that you're putting out there. It's just that the binary does run, and you can be sure that the binary runs and that it's secure and everything like that before you... Well, I mean, you can't be sure that everything's secure. But to the extent that you can be sure, you can have it ready to go, and then you just deploy it out to your fleet. And as you said, some of these could be like a freaking, you know, cell phone CDMA connection or whatever. Smartwatch, yeah. Yeah, like...
devices with low power, with low connectivity, where you can't necessarily control all the different circumstances, but you can get this out there, and it's just a much more sensible way of doing it. And so it sounds like those are the main reasons
for using something like this. So this is the open source project that you maintain. And my understanding is it's essentially sponsored development. It's partially sponsored by the company that you co-own, and then you also have a few organizations using it and people that donate.
Yeah, yeah. So some of the companies that are interested in having their names on our README are in our README. We've got a couple of data centers, I think, in France. And then I think Google is using it, and a couple of other companies are using Garlic. It is maintained officially by Adatomic Inc., which is the company that I mentioned earlier that I co-founded with Rich, the owner of CellPoint Systems. Yeah.
And, yeah, so we have corporate backing in that sense. That tends to make people feel a lot better. The people that are making the decisions about switching a DevOps platform from something like SaltStack are going to want to see that Garlic isn't going to just disappear overnight or go away. The other really nice thing, in my humble opinion, about Garlic is that it's zero-BSD licensed. Right.
which is a massive sticking point for a lot of FANG companies. I know that Google has a policy that you're not allowed to use any open source software in your code that is not written by Google and that doesn't have approval from the legal team. Okay.
So GPL is out the door immediately. AGPL, out the door immediately. MIT software still requires approval within Google. Really? I always consider MIT to be basically the gold standard of openness. And we use BSD-3. Yeah. So you're using zero-BSD. Yeah, we're using zero-BSD. And I just want to be clear, they'll usually approve it if it's MIT, but it does still require approval.
But what about zero-BSD? Does that require approval? No, it does not at all. That's the reason why I ended up going with it: zero-BSD takes away the only clauses that are in the BSD license. So basically it just says,
here's some software. You can take it. You can even rename it. You can remove the attribution to all the authors. You can do anything you want with it. Here's some software, have fun with it. There are literally no restrictions. No restrictions whatsoever. Basically it just says don't sue me, so, okay, I'm not going to be held liable. That's the reason for using this: not having any sort of licensing restrictions.
Yeah, and I do maintain copyright over the word garlic and over the mascot. We've got a mascot named Clove. I'm actually wearing him on my shirt right now. So it's like a gopher with like a – just verbally describing it. It looks like a clove of garlic. Yep. It's a clove of garlic with the gopher eyeballs and nose and mouth. So –
Yeah, I maintain copyright over Garlic and over the mascot Clove, but otherwise the code base is 0BSD. And it's not really because I want people to take the code base and run with it and rename it and then go make a million dollars off of it. If you do, please send me a check. But it's more just to get over that legal hurdle and make adoption easier. Yeah.
Yeah. Because my main customers are going to be people that have fleets of devices. It's not going to be individual users, right? It's going to be companies that are making a decision on, do we go with this system or that system for tens of thousands of devices? Because that's where Garlic really shines. So if that's my audience, I need to make sure that I'm not introducing additional legal hurdles for them. Now, if you want support,
like a service contract, if you want support, then you can pay money for that. That's part of the reason why we have Adatomic: we can sign contracts through Adatomic for windows of time that you want allocated to you, response time, all that sort of stuff, if you want us on call. And then additionally, Garlic is shipping a plugin system. So
Garlic by default requires no plugins and requires no dependencies. That's like a huge selling point of it. But there does come a case where building everything into the main binary will increase the bloat of the binary unnecessarily for people that don't need all those features.
So we have in the works a WASM, or WebAssembly, plugin system that allows you to do things like connect to an LDAP server, or do more fine-grained, role-based access control rather than just users and groups, which is what's built in.
If you need to hit a certain storage backend... we support pulling files from S3, we support pulling files from FTP, from SFTP, from HTTPS, all these different backends built in. But if you need to pull from, say, a Google Drive, okay, that's going to be a plugin, because that's not something most people are doing in their deployments. So essentially you'll build plugins for hire, right?
Yeah. So it's like a bounty. They can say, hey, I would love for this to have a Google Drive integration, and then you can say, sure, I can make that happen. So we offer that as well. Essentially, offset the development cost for me. Yeah.
Yep, exactly. So those are our two monetization strategies: the service contracts for support, and then feature prioritization on the plugins. So we're mostly focusing on Wasm. I've also been doing some exploration recently; Shopify has a Golang implementation of Lua.
So, adding a Lua plugin runtime as well. But that's not promised yet, that's not really on the roadmap yet; I've just been evaluating it to see if it's worth doing. Okay. So what I'd like to do is kind of generalize out the advice that you have here, like how you've approached this. So if I can, I can make some observations. One, basically do everything to simplify adoption by, you know, the kinds of companies, like if you're doing fleet...
I can't remember the exact term you used, but essentially just like huge arrays of different devices that are maybe really small devices that don't necessarily have like a whole lot of onboard memory or onboard storage or something like that where we're
just having it be extremely efficient is, is really important. Yeah. Like I know you said the company that you work for has like cell in the name of it is like a cell phone tower type company or something related to cellular. It comes from that. Yes. Initially. Yeah. So, so like with, with like a huge kind of like fleet of cell towers and it, would that be like a potential use case? Well, like CubeSats or like, like what kind of devices are we talking about when we talk about fleet devices? Yeah. I mean, we're,
Yeah, we're talking like mini PCs and display controllers for the most part in terms of Internet of Things. But Garlic is not really geared specifically towards IoT. It will work really well with that use case because that's my main background.
But I also have a large background in doing cloud VM server deploys. And actually, just behind this door here, I don't know if I can open it, but I've got a giant baker's rack full of Intel NUCs, just rows and rows of them, that I do stress testing on for Garlic. Very cool.
Yeah, I'm running probably 35, 40 different machines just on the inside there. You said, I don't know if I can open it. Could you try it? I'd love to see it. Why don't we hold that for the end and show it later, because I don't want to have to edit it out. Stay to the end if you're watching. If you're listening to the audio version, I'm going to verbally describe everything that I'm seeing. So just to finish what I was recapping before I went off on that digression of use cases, like who uses this. Yeah.
essentially using the zero-BSD license to make it really simple for Google and other big tech companies to adopt without having to take up their lawyers' time getting clearance,
making sure that it's totally free, totally open source, that people can do whatever the heck they want with it. But the goal is not to sell the software. The goal is to sell services that support the software, in the sense of feature prioritization and actual contracted support. A lot of people do not want to just grab some tool off the shelf, have no support, and in that situation
essentially rely on their own engineering team to go and debug and fix things. They want to have the actual people who built it, who can help ensure that everything's going smoothly, and that's where service agreements come in. So what you are essentially doing is free open source software that has dimensions that make it sustainable. You're not doing this out of
the goodness of your own heart... obviously, you're probably partly doing that, that's a big part of it, but you also want this to be a sustainable long-term operation, so you've kind of baked that into the perennial business model of this open-source endeavor.
Yeah. Yeah, exactly. I mean, it's going to be a while before it's something that can be sustainable on its own. Rich and I have put a considerable chunk of money into it, even ignoring my development time completely. And Garlic's been around... how long now? I don't even know. It's been at least three years, maybe. Yeah, it's been years.
I don't know when version one was officially published. Version one might have only been published three or four years ago, or the first public commits, but it's been in the works for four or five years. And then we've spent money on marketing, T-shirts, conferences, all sorts of stuff, trying to meet with people that are in the right space for this. And I would say that's probably where we've had the most success. I go to GitHub Universe every year.
And I go and I meet with the vendors, and it's really important to be respectful of their time, because they're there to sell other people on stuff, and then I'm trying to sell to the people that are paid money to sell stuff. So it's more just a quick introduction, like, hey, can I talk to you later? I'm not going to pitch it right now. I'm going to talk to them and say, hey, I've got something that would work really well with your service. Can we set aside some time to talk about this when you're not on the floor talking to other people? Because they paid money for their ticket, too. You've got to be respectful. It costs a lot to have a booth.
Oh, yeah. To be an exhibitor costs money, and you're there to- You should know, right? Haven't you had one with Free Code Camp a couple times or no? So we always get, because we're a charity, they'll just give us- You get them for free. Yeah, but we're not trying to sell anything. We're just raising awareness of the various open source endeavors within Free Code Camp. But, okay, so that's useful, actionable kind of sales advice.
Be mindful of the fact that those people are there with a different mission, and just kind of step in, introduce yourself, and then that's a warm contact. When you follow up later, it's like, oh yeah, I'm that guy that dropped by, and here's what I was talking about. Or, can we meet and grab lunch after the exhibition hall closes or something like that? Or I guess it wouldn't be...
lunch, but, like, I guess a lot of these companies are in the Bay Area, and you're in the Bay Area, so it's not out of the ordinary for you to drive, or take BART or Caltrain or something, to go meet up with somebody in, you know, San Bruno or wherever they happen to be.
Yeah, I mean, I've actually been doing a lot of it over the phone. I've had better success with that than driving out to meet with people in person. And I think part of that is just, you know, for the types of partnerships that I've been going after, people have really seen it as more of a partnership than either one of us selling the other on something. And yeah, I mean, it's been...
it's been good for everybody on both sides, because, you know, I can look at it in terms of, okay, well, maybe I'm not going to get a service contract out of this, but I've got another user, and I've got someone else who's excited about this, because it's an exciting project. So you've got kind of a range of possible outcomes, and just getting a user is still a win. Getting a service contract would be the best potential outcome. Yeah. Yes. I mean, the...
When I get another user, because of what this software is and what its role is, a user isn't one computer running Garlic. It's going to be hundreds or thousands. So the number of compute minutes that Garlic is running on different services, I mean, ideally it's low per...
per system because I'm trying to maintain low resource usage and everything. But the number of runtime hours that Garlic gets, I see that as a nice metric to go for. And being zero BSD, I have no telemetry. So I rely on people to tell me that they're using Garlic before I know. There's a community Discord and people start pouring in and saying, hey, I've been trying it out. I like it. I don't like it. Here's what I don't like. Here's what I do like.
And then we take the advice under consideration. There's no telemetry or anything, though. Other than on the Garlic website; the docs page has a PostHog integration or something, so I do see people that are visiting the docs, but the binary itself doesn't do anything like that.
So, yeah, I mean, there's plenty of things that would get me excited. If someone was just using it, that's nice. Service contract, of course. I'm not going to say no to money. But, yeah, I really want to get Garlic to become like...
I guess household name is the wrong terminology. Yeah, because it is a very specific industrial tool. Yeah, yeah, yeah. Like, I don't know, like CUDA isn't necessarily like a household name, but it's very important. Yeah.
Yeah. So I want to take on the market share of, you know, SaltStack, Ansible, Puppet, and Chef. I want the list to be SaltStack, Ansible, Puppet, Chef, Garlic. Who cares what the order is, but I want to be on that list with everyone else. You know, even being in that room is an accomplishment in itself.
Okay, that is helpful context. I want to talk a little bit about your consultancy and how you've been training junior devs and essentially prepping them
to ramp them up to do client work. Can you talk about that a little bit? Because I think a lot of people are going to be interested. First of all, a lot of people listening to this would like to enter the market as devs, and I must say you're in the right place. This is precisely the kind of thing you should be doing with your time if you want to become a dev: learning from people like Ty Groot who've been out there in the proverbial mines, bringing things up to the surface.
Can you talk about that kind of approach that you have discovered over time and what you've learned from bringing people on, teaching them, and getting them ready to work with clients? Yeah.
Yeah. So in addition to CellPoint Systems and Adatomic Inc., I also have my own single-member LLC. It's called Taigrr LLC. I ended up filing an LLC primarily just so that I could get an office building, so I'd have a place to...
really focus and work. It's one city over from where I live, so it's not too bad of a drive, like 10 minutes. And I just wanted to have a space to... I have tons and tons of equipment here. I've got 3D printers and multiple machines, and, like I said, a baker's rack in the back with a couple dozen systems. So I filed an LLC just so I could get, like,
I don't want to call it a creative space and sound like all hoity-toity and everything, but I need some place that I can sit down and this is the place for work. And so I filed an LLC and then not long after that,
A friend of mine, his name is Scott. He and I work with a lot of the same clients. I was consulting as an individual before that, but he and I were working with a lot of the same clients. And we kind of realized there's a lot of work here that isn't necessarily boring work.
but it also doesn't require us to be firing on all cylinders just to get it done. So it's not really tedium, but it takes a long time, and we're using maybe 30% of our capacity as developers to build some of these features that we need. So we started looking for, okay, is there anybody that we know that we could hire for this role? And yeah,
the answer was, yes. We live in the Bay Area; everyone here writes code. Not really, but it feels like that sometimes. Yeah. I mean, when you walk into a cafe, everybody's got a freaking laptop out. Yeah, you're at a bar and someone's got that. That's me half the time, so I should pay attention to who's talking. But yeah, you walk into a bar and there's a split keyboard out.
It's like, okay, well, everyone here writes code or knows something about code or could potentially fill the role that we're looking for. The problem is the cost of living here is super high, and a lot of our clients aren't here. So they don't have that cost of living baked into all their expenses. They're not expecting to spend as much on a developer. They're not expecting to hire somebody at $250,000, $280,000, $300,000 a year. Right.
They want to hire somebody, or get work from somebody, at an hourly rate that would convert to something closer to 70, 80, 90. And that's not a bad wage, right? Yeah. Not outside the US. In California, it would be pretty difficult to live on that. In California, it's tough. Yeah. In the US, where I grew up, Oklahoma City, that would be perfectly respectable. You'd be...
pretty well off if you can make like 80K a year. But then if you go overseas, to Europe or to Asia or to Africa, suddenly you're basically the richest person you know with a wage like that, in some places. Yeah.
Yeah. So that kind of made me shift our focus to like, okay, we can hire somebody, but then the client wouldn't be able to afford them. And they can't, they already can't afford us. Like there's stuff that they want to do, but they're like, Oh, but we, maybe we'll do it later. Like we can't really afford to do that right now. That's the kind of situation we're working with as well.
So we're like, okay, instead of hiring somebody from the Bay Area, maybe we can expand our horizons a little bit, not look for somebody here, and instead look for somebody that's somewhere else in the United States. So the first contractor that I brought on is actually from, let's say, the Central time zone, right? South of Kansas and in the Central time zone. We'll put it like that. Okay.
And she's somebody that I knew in childhood. And she was in that same computer science class with Jason Parker that I took in high school. She was a year ahead of me, but she was in the same class. So I kind of knew that she would probably have a somewhat similar background to me on some things.
We were on a robotics team together in high school as well. So I knew she kind of had a mind for problem solving and for asking questions. And for me, I don't think I'm being original when I say this, but the most important thing is not what you know today. It's more your mind for asking questions, learning new things, and figuring out the right way to do something. I'll co-sign that. Yeah. That has been my experience as well.
Yeah. Way, way more important than how many years you've been writing a language. Because the person who is interested in learning the tooling, learning how to do something correctly, and constantly asking those questions and upskilling, is going to be able to pick up a brand-new language and be better at it than someone who's been writing the same language for five, ten years.
Because it's like a mental and personal evolution over time. You just keep upskilling and getting better. Acceleration overtakes speed. Yeah, absolutely. That doesn't actually make any sense, but that's what I mean. Like, if somebody is going way up here and you're going like this, you will eventually overtake them if you're working on it. Higher order derivatives for the win. Yes. Yeah. So...
I reached out to her and asked, hey, what are you doing these days? Where are you working? Oh, I just finished school. Oh, was it a computer science degree? No, it wasn't. I think she wouldn't mind if I say... I haven't said her name or anything. She actually graduated and finished school pursuing a nursing degree.
She had some electives that she had done for networking and database management and a couple other things because that was always an interest. But that wasn't the degree. Those were just electives, as far as I'm aware. And...
So I ended up saying, hey, well, what are your job prospects where you're at? And she talked to me a little bit about that. And I was like, okay, what if I paid you double that? And what if you could start working with me next week? I've got a client that...
has such and such budget. And I shared all the numbers with her. I was very open about what the client could afford, what the client could pay, what I would keep in the middle, all that sort of stuff. I try to do that with everybody that I bring on. I think that's good, because then you don't ever have to worry about them finding out, or, like, cutting you out. That's another thing you have to worry about when you're kind of a middleman, for lack of a better word. Yeah.
Yeah, yeah, exactly. And I've dealt with that at another company in the past, but that's a side story for later. So I said, okay, great. She was up for it, said we would give it a try. Great. And I said, okay, here's the thing. You've not written code professionally ever, and you've not –
You've not actually... this would also be her first real job, outside of whatever she might've done in high school that I don't know about. Right. So I was like, okay, here's what we've got to do before I feel comfortable handing you over to a client and saying, she's going to bill out at this rate, get these things done, go for it, and then just kind of leaving her to the wolves.
I actually spent the first three months bringing her on, going through all sorts of onboarding and training. And so what that meant for us was I wasn't exactly sure which projects the client was going to prioritize. And like any end customer, they're going to need front-end and back-end code. You can't have a company these days without having a website, really. And they're...
selling online courses. So their website is a huge part of their business. Right.
There's also a massive part that's the back end: tracking registrations, who's coming and going, which classes you're taking, which classes get canceled, are we sending out transactional emails correctly, all that sort of stuff. There's a massive back end to this as well. And a headless CRM interface, all sorts of other customer relationship management, and a CMS, a content management system, as well. So you've got all these different bits and pieces that are built and running in the back end. And then
we're told by the client that they have these tasks and jobs that they want done, but they can't afford to pay us at our current rate. So I am now in the position where I have a developer I can bill out at a rate the client's willing to pay, but I don't really know what their priorities are, because they didn't tell me yet. Because in their minds, they can't afford anything.
So I ended up training her full stack. We went front end, back end. We did data transfer back and forth. We built a small little app called Bridge Time, which will probably spike in traffic after this, which is just for, like, if you've got a friend who said...
One of our mutual friends is very macabre in his sense of humor, and he said, if I ever forget how to tie my shoes, I just want you to put me in a wheelchair and hook me over the side of the Golden Gate Bridge.
So we made Bridge Time based on that. You can put in, like, if someone breaks a promise, or says that they wanted to do something or whatever, that type of agreement. It's a joke. Then you send them the link when they do the thing, and there's this little animation that plays of a guy wheeling someone over his head. It's all pixel art. It's not graphic or anything. It's just funny. Yeah.
That was a joke site that I had wanted to do for a while. So as part of our onboarding, I actually sent her a whole bunch of Free Code Camp videos from the React track. Those are some of the older videos now; I think you guys have reshot a couple of them. I saw one of them came out again seven months ago; this would have been a while before that. And it basically gave her a starting point that she could work from, and that was a really good foundation for her front end track.
And then from there, I kind of brought in some of my own knowledge and expertise using, you know, Next.js and a couple of other meta-frameworks for React.
Basically, I said, okay, the Free Code Camp video shows vanilla React; here are the different things that you have to change to apply this to a Next.js code base. And then we carried on from there, and she built the front end for that. And then we moved on, and she actually took some courses. I paid for her to go through boot.dev, which I know you're familiar with as well. Yeah, I've had Lane on the podcast.
Yeah, and I'm friends with Lane Wagner as well. Well, I don't know if you would call us friends. I'm acquaintances with Lane Wagner. And I sent her through the...
the Go backend course. And then she filled in some other backend stuff as well. And then we built a backend in Go using just SQLite, hosted on a $5 VPS. So that whole site, she built everything there. I made some tweaks and changes and kind of gave her advice and nudged her along. But basically I used some of these other resources. And she's also, like I said, a very self-driven person. And that's the most important thing to me, that you are willing to put in the work.
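For anyone curious what a small Go-plus-SQLite backend of that shape can look like, here is a minimal sketch. The table, route, and file names are hypothetical and not the actual project's code; it just shows the general pattern of net/http from the standard library plus database/sql with the mattn/go-sqlite3 driver, which runs comfortably on a cheap VPS.

```go
// Minimal sketch of a Go HTTP backend backed by a single SQLite file.
// Schema and route are illustrative placeholders.
package main

import (
	"database/sql"
	"encoding/json"
	"log"
	"net/http"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

func main() {
	db, err := sql.Open("sqlite3", "app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Create a tiny example table on first run.
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS entries (
		id   INTEGER PRIMARY KEY AUTOINCREMENT,
		note TEXT NOT NULL
	)`); err != nil {
		log.Fatal(err)
	}

	// GET /entries returns every row as JSON.
	http.HandleFunc("/entries", func(w http.ResponseWriter, r *http.Request) {
		rows, err := db.Query(`SELECT id, note FROM entries`)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		defer rows.Close()

		type entry struct {
			ID   int64  `json:"id"`
			Note string `json:"note"`
		}
		var out []entry
		for rows.Next() {
			var e entry
			if err := rows.Scan(&e.ID, &e.Note); err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			out = append(out, e)
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(out)
	})

	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```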
And then I'm more than happy to lend a helping hand and point you in the right direction towards the right tools. That was the first person that I contracted with. I've got a second and a third person that are kind of going through the exact same thing. The third person, we're actually giving him some assessments and tests right now. He's out of Poland. Okay.
And so, yeah, just kind of trying to slowly build this up over time and bring different people in. So you're not just going out there and trying to find talent that's already on the market. You're actually bringing new talent on the market and you're also helping kind of like
nurture that talent and provide guidance, like you mentioned: here, build this project; here, take this course. So you're kind of giving them a curriculum that you think is best suited to whatever work you have on hand that needs to be done. Because you have this lead time, in terms of, your client's going to want this, and at this point you find somebody and you're like, I think this person can do it. I think they have kind of the
raw inputs that are necessary in terms of curiosity and problem-solving skills. They have enough exposure to programming and software development concepts that I think that, within whatever period of time you have available,
You will be able to get them up to speed. And you actually put your money where your mouth is. My understanding is it does cost some money to pay people essentially to learn and give them kind of like a paid internship almost or a paid apprenticeship if you want to call it that.
Yeah, yeah. It's probably three months of full-time pay where I'm not actually billing a client for that time, right? And then in addition, we use pull requests for everything. All of our development is through GitHub. So every piece of code that comes through, I end up reviewing before we merge it for the client, because they're a real business. They want...
high uptime, right? If the site goes down, people can't purchase things, and they're losing money. So they need to have good uptime on everything. Part of that's the tools we choose; we're using a durable job system, I think we're using Temporal for a couple of different things. And then part of it is just the code review aspect, too. And it's gotten easier and easier for me working with the contractors over time, because when I know what a certain change is going to entail, I can just kind of say, okay,
I just give it a quick glance and say, yep, that looks basically like what I would expect, and I can merge it. And then for some of the newer features... you know, I was up till midnight or 1 a.m. last night working on something for a new feature that was kind of out of the general realm. We were trying to get a Postgres database dump built into a DigitalOcean App Platform worker.
And it turns out it's not possible, but I had to prove it wasn't possible. And that took many hours first. So anyway, at least not in the way that we were trying to do it. So yeah.
Yeah. Well, it sounds like, between your consultancy work... what is the name of it again? It's Adatomic. So my LLC is Taigrr, yeah, Taigrr LLC; Adatomic Inc.; and then CellPoint Systems. Okay. And CellPoint is your full-time employer, Taigrr is your own personal consultancy, and then you're a co-owner of Adatomic. And I just wanted to share this word, because I didn't know what it meant. I was like, what does that mean? So...
An adatom is where something doesn't get absorbed, but rather adsorbed, I don't know the term, I'm not a physicist. But essentially an atom can be resting on a surface and not be absorbed into it like a fluid. And so that atom sitting on top of that surface is kind of poking out, essentially protruding at an atomic level. So it's a very minor protrusion, but essentially that's what adatomic means. It means kind of adhered to or stuck to something else. Yep.
My chemistry background, the track I almost went down, coming back here at the last moment, right? Yeah. Very cool. So...
You've got to open the door. I want to see what's behind there. All right, we'll see, but we might have to cut it. I don't know what the camera will be able to see back there. Okay. All right. So I will narrate. Ty's not near the mic, so he's not going to be able to say what he's doing here. But essentially he's gotten up out of the seat. And I want to remind you, we don't edit this podcast. This is unedited. So I'm just filling dead air here. I'm not going to attempt to tell a joke or anything.
Looks like he's opening the door. The door is opening. It's a big wooden door. I can hear things being moved around. The door is opening. Whoa! Okay, so he's moving his chair. Wow. So I see a bunch of racks, and I see a lot of Cat5 cable, it looks like.
Yeah, the screen is not capturing everything. There's another two or three racks above the border of the screen here. But imagine if you watched Die Hard with a Vengeance, like the bomb scene where they're defusing the bomb. That's basically what it looks like back there. Just a whole bunch of cabling and a whole bunch of small devices linked together. But that was very cool. Thank you for opening that door, Ty. Everybody who's curious about what's behind that door. You can't show somebody the door and not open it, right? Yeah.
Awesome, man. Well, it's been an absolute blast learning from you, talking with you for the past two hours. I learned a tremendous amount about the various trade-offs associated with those different programming languages: Go, Rust, TypeScript. I learned a lot about the Linux kernel that I didn't know, in terms of just how, I guess, tight it is in terms of...
the extreme requirements that people have for code that gets submitted. And at the same time, I didn't realize that device drivers were just dropping huge blobs of C code out of thin air and having to get merged in. And, yeah, the distribution of, I guess, accountability for different things in that world. But it's so cool to talk to somebody who's in the open source world, who's touching on so many of those different things.
What is some parting wisdom that you have for people who've made it this far into this podcast, people who would like to ultimately do what you're doing: hardcore backend development, working with maybe underpowered hardware, huge fleets of devices, and being able to essentially get as much runtime on as many devices as possible? If that's the metric they want to optimize for, what should they do?
This is going to sound silly because it sounds like I'm just copying the meme, but seriously, use Arch Linux. For real.
And don't go through the new archinstall script installer. It's really important to understand how your whole system works, top down. There are plenty of YouTube videos out there now, tons of educational material on how Linux works. I've built a Linux From Scratch system. Everything sounds much harder and much more complicated than it really is.
Linux is not hard. I've got working audio on Linux. This whole podcast has been recorded on working audio drivers on Linux. Wow. Audio and video, right?
I think out of everything that I've done, the fact that I'm running a daily driver computer that breaks on me once every few months is actually what's driven me to figure out why things break and figure out the real fix as opposed to a band-aid fix on top of something. So the number one thing is, and Arch Linux is kind of a joke, but like,
dig into how your computer actually works. Figure out what's actually going on, either when you write code or when you click a button. Everything from writing code to clicking a button: figure out what the thing you're doing is actually doing on the CPU, what it's doing in memory, and what it's doing on your file system. That's, I think, the most important thing: to understand things at a higher level, you also have to understand them at a lower level.
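If you want a concrete starting point for that kind of digging on Linux, a few standard tools make the CPU, memory, and file-system activity of any program visible. The program name below is a placeholder; the tools and flags shown are the stock ones, and this is only one of many possible approaches.

```sh
# Watch every file the program touches (open, stat, rename, ...):
strace -f -e trace=file ./myprogram

# Count syscalls instead of printing each one:
strace -c ./myprogram

# Peak memory, page faults, context switches (GNU time, not the shell builtin):
/usr/bin/time -v ./myprogram

# CPU-level counters: instructions, cache misses, branch mispredictions:
perf stat ./myprogram
```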
Awesome. What a great note to end on. I'll just mention that we do have an Arch Linux handbook that was published by Farhan Chowdhury, who's a prolific Free Code Camp contributor. You can check that out if you're just looking for a good Arch Linux starting point. So just grab some random PC that you haven't used in a while and install Arch on it. I mean, is that like...
Yeah. And even if you don't use Arch Linux, the Arch Wiki is still one of the best sources of information. The Arch Wiki, and I've heard the same of the Gentoo wiki, will almost always have an answer, or someone else who's had that same problem that you have, even if you're using Ubuntu or Fedora or something like that.
The ArchWiki is really, really good. Awesome. Well, everybody out there, I hope you have a blast learning more about Linux, learning more about the actual hardware and the various layers of abstraction underneath all the code that most people are writing. And I urge you to also follow Ty's advice there. Ty, it's been such a pleasure talking with you, learning from you. Really appreciate you coming on, man. Yeah, thank you. I appreciate the time.
Anyone who's interested in who I am or what I have to say, I've got my blog at blog.taigrr.com. You can also just find it on my website. I've got a GitHub, which is @taigrr, spelled T-A-I-G-R-R. And on Twitter, or X, I'm @warptux, spelled W-A-R-P-T-U-X. Awesome. Yep. And Tux being, of course, the Linux Tux. Yes. Yeah. Very cool. Well, everybody tuning in, until next week.
Happy coding.