Hello, and welcome to Elucidations, an unexpected philosophy podcast. I'm Matt Teichman, and with me today is Gabriella Gonzalez, creator of the Dhall programmable configuration language, longtime evangelist for the Haskell programming language, author of the Haskell for All blog, and engineering manager at Arista Networks.
And she is here to discuss the intersection of algebra and programming. Gabriella Gonzalez, welcome. Thank you very much for having me on your podcast. Absolute pleasure to have you. So I think that when a lot of people hear the word algebra, the thing they think of is like high school algebra. Like the math teacher gives you a problem, two times x equals 16, now solve for x.
Whereas people who've gone off to study a little bit more math in college often come across this thing called abstract algebra, which is like somehow related to the high school algebra, but it's a little more general. So what exactly is abstract algebra? So typically when we think about algebra, we think numbers, you know, maybe in simple arithmetic. The idea behind abstract algebra is that you take these algebraic operations, but they work on things that are not numbers.
So for example, we're going to be talking about how you can add things like code, for example, or programs, not just simple numbers. That's what makes it abstract. The fact that it's no longer on numbers anymore. Abstract algebra is a bit more general than that. That's kind of a first order approximation of what's going on, but I feel like it's a useful way to get started in thinking about it. So it's like you can add and multiply other stuff besides just numbers in some sense that we can actually make precise. Mm-hmm. Yep.
Well, that's crazy. So what would, I don't know, I have two cups of water and I pour one into the other. Is that like adding? I'm like, yeah, what would be an example of adding something that's not two numbers? So one example I like to give is a recipe. So for example, you can imagine that
Let's take plus and multiplication. So in a recipe, imagine that times is that you specify that you want to use more than one ingredient. And typically when you read a recipe, you'll say like, okay, the ingredients are A, B, and C. So you can imagine that you could represent that algebraically as A times B times C.
And then often in recipes, you can substitute one ingredient for another ingredient. For example, maybe you substitute, I don't know, I'm bad at this, like honey with sugar. I'm making that up. Yeah, or agave or some other sweetener. Exactly, yeah. And so the way you can represent that is using plus. So the idea is that A plus B means that you can use either ingredient A or you can use ingredient B.
And if you think about it, many of our algebraic intuitions are correct if we define recipes using that convention. So let me give you an example. So one of the arithmetic rules that you've probably learned in school is the concept of distributivity, where you say that if I take A and I multiply it by B plus C, that's the same thing as A times B plus A times C.
And if you think about that from like a recipe standpoint, it's saying, okay, if I need ingredient A and I need ingredient B or C, that's the same thing as saying I need either ingredient A and ingredient B or I need ingredient A and ingredient C. So that algebraic intuition still holds when talking about recipes, even though we're not really talking about numbers anymore. Okay, right. So at first I was a little bit thrown.
by the idea that times means and, and plus means or, because, I don't know, I'm just used to thinking of times as the number thing. But it seems like what we're getting at here is maybe like part of the essence of being times and part of the essence of being plus is for this distributivity law to apply. That, you know, if you try to strip down what it means for something to be a times operation, versus what it means for something to be a plus operation, to the bare essentials, one of the things maybe that you'd get is this distributivity property.
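To make the recipe analogy concrete, here's a minimal Haskell sketch. The Recipe type and its operators are invented purely for illustration, not taken from any real library:

```haskell
-- A toy algebra of recipes (illustrative only).
infixl 6 :+:
infixl 7 :*:

data Recipe
  = Ingredient String    -- a single ingredient
  | Recipe :*: Recipe    -- "times": you need both
  | Recipe :+: Recipe    -- "plus": either one will do
  deriving Show

-- Distributivity in action: these two recipes call for the same shopping trip.
factored :: Recipe
factored = Ingredient "bananas" :*: (Ingredient "honey" :+: Ingredient "agave")

expanded :: Recipe
expanded = (Ingredient "bananas" :*: Ingredient "honey")
       :+: (Ingredient "bananas" :*: Ingredient "agave")
```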
So for me, actually, my first encounter with this idea of algebra working on things other than numbers was not abstract algebra, actually. It was linear algebra. And I think it was also vector algebra too. One of the two in some order. And I remember the instructor was going through a great deal of effort to explain, I think it was vector spaces actually,
I don't remember all the details, truthfully, but they spent a lot of time saying, "Okay, so you know, you can imagine that in these vector spaces, that if you add two vectors, that addition is commutative and it's associative and it also has some identities." And I was almost bored or underwhelmed by the whole presentation of these like very simple arithmetic rules.
And it was only later, in retrospect, that I realized that whole underwhelmingness or simplicity was actually the point, right? So like, mathematics is not really supposed to be complicated. The idea is that we want to be able to take complex concepts and we want to be able to reduce them to simpler concepts because we already have an intuition for things like plus and multiplication, at least for numbers.
And if we can reuse that intuition for more complex things, like vector spaces, or for matrices in linear algebra, or for recipes as the example I just gave, then that allows us to amplify our intuition when thinking about these things instead of getting bogged down in lower level details. So if a recipe says you need bananas and either honey or agave in a way that's synonymous with saying
you need to have bananas and agave or bananas and honey. That's correct. And you can generalize this. So for example, instead of just thinking about the ingredient list as satisfying algebraic properties, you can actually imagine that the recipe itself, the actual sequence of steps in the recipe, can actually be viewed as algebraic in the same way too.
So for example, you can imagine that A times B means first do A and then do B. Whereas A plus B can be viewed as saying you can either do A or you can do B. Okay, interesting. So now we've shifted which aspect of the recipe we're looking at.
And we've noted that something else in the recipe also has this plus-times structure, or this or-and structure. So it seems like "and" generally goes along with times, and "or" seems to go along with plus in these analogies. Yeah. So, for example, Boolean algebra is kind of a simple version of this. So, I mean, you can actually do it both ways. But typically the convention in Boolean algebra is that times means "and" and plus means "or",
and again the same laws hold, so you can imagine that A and (B or C) is the same thing as (A and B) or (A and C). So Boolean algebra sort of crystallizes the core essence of what's going on here too.
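If you want to see that Boolean distributive law checked exhaustively, a quick Haskell snippet like the following (just an illustration, not from the episode) enumerates all eight cases:

```haskell
-- True exactly when a && (b || c) == (a && b) || (a && c)
-- for every combination of Boolean inputs.
distributes :: Bool
distributes = and
  [ (a && (b || c)) == ((a && b) || (a && c))
  | a <- [False, True], b <- [False, True], c <- [False, True] ]
```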
So you and I, both big fans of computers, and I have to say this talk of times-like operations in a recipe and plus-like operations in a recipe definitely reminds me of writing a computer program, because a lot of what you're doing when you're writing a computer program is you're telling the computer first do this and then do that, and then do one of these things maybe under certain conditions, and then do this and then do this. It's actually a lot like writing a recipe. So would that be another type of example like this? Yeah, so all programming languages will have some notion of
do this, followed by do that. It's not quite as common, actually it's not very common at all, for a programming language to be able to say "do this" or "do that". There actually is one example I can think of from my personal favorite programming language, which is Haskell. So Haskell has a sort of domain-specific language called Software Transactional Memory, which lets you do exactly this. It lets you specify,
as the name suggests, transactions, which are basically atomic operations on mutable variables. But very often these atomic operations can compete, and what you can basically say is you can have this notion of "or", where I say I have two transactions I would like to run, and sometimes transactions can fail for whatever reason.
For example, a transaction might say, "I have this precondition. Perhaps some variable needs to be greater than zero, or some Boolean value needs to be false, and this transaction cannot proceed until that condition is met." And so what you can do is you can actually add two transactions and say, "Try transaction A first," and if that transaction is not able to make progress, perhaps due to a precondition, then instead fall back on transaction B.
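Here's a rough sketch of that "try A, fall back to B" pattern using the stm library's atomically, check, and orElse; the account variables and amounts are made up for illustration:

```haskell
import Control.Concurrent.STM

-- Withdraw from an account, but only if the balance covers it;
-- otherwise the transaction cannot proceed (check calls retry).
withdraw :: TVar Int -> Int -> STM ()
withdraw account amount = do
  balance <- readTVar account
  check (balance >= amount)
  writeTVar account (balance - amount)

-- "Plus" for transactions: try the first account, else fall back to the second.
withdrawFromEither :: TVar Int -> TVar Int -> Int -> STM ()
withdrawFromEither a b amount = withdraw a amount `orElse` withdraw b amount

main :: IO ()
main = do
  checking <- newTVarIO 10
  savings  <- newTVarIO 100
  atomically (withdrawFromEither checking savings 50)  -- falls back to savings
```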
So if you imagine that sequencing is multiplication and alternation between transactions is plus, then it still observes the same algebraic rule, where you can say, "Okay, so if I do transaction A times (transaction B plus transaction C), in other words, if I have transaction A followed by transaction B or transaction C, then that should give me the same result as
try to run transaction A followed by transaction B, or try to run transaction A followed by transaction C. You get the same result either way. It satisfies that same distributivity rule. So would it be fair to say that a little bit fleshed out example of that would be, first, try to run part one on this machine in the network. And then if that machine in the network isn't available, try to run it on another machine. And that would be like...
the "or", in the context of, like, writing a distributed software application. Yeah, that is very similar in spirit. The idea is that the existence or availability of the machine is acting kind of like a precondition. And the plus here is saying, check to see if this program's precondition is satisfied, and if not, then fall back on the second program, which may run on a different machine, as in your example. What would be another example of something that has this sort of plus-times structure in computer programming?
I think a very simple example that many programmers will be able to relate to will be regular expressions, or regexes for short. So a regex is typically a little language that specifies a pattern, and you use regexes to match against strings using this pattern.
And in the simplest case, the regular expression will tell you if the pattern matches or does not. So it'd be like, if I wanted to write code to recognize that this is an email address, because it has the shape of an email address, or that this is a phone number, because it has the shape of a phone number, I would use a regular expression. Yeah. Also in cybersecurity, you will often see regular expressions come up. So for example, they might look for some sort of a signature to detect if something is malware, like a known bad string.
And so regular expressions, very often, they have a few core primitive operations. So first you can say the simplest regular expression string will be something like "a, b, c." And so that would be like first match the character "a," then match the character "b," then match the character "c." And you have a notion of "or" in regular expressions. So you can say "a bar b." That's the same thing as saying "match a or match b."
So the idea is that sequencing in regular expressions plays a role very similar to multiplication, and bar, or alternation, plays a role very similar to addition. In fact, there are other regular expression operations you can do too. So for example, there's a notion of a Kleene star, which says, "Match this zero or more times," or plus, which means, "Match this one or more times." And those actually also have algebraic interpretations too, but I won't go into those right now.
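As a sketch of how those pieces line up, here is a toy regular-expression syntax tree in Haskell; it is purely illustrative, not how any production regex engine is written:

```haskell
-- The semiring-shaped operations of regular expressions, spelled out as a
-- data type (illustrative only).
data Regex
  = Fail              -- the "zero": never matches
  | Epsilon           -- the "one": matches the empty string
  | Lit Char          -- match a single character
  | Seq Regex Regex   -- "times": match one pattern, then the other
  | Alt Regex Regex   -- "plus": match either pattern
  | Star Regex        -- Kleene star: match zero or more times
  deriving Show

-- "abc" written out with sequencing.
abc :: Regex
abc = Lit 'a' `Seq` (Lit 'b' `Seq` Lit 'c')

-- "a|b" written out with alternation.
aOrB :: Regex
aOrB = Lit 'a' `Alt` Lit 'b'
```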
So even though the regex language doesn't use the plus or multiplication symbols, morally it is basically a form of algebra. That's cool. It really seems like a lot of stuff, more than meets the eye, has this structure.
So was this one of those cases where we've come up with a mathematical abstraction and the mathematical abstraction finds this unity amongst what appear to be very disparate things? Or is it a case where actually if we squint a little bit, we can see that in fact ingredients are a lot more like regular expressions than we thought? Yeah. So as we said, like,
Many of these examples are very similar in spirit, so you could imagine that ingredient list is kind of like a regular expression where you're saying, "Okay, I'm matching this ingredient and this ingredient," or "I'm matching this ingredient or this other ingredient."
So is it matching what I do to the recipe? Is that what would be the analogy? So it could be an analogy either to the ingredient list or to the actual sequence of steps too. So that's kind of the neat thing about mathematics is that once you start thinking in terms of these more abstract concepts, you start to see all these sorts of connections between various application domains. So we can see that a recipe is kind of like a regex, which is kind of like a Boolean.
So mathematics is very good for cross-pollination of ideas across various domains because then you can leverage your intuitions in one domain and apply that for reasoning about other domains. Wow. So if I come to a great insight about ingredients that just has to do with the plussiness and timesiness of them, I can maybe carry that over into whatever I'm doing with regular expressions or...
distributed software or whatever the other domain is. So let me give you an example of why it can be helpful to find connections between various mathematical domains. So for example, in multiplication, whenever you exponentiate something, so in other words, raise some value to some power,
there is a fast way of doing it that doesn't involve just taking the number and multiplying it by itself over and over again. Actually, a much more efficient way is to take the number and to keep squaring it, and that leads to a much more efficient algorithm for exponentiating something. But that's cool because if you can exponentiate things that are not numbers, then you can actually reuse that same algorithm and it still works and produces the same speedup.
So for example, let's say you have a way of compiling regular expressions to finite state automatons. And regular expressions have a notion of multiplication. In fact, they also have a notion of exponentiation. So if you continue this analogy, if you say in a regular expression, "I want to match some expression n times," that's analogous to taking that expression and raising it to the nth power. And so whenever you're trying to compile or interpret that regular expression,
you can interpret that exponentiated regular expression using that same fast exponentiation trick. And it still works because that exponentiation trick does not actually care whether the underlying thing is a number or not. All it cares is that it has some notion of a multiplication, which regular expressions do. And the reason the algorithm works is because it works against this abstract notion of multiplication rather than working against numbers specifically.
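A hand-rolled version of that trick might look like the following; it is written against nothing but the Monoid interface, so the same code exponentiates numbers, regexes, matrices, and so on (base's Data.Semigroup.stimes does essentially the same thing):

```haskell
-- Exponentiation by repeated squaring, over any Monoid.
-- Assumes a non-negative exponent.
power :: Monoid m => m -> Integer -> m
power _ 0 = mempty
power x n
  | even n    = half <> half
  | otherwise = x <> (half <> half)
  where
    half = power x (n `div` 2)

-- For ordinary numbers, pick the multiplication monoid:
--   getProduct (power (Product 3) 5)  ==  243
```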
Okay, cool. So we're really starting to see the payoff then of going abstract. If we try to distill plus and times to their bare essence without assuming anything about whether we're working with numbers or like one of something or two of something or any of that aspect of the interpretation of times and plus, but we really just distill it down to like these are commutative operations, they're distributive operations, they're associative operations, stuff like that. If we distill it down to stuff like that,
Then, like, the fewer the assumptions you've made about a trick to speed up an algorithm, the better it's going to generalize. Yeah. And so in abstract algebra, they have various different families of abstract interfaces that you can program against, quote unquote. So let's start with the notion of a semigroup. So for a semigroup, imagine you have just plus, and you also have one rule, which is that
the + operation needs to be associative, meaning that if I say "add x and y" and then "add z", that should give me the same result as if I were to have instead added x with, in parentheses, y + z. And then you can take that semigroup interface and you can add some additional features to it to turn it into a monoid. So a monoid is the same thing as a semigroup, but it also has a notion of a zero.
So for example, we mentioned that x + 0 should always equal x, 0 + x should also always equal x, and those are called the identity laws. And so there also are richer interfaces that build on top of semigroup and monoid. So sometimes you can have something which can be a monoid in two separate ways. So we talked for example about numbers. So numbers, you can think of them as a monoid where plus is addition and 0 is 0.
But you could, if you really wanted to, say, "Well, actually, I'm going to make '+' represent multiplication, and make '0' represent 1." And the same rules still hold because multiplication is associative, and 1 is its identity. And so numbers can be viewed as a monoid either via addition or multiplication. Another example of something that can be a monoid in two separate ways would be Boolean values. So one monoid there would be
where plus is "and" and the zero is "true", and the associativity law still holds, the identity laws still hold there. Another monoid on Boolean values is where plus is "or" and zero is "false", and the associativity laws still hold and the identity laws still hold. And so very often when you have some sort of a structure like numbers or Boolean values,
which is a monoid in two different senses, we can actually try to mix them, and we say, "Okay, we'll make one of these two monoids the plus, and the other one of these monoids the multiplication, and then one of these identities, we'll call that one zero, and the other identity, we'll call that one." And usually you know which one to pick by which way makes the distributive law hold. So for example, for numbers,
making 0 the 0 and 1 the 1 gives you the correct distributivity law. For Boolean values, it could go either way. So you will find that these arithmetic operations arise very commonly when you have two monoids which overlap on the same type of value. So whenever you have two monoids that overlap in this way, so that they have a 0, a 1, a plus, a times,
and they satisfy the various associativity laws, they satisfy the identity laws, and they satisfy the distributivity law. We call that a semiring. And there are other generalizations of this too. So for example, if you research more into this, you can learn about rings and groups and so forth. But for today's conversation, we're going to be covering mostly semirings. So we view the natural numbers, the counting numbers,
as a semigroup. Well, we don't have zero yet, which might sound crazy, but actually, I mean, in ancient Greece, they didn't have zero yet. It was invented in India later. So, okay, imagine we just have one, two, three, four, five, et cetera, et cetera. And then the one thing we could do with them is add. Once we incorporate zero into that, we've gone from the structure of a semigroup to the structure of a monoid. A monoid is a semigroup, that is to say, it has this associative operation defined on it,
but it also has an identity element. Anything plus the identity element is that thing back again. And that, you know, we often, I think, intuitively think of zero as it's like nothing or there's none of it or whatever, like in that kind of interpretation. But really in this context, the only like crucial role that zero is playing is this identity thing. The fact that if we add any number to zero, we get that number back. That's the only like reason we care about zero in this context. Like another example along the same lines would be lists.
So a non-empty list, so a list which must have at least one element, is very analogous to numbers without a zero. So you can imagine that you can take non-empty lists and you can concatenate them and you're guaranteed that the result will still be a non-empty list, but there is no identity element, no zero yet. And then if we change them to be lists, which could be potentially empty, now you can concatenate them, but now you also have the zero value, which would be the empty list.
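In Haskell this distinction is directly visible in the standard library: Data.List.NonEmpty is a Semigroup but not a Monoid, while ordinary lists are both. A tiny illustration:

```haskell
import Data.List.NonEmpty (NonEmpty (..))

-- Non-empty lists can be concatenated, but there is no empty
-- non-empty list to act as an identity: Semigroup, not Monoid.
glued :: NonEmpty Int
glued = (1 :| [2]) <> (3 :| [4])          -- 1 :| [2,3,4]

-- Ordinary lists also have mempty = [], so monoid-only functions apply.
flattened :: [Int]
flattened = mconcat [[1, 2], [], [3, 4]]  -- [1,2,3,4]
```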
And it's important to stress that technically, even the natural numbers with 0 and even empty lists are still semigroups. Remember, semigroup is an interface. It's sort of the operations that you declare to support. And when you program against an abstract interface, you don't have to use all the operations that are available to you. So anything that works on semigroups will work for both non-empty lists, and it will also work for lists too.
but anything that works on monoids, so an algorithm that programs against this monoid interface, would only work with lists; it would not work with non-empty lists, because they don't support that abstract interface.
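Haskell's base library already packages up the "one type, two monoids" situation with little wrapper types; a quick look, with example values chosen arbitrarily:

```haskell
import Data.Monoid (Sum (..), Product (..), All (..), Any (..))

examples :: (Int, Int, Bool, Bool)
examples =
  ( getSum     (Sum 2     <> Sum 3)      -- 5:     plus = +,  identity = 0
  , getProduct (Product 2 <> Product 3)  -- 6:     plus = *,  identity = 1
  , getAll     (All True  <> All False)  -- False: plus = &&, identity = True
  , getAny     (Any True  <> Any False)  -- True:  plus = ||, identity = False
  )
```

Picking Sum as the "plus" and Product as the "times" (or Any and All for Booleans) is exactly the pairing that makes the distributive law work out.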
And then the semiring thing is kind of like a double monoid, where we have two associative operations that we can perform and two different identity elements, each of which corresponds to one of the operations. So we call the identity element that corresponds to the plus-like operation zero, and the identity element that corresponds to the times-like operation one. And then there's some rules about how they have to sort of play nicely together. So we have this menagerie of different kinds of abstract algebras. We have semigroups, monoids, and semirings, right?
and we introduced this notion of an identity element. What would be the identity elements in the examples we discussed earlier, like with the recipes or with the regular expressions? Yeah, so for the regular expression, the zero would be a regular expression that always fails to match anything. It never succeeds. One would be a regular expression which matches the empty string, typically denoted by epsilon in regular expression notation.
And you can imagine that, for example, 0 would satisfy the absorption law here. So in the regular expression notation, they don't really have a notation for 0. But you can imagine that if they did, if you wrote a regular expression like 0 A, then by the absorption law, 0 times A should equal 0. So if you try to match something that will always fail, followed by matching A, that itself cannot match, because the first part will fail and you will never get to the A in the first place.
So just like zero times five is zero, a program saying fail is the same as a program that says look for the character M and then fail. Exactly.
So for ingredient lists, again, we don't really have a notation for this, but you can imagine zero would be an ingredient that is unsatisfiable. So like maybe something that you could never buy. It's priceless. I don't know. And one would be the empty set of ingredients. It's trivially satisfiable. You always have it. So we don't have any notation for that, but conceptually, if we did, you could write it that way. And then what about in the case of,
concurrent and parallel programming. Yeah, so in the software transactional memory example I mentioned earlier, zero would be the transaction that always fails, and one would be the transaction that does nothing and always succeeds.
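In the stm library those two transactions already exist under real names: retry is the transaction that always fails (blocking until some variable changes), and pure () does nothing and succeeds. A small sketch of the identity law, with invented helper names:

```haskell
import Control.Concurrent.STM

-- The "zero" transaction: always fails.
failT :: STM a
failT = retry

-- The "one" transaction: does nothing and succeeds.
unitT :: STM ()
unitT = pure ()

-- 0 + t = t: falling back from the always-failing transaction to t
-- behaves just like running t by itself.
zeroPlus :: STM a -> STM a
zeroPlus t = failT `orElse` t
```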
So you mentioned that if we were to strictly give regular expressions an identity element, a zero and a one, we would be adding something that's not already in the language. And perhaps that raises the question: well, look, we've been writing regular expressions for decades now, so why bother studying their algebraic properties? So I think one of the really important reasons is that
we want to be able to reuse these abstract interfaces. So the regular expression DSL is a decent DSL. DSL is Domain Specific Language. For those keeping score at home. But we still might want to be able to reuse other tools if we can program to this abstract interface. Let me give an example. So in Haskell, there's a sum function. In fact, many programming languages will have a sum function where you give it a list of elements and you will add up those elements.
But remember here, we're trying to generalize the notion of number. So what if we could sum a list of things that are not just numbers? Like what if we could sum a list of regular expressions? So you can imagine then like if regular expressions can implement this abstract interface of addition, multiplication, zero and one, then we could stick zero or more regular expressions in a list and just call sum on that list. And now we'll get a regular expression that will match one of those regular expressions in that list.
In the programming world, the reason why we program against abstract interfaces is so that we can reuse the same function in multiple different domains. So the sum function can be used for things that are not numbers, for example. And in math, there's also a similar notion of reuse too. So the idea is that if you can prove something against an abstract interface, then that proof holds for anything that supports that same abstract interface. Right, so...
Lots of programming languages come with a sum function such that if I give that function the list, one, two, three, it'll add all the numbers in the list and give me back six. But one immediate very cool payoff to having a more generalized notion of plus is that we can also add up, quote unquote, lists of other stuff besides just numbers, anything else that has the same structure.
Mm-hmm.
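Continuing the toy Regex sketch from earlier (again illustrative, not a real library): if we pick alternation as its monoid, the completely generic mconcat becomes exactly that "sum a list of regexes" function.

```haskell
-- Alternation as the monoid for the toy Regex type defined earlier.
instance Semigroup Regex where
  (<>) = Alt

instance Monoid Regex where
  mempty = Fail

-- mconcat is the generalized "sum": a regex matching 'a', 'b', or 'c'.
anyOfABC :: Regex
anyOfABC = mconcat [Lit 'a', Lit 'b', Lit 'c']
```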
So another thing you've suggested is that although computer programming tends to go through a lot of fads, drawing inspiration from abstract mathematics in the patterns that you use and the code you write can offer us more staying power. So I wonder if you could say something about why that is. Yeah, so...
When I first started programming, I didn't know anything about functional programming. I grew up on QBasic and later C, C++, Java. Oh man, give me some of that Nibbles any day of the week. Love that. And yeah, I remember the Nibbles program. And back then, for me, programming was just a very boring and mundane thing. I remember very early on when I was programming that...
I would try to solve some problem and it was never clear to me what was the right way to solve it because there were always, again, lots of different fads. There was object-oriented programming, there was procedural programming, and even within those subdomains there was all sorts of non-trivial design decisions for how to structure a problem. And I kind of just ran adrift trying to figure out what was the fad of the day that I should be following and it seemed all kind of very ad hoc and arbitrary.
And I just kind of wanted something that was just a little more timeless, a little more self-obvious, self-evident. And the first time I got a taste of that was when I learned Haskell, or more generally functional programming. That was the very first thing that showed me the intersection between math and computer science.
And once I started seeing all these mathematical patterns in my code, that to me satisfied that self-evident, self-obvious criteria. I thought this is going to stay, this has lasting power, and everything else is just window dressing on top of that. I feel like my experience is somewhat similar, perhaps partially because it took so long for there to be hardware available.
where you really could use languages like Haskell, you know, a lot of the stuff was explored sort of as pure math. One of the cool payoffs of that is that whenever something is made production ready, made, you know, ready for you to start writing software with it and putting it out there in the world, it's been really rigorously mathematically studied for a while before it's put out there. It's kind of cool writing programs that make your computer do stuff
in a framework that has also been thoroughly explored as pure math. Yeah, I think one thing I would like to stress is it's not just having a connection to mathematics because you can take almost anything and mathematize it if you sprinkle enough formality on top of it.
Enough Greek letters. Yeah. I did this a lot when I was a kid. I was very into math. I would literally spend my free time just coming up with random math puzzles for various real-life phenomena around me just to keep myself busy before I discovered video games, basically. Yeah.
And... I see a puzzle book coming up. And so it's not just there is a mathematical connection, it's that there is a simple mathematical connection. So for example, if we can find a connection between code and simple algebra, that's better because simple algebra is something that is taught to students at a very young age and so they can reuse their intuition there.
And so what I found was that what I really liked about functional programming was not so much that it was just mathematical, but because the mathematics that it connected to was very simple. I would like to revisit the example I gave earlier about when I first learned about vector spaces, and the instructor was going through the basics very pedantically, very slowly, and I felt underwhelmed.
And when you first stumble across functional programming for the first time, it will seem underwhelming because they'll be teaching you these very simple and boring things like how to create a record, how to create a sum type,
Lists, functions, a constant function that always returns the same thing, big whoop. And you're thinking, so what, right? So like, here are all these other languages that are throwing all these sorts of very flashy abstractions at me, like actors or distributed programming, and functional programming is throwing at me all these very boring concepts.
And then the real breakthrough you get when you're learning functional programming is realizing that the boring concepts are actually all you need and everything else is just assembled from those simple mathematical primitives. So we've been talking about algebraic concepts. So in functional programming, there's another notion of multiplication and addition. So a record type can be viewed as the product of zero or more types. So that's the notion of multiplication, but now it's at the type level rather than at the term level.
Similarly in functional programming there's a notion of a sum type, which again is a type-level notion of addition. And then there's also a notion of a zero, so there's an empty or a void type, which is uninhabited. And there's a notion of one, which in Haskell at least they call the unit type, which is kind of like an empty tuple or an empty record which has no fields. And they behave kind of similar to the algebraic operations that we've been discussing before.
Another example is that a function type is similar to exponentiation. So a function from a to b can be viewed as b raised to the a-th power. And so the key insight of functional programming is that these simple algebraic types are really all you need for programming. Everything else is just window dressing on top of those core foundational abstractions.
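Spelled out in Haskell (the type names Pair, Choice, and Power are invented here just to make the arithmetic visible; Haskell itself uses tuples, Either, Void, and ()):

```haskell
import Data.Void (Void)

-- Type-level arithmetic: products, sums, zero, one, and exponents.
data Pair a b   = Pair a b   -- a "times" b: a value of each
data Choice a b = L a | R b  -- a "plus" b: a value of one or the other

type Zero = Void             -- no values at all
type One  = ()               -- exactly one value

-- b "to the power" a: one result of type b for every possible a.
type Power b a = a -> b
```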
Yeah, and one thing I found from this is that it's a little hard to explain to somebody who hasn't done it before, but it's kind of amazing the way you can actually build up the really fancy interfaces, like you mentioned the actor model and so forth.
out of nothing but these very simple abstractions, because they're simple abstractions designed to be basic building blocks. And then what you find is that you don't have to have special code that whispers sweet nothings into the compilers to magically make the thing happen in ways that you can't predict or really study or know anything about. You can just do it just using plain vanilla, whatever language you're working in, whether it's Haskell or a similar language.
That's certainly one place where I've experienced this expressive power. Yeah, I think it goes back to the notion of portability and cross-pollination between domains. I think one thing that many listeners will find is that if they learn functional programming, many of those idioms that they learn will actually translate well to other programming languages, because they're foundational concepts.
Whereas, for example, if you try to translate object-oriented idioms to a functional programming language, it will not translate as well because they're not as foundational in nature.
And so a lot about math and math-adjacent fields like functional programming is about this notion of cross-pollination. We want to be able to take something that we learned in one domain and we want to be able to reuse that or transfer that to another domain. That's why we care a lot about these foundational reusable abstract interfaces.
And abstract algebra, like we've been talking about before, is just one example of that, where we can take algebraic connections we make in one area, and we can find algebraic connections in other areas too. So another fascinating argument you've made is that abstract algebra is sort of a more natural fit for human reasoning than maybe a lot of other areas of abstract math. I'm not sure exactly what you might have had in mind there, maybe like topology or analysis or something. I don't know. But how does that argument go?
So the key phrase here is "equational reasoning". Equational reasoning is the idea that you understand a program's behavior by simple substitution. So typically, in most programming languages, the way you understand what a program does is you kind of simulate the program step by step, like some subroutine, and you keep track of some state off to the side.
In a functional programming language, the way you, quote, "simulate" a program is by simply substituting values with the things that they refer to. That is called equational reasoning. Thinking about things in terms of these abstract interfaces tends to make this equational reasoning scale much better to larger examples. Because if you ever do equational reasoning for the first time, it is very tedious. It is quite a chore, truthfully. Most people don't do it, they just kind of hand-wave it.
But if you can think about things in terms of abstract operations like addition and multiplication which are equipped with laws like the associativity law, the identity laws, the distributivity law, the zero law, then those laws are very useful for short-circuiting the reasoning process. And you can often use them to simplify very gnarly expressions.
It's kind of similar to the symbolic reasoning that you learn in school for algebraic expressions like polynomials and so forth. You can do the exact same thing with code too, because code can behave kind of like those polynomials, and you can simplify code without knowing what the underlying variables are even referring to. Yeah, and philosophers will maybe be familiar with these concepts under the label referential transparency and referential opacity. An expression is referentially transparent just in case you can
substitute any sub-expression inside of it for another one that refers to the same thing. And kind of the amazing thing is, like, yeah, this topic that so much ink has been spilled over in philosophical logic, it turns out it's actually really useful for computer programming, because you can take a program that has XYZ in it and, if the code is referentially transparent, just substitute different expressions inside of it for other expressions that are equal to the same thing and actually transform the code very easily.
So the key thing here is that you can reason abstractly about code. Let me give you a very simple example. Suppose I have a function whose input is x and whose output is x + 0. If I know that x + 0 is always x, I can just get rid of the +0 even if I don't know what x is. That's an example of what I mean by abstract or symbolic reasoning.
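A tiny worked example of that kind of reasoning, with a made-up function name:

```haskell
addZero :: Int -> Int
addZero x = x + 0

-- Reasoning without knowing anything about x:
--   addZero x
-- = x + 0     -- unfold the definition of addZero
-- = x         -- right identity law for +
-- So addZero behaves exactly like the identity function on every input.
```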
And very often, when we need to prove code is correct, we want to prove that code is correct even when we don't know what the input is. Part of the reason might be that it's not easy or safe or cheap to run that function and test it on live data. So we need to be able to understand what the function does even when the data is not present yet. So that's why we need this sort of symbolic reasoning capability about programs.
Right, exactly. And you never know what a user's going to do with a program. They're always going to do some weird thing you never thought of. And like, if you're not careful about weird things the user's going to do, you're going to get the blue screen of death and all kinds of unpleasantness. We want to like, you know, head that off in advance. Gabriella Gonzalez, thank you so much for joining us. This has been amazing. You're very welcome. Thank you for having me on your podcast. The Elucidations blog has moved. We are now located at elucidations.now.sh. On the blog, you can find our full back catalog of previous episodes.
And if you have any questions, please feel free to reach out on Twitter at @elucidationspod. Thanks again for listening.