
Tau Language: The Software Synthesis Future (sponsored)

2025/3/12

Machine Learning Street Talk (MLST)

Chapters
This chapter explores the inherent limitations of machine learning, focusing on its probabilistic nature and inability to guarantee absolute correctness. It discusses the concept of PAC learning and the challenges posed by complex problems where machine learning's accuracy may drop to random guessing.
  • Machine learning's accuracy is probabilistic, never reaching zero error.
  • For sufficiently complex problems, machine learning performs no better than a coin toss.
  • Overfitting in polynomial interpolation leads to random predictions.
  • Transductive learning offers a situated approach but still faces computational limitations.

Transcript

Machine learning is a mathematical miracle. It can learn from examples and give correct answers with high probability for examples it didn't see yet. But this is only up to a certain probability. The error will never be zero. Accuracy gets better only up to a certain difficulty of problems. From a certain size of your logical question, it will do no better than a coin toss.

You're talking about the three curses of machine learning. There's the optimization curse, the statistical curse, and the approximation curse.

You're describing a completely different approach. So rather than doing statistics, we use a logical method to deduce an answer. The Tau language is designed to achieve one and only one goal, which is to make software controlled by its users. Right now they have very little control, if any.

New blockchains come and go every day. Tau is the endgame of all blockchains. Every new idea that you want to incorporate into the blockchain, well, just say it in Tau and it will automatically become what you want it to become. You just say it and it will happen. You tell the system, "Make profit for me," and it will make profit for you, right? Of course, whenever possible. But the concept of an automatic businessman, that's strong stuff, and that's only one aspect.

Ohad, welcome to MLST. It's such an honor to have you here. Thank you very much Tim. My pleasure and honor. Thank you. Can you tell me a little bit about your background? I've been a mathematician and software developer my whole life. I've been interested in various fields of mathematics, but in the last 15 years, mainly in the field of AI at the beginning, machine learning and the theoretical foundations of machine learning.

but for about the last 10 years, mainly the intersection between logic, mathematics and computer science. So what was the aha moment for you when you felt that this logical view of AI was really, really important? Well, I got familiar with the whole idea of mechanized reasoning. Normally mathematicians view mathematical logic as a very boring subject, and I was no exception.

But when I heard how algorithms can kick in and how logic can be mechanized by computers, this became really very interesting. So I've been going through your Twitter posts. And in particular, you've been talking about just how unreliable these approaches are. I mean, tell me more. Yeah, so machine learning is a mathematical miracle.

It is very surprising that it can do what it does, which is to learn from examples and give correct answers with high probability for examples it didn't see yet, for out-of-sample instances. But this is only up to a certain probability. There is a certain probability that the error will be low enough. The error will never be zero, and the probability of zero error will never be one,

and that's why it is called PAC learning: probably approximately correct learning. You will probably be approximately correct, namely have low error, but that's it. You will never get guarantees for absolute correctness. And sometimes you need guarantees for absolute correctness.
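
For reference, the textbook guarantee behind the name (for a finite hypothesis class H, in the realizable case) makes "probably approximately" precise: given at least m ≥ (1/ε)(ln|H| + ln(1/δ)) examples, any hypothesis consistent with them has, with probability at least 1 - δ, error at most ε. Both ε and δ shrink as the sample grows, but neither ever reaches zero.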

And moreover, sometimes you want to communicate to the machine in a way that is not effectively captured by examples. Like now we are talking: we are not only giving examples to each other, that would be crazy, and indeed it would be crazy to only give examples and expect to get true intelligence out of it. You need to say your things, not only exemplify. So it is very impressive that machine learning can do what it does by examples.

But we are pretty much at the peak. It will not get much better than that. You need methods that are not machine learning, methods that go beyond guessing from examples.

There's always a bit of cognitive dissonance here for me. We've had Noam Chomsky on the show, Gary Marcus, Walid Saba, many famous symbolists. And they've always said reasoning sans guarantees is not reasoning. And OpenAI have been boiling the frog for the last few months with the o-series models. What we are seeing is that, even though these are statistical, empirical methods, the accuracy is going up and up and up.

and we can build engineering systems to make them better and better, and things just seem to be getting better, and everyone just seems to be accepting it. Why do we need to have these guarantees? Well, accuracy gets better only up to a certain difficulty of problems. So from a certain size of your logical question, it will do no better than a coin toss.

It will be completely random. So, for example, the Boolean satisfiability problem, where you can imagine a set of constraints of the form: if this guy comes to the party and that guy does not come to the party, then the third guy will come. A set of constraints like this. And SAT solvers can solve things like this with many thousands of variables.
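
To make the party example concrete, here is a minimal brute-force satisfiability check in Python. The guests and clauses are made up for illustration; a real SAT solver replaces the exponential loop with unit propagation, clause learning and clever heuristics, which is how it scales to thousands of variables.

```python
from itertools import product

# Party constraints in clause form (CNF). Positive literal = "comes",
# negative = "does not come". Variables: 1 = Alice, 2 = Bob, 3 = Carol.
# "If Alice comes and Bob does not, then Carol comes" becomes the clause
# (not Alice) OR Bob OR Carol.
clauses = [{-1, 2, 3},
           {1, -3},     # Carol only comes if Alice comes
           {-2, -3}]    # Bob and Carol never both come

def brute_force_sat(clauses, n_vars):
    """Try all 2^n assignments: fine for 3 guests, hopeless at scale."""
    for bits in product([False, True], repeat=n_vars):
        value = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        if all(any(value(lit) for lit in clause) for clause in clauses):
            return bits
    return None

print(brute_force_sat(clauses, 3))  # e.g. (False, False, False)
```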

But even o3, or o30, or o300, will not be able, from a certain size, let's say hundreds of variables, to give better than random answers. It will be a coin toss. It's not that in 90% of instances of this kind of problem it will return the correct answer; from a certain point, it will be random. One way to see it is the following.

If you are familiar with polynomial interpolation, like Lagrange interpolation, you can take any set of points and find a polynomial that fits this set of points. So if you have a thousand points, it will be a polynomial of degree 999. If you have a time series of something and you fit it with a polynomial, will you be able to predict the next time point? Well, this will be completely random.

This is captured by theoretical machine learning as infinite VC dimension. If you can fit infinitely many time points using a polynomial of degree 3, that's very surprising. That's probably something fundamental about the nature of your setting, and then you have much more confidence that you will be able to predict, with good probability and low error, the next time points. But if you just fit everything, which is what is called overfitting, you will be random.
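
The overfitting claim is easy to reproduce numerically. A minimal sketch with numpy: the "time series" below is deliberately pure noise, so the degree-9 polynomial that threads every training point says nothing about the next one.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(10.0)
y = rng.normal(size=10)            # a "time series" of pure noise

# A degree-9 polynomial interpolates all 10 points: zero training error.
coeffs = np.polyfit(t, y, deg=9)
print(np.max(np.abs(np.polyval(coeffs, t) - y)))  # ~0 (up to round-off)

# But one step beyond the data, the prediction is essentially arbitrary,
# typically far outside the N(0, 1) range of the training values.
print(np.polyval(coeffs, 10.0))
```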

So intuitively I agree with you, but what we have really seen in the last six months, actually on Francois Chollet's ARC challenge, is people leveraging transduction, and that is simply a prediction function where the test data specification is part of the prediction function.

Rather than having this inductive model which memorizes all of the training sets for the purpose of generalizing to any new unseen example, this transductive approach embraces the situated complexity in the world. When a model needs to do a prediction, it incorporates situated knowledge inside the prediction function.

It's a way of kind of embracing the complexity that's out there. So of course, it's still just a new form of empiricist prediction, but it seems to be far more capable of making reasonable predictions in a given situation. Forgetting how you reach the prediction function, what your model is, what your training set is, you just look at the

end result, and you can see right away the computational limitations of such prediction functions, no matter how you reach them. Yes, but I am a fan of this local form of prediction.

Because if you think about it, machine learning methods as they stand today are maximum likelihood estimators. So what they are doing is generalizing over these huge statistical distributions. And in every situation, they're always giving you the regression to the mean, they're always giving you the general answer. And there is a lot to be gained by just using local information to get a more specific and more predictive answer for you. If your

concept class, the class of prediction functions, is all possible polynomials, like we said before, so that you can fit everything, then this class is too big. Its VC dimension is infinite, and there is no surprise that if you do Lagrange interpolation over arbitrary data points, its prediction ability will be zero. It will be completely random. If you take a smaller class, then

you are not guaranteed to fit your training data. But if you do fit it, then you have a lot more confidence, not a guarantee, that you will fit out-of-sample. Well, no, this is interesting. You're talking about the three curses of machine learning: the optimization curse, the statistical curse, and the approximation curse. So we can choose a smaller hypothesis space, and if it's too coarse, we get approximation error.

And we have the statistical error, of course, which is just, you know, fitting a statistical model to this class of functions. And you're describing a completely different approach. So rather than doing statistics, we use a logical method to deduce an answer. But what's the tradeoff? What do we lose by having a logical methodology instead of a statistical one?

For one thing, the information has to be there. So if all your information is examples, that's not enough. You need to actually say the thing. Machine learning is good for precisely the cases when you don't know how to say the thing. For example, face recognition. No one can put into words what a face looks like. You can see and tell right away, but you cannot put it into words. For things you cannot define,

For this, machine learning is very good. But for things that you can define, then go ahead and define them. Why go by example? Is it possible though that this is a kind of dichotomization? So many people say machine learning is great for things that I can't write code to do. I can't write code to recognize a face or recognize a cat or recognize a digit. And there are many things in the world that we can describe logically using language.

Isn't that too much of a separation? Aren't there situations where we could actually decompose face recognition into a program or we could combine pattern recognition and logical reasoning into some amalgamated form? Does it make sense to think of them as two completely distinct domains? Before speaking about computers,

There is a big dichotomy between the world and the things that we say about the world. The things that we say about the world are just things that we say. The finger pointing to the moon is not the moon. There is a very big difference. So this dichotomy is already there. Now, as human beings, we are obsessed with living in the world of language.

the conceptual world, the things that we say about the world, the language is even more central to our life than the real physical world. Here, what are we doing right now? We are talking, right? And for us, this is actually doing something, but we are only talking. No, there is nothing wrong with that. That's our human nature. We live in the world of concepts, in the world of language.

Well, I mean, let me extend a little bit. So knowledge is a justified true belief. And as you say, what we need to do is reason about knowledge that we actually know to be real. What's happening with the machine learning world at the moment is we are being possibly gaslighted into thinking that reasoning is more general than what the Greeks thought.

So there was a famous Monty Python sketch where they were asking whether someone was a witch, and all these people were saying, oh, she's a witch because she got out of bed on this side, and she's got a pointy nose, and she's got funny hair, or something like that. And this sounds like reasoning, because we're constructing all of these different rules and we're composing them together, but there's no surface contact with reality. You're right. So indeed, this dichotomy between

the world and the things that we say about the world: they are pretty much disconnected. To demonstrate this, we cannot even define what it means to physically exist. If we cannot define that, then how can we define anything else in the real world? Even "to exist" we don't know how to put into words. So there is a very serious dichotomy between language and the world.

However, in our human experience, language is as real as the world, if not more real. So if we were some different kind of creature, like aliens that may be very advanced but are just not obsessed with language the way we are, then maybe logic wouldn't be interesting. But we are humans. We do reasoning using language. And that's why it is so relevant for us.

You said that language feels very real to us, but there's a phenomenal component of real and then there's an epistemic component of real. Certainly many of the things that we talk about in language, we know to be true. They are facts. They are things that exist, cities that are placed in a certain country. And then I suppose that there's a spectrum beyond that of things that we just feel to be true.

Yes, so of course as humans we have some connection to the real world. But when you say, "I will go to the shop to buy tomatoes," right? Tomato is only a word. It is not connected to the physical tomato that is there in the shop. It has nothing to do with it. You have no idea what that physical tomato in the shop is. You cannot even point to it until you reach the shop and you take it.

But until then, you've got the idea of a tomato, right? And nothing more than that. Also, I say about myself that I'm a mathematician. Well, it's just a word. In reality, I'm just a guy with glasses and no hair. That's what I am in reality. Mathematician is only a word. But that's what matters to us as human beings. We live in the world of language.

In which case, there is a potential divergence between the words we use, what they mean, and how they correspond to the real world. So the concept of a tomato, this is an example of abstraction. It's a category which has formed over a possibly constructivist evolution of language use, or maybe it's just a fact of the matter about the universe. I mean, how did that concept come about?

So indeed, language can somehow point to the world, but it can never touch it. And to show you it can never touch it, let's go to the question we asked before: Can you define what it means to physically exist? No one can. And if to physically exist you cannot define, you cannot put in language, then you cannot even put all the rest. So all you can do is to approximately point, but never touch.

So what does it mean then to build AI systems when even by your own description there's little surface contact with reality? Even in the most formal descriptions of things? What we are really interested in is not in reality, but in reality as perceived by us. And we perceive stuff largely by language. And this we can implement in a computer. Moreover, how are we going to communicate with a computer? Well, by language.

because we are creatures of language. So it's much less about the world as it is; it is much more about human nature. We want computers to serve us as humans, given our very peculiar human nature. My only potential objection to that is that I think of language as a living organism

that supervenes on us, it rests on us, it's always adapting and changing and so on. And when you extract a formalism, because natural language isn't a formal language, so when we formalize it and put it into an AI system, wouldn't it very quickly diverge from the language that we use and know? It will, and indeed when I say language, I'm not pointing to any specific language. I'm referring to the abstract concept of language

of saying things, of expressing things in a symbolic way, like letters, or not only letters. So if you ask me where this and that street is and I tell you it's there, that's also language, right? It's signs that describe something, whether real or completely abstract. So I by no means refer to the specific nature of a specific language. I'm talking about the general idea of abstraction.

There seems to be a relationship between symbol use and abstraction in general. And humans are different from all other animals because we have this declarative labeling ability. So we have this plasticity about how we can point to things and give them labels and share those labels with other people. And of course, the use of those labels gives rise to this abstraction.

And certainly machine learning models don't do symbols and don't do abstraction without hacking. We can wire them in this recursive loop and we can give them chain of thought and we can kind of make them act as if they are doing symbol-type stuff. But it's not very natural. It comes very naturally to us, doesn't it?

Yes, so indeed, if you want to do logical reasoning, why go so indirectly and encode it in some linear algebra instead of directly doing logic? Yes, I completely agree with you. The big thing that you're advocating for is verification, being able to have guarantees. I think verification is old news. The real deal is synthesis. In verification,

you describe a system, and then you describe statements about the system, and then you verify that the system meets those statements. In synthesis, you describe only the statements about the system, only the requirements, and then you automatically generate a system that meets those requirements. One way to look at it: a common practice of programmers is to write tests,

which are programs that test your main program and make sure that it returns the right answer. So to demonstrate synthesis, imagine that you write only the test, and then the computer automatically synthesizes a program that will make the test pass.
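
As a toy illustration of the idea, and only of the idea, here is synthesis by brute-force enumeration over tiny arithmetic expressions. The helper names and the doubling test are invented for the example; as discussed later in the conversation, this exhaustive search is precisely what a serious synthesis framework cannot rely on.

```python
from itertools import product

def synthesize(test, rounds=2):
    """Enumerate tiny expressions in x until one makes `test` pass."""
    exprs = ['x', '0', '1']
    for _ in range(rounds):
        exprs += [f'({a} {op} {b})'
                  for a, b in product(exprs, repeat=2) for op in ('+', '*')]
        for e in exprs:
            f = eval(f'lambda x: {e}')   # turn the expression into a program
            if test(f):
                return e
    return None

# The "specification" is nothing but a test: the program must double its input.
doubling_test = lambda f: all(f(x) == 2 * x for x in range(10))
print(synthesize(doubling_test))  # finds something like '(x + x)'
```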

There must be some kind of erroneous behavior that my smartphone might do that we want to guard against. And wouldn't our system of... I mean, we call them tests, but of course we're talking about epistemic tests. We're not talking about statistical tests. But when we describe the behavior of the smartphone, it needs to make sense. I mean, what does it mean for a test to make sense? Yeah, so let's say...

I want to make sure that the phone will never send my passwords over the network. Now, a test for that, which you cannot implement in reality the way I'm going to say it, is to try all possible inputs for the phone, all infinitely many possible inputs, and check if at some point it sends your password over the internet, and then the test fails. How are you going to test something over an infinite domain?

Well, by the magic of mathematics, right? Sometimes in mathematics you can prove something about all infinitely many numbers. So if you describe this test and synthesis problem in the right mathematical framework, then you will be able to give guarantees about all infinitely many possible inputs. Devil's advocate on that, though. We see an interesting phenomenon with...

You know, on Instagram and Twitter and so on, there are all of these constraints. So you're not allowed to say certain things; you're not allowed to refer to women's breasts, for example. And what happens is people get around the filters by using all of these new language terms to describe female breasts,

as an example. Isn't it the same with a password, that I could encode the password in an infinite number of ways, and wouldn't that be able to circumvent the system? That's right, you could, but it was only an example, right? So let's say that you don't send the password in clear text, as it is, right? Even this, how are you going to do that?

But doesn't that just create an infinite regress, though, where we wouldn't be able to describe the uncountably many ways that people could send a password? So I could give you an idea, and I'm not saying it's practical, but one way is that the password is stored somewhere and you can track

where it is stored, and see how it transforms, and whether it transforms via a one-to-one or many-to-one function. And if it is many-to-one, then, okay, how many? And then see how the scrambled password goes over the network, and whether it is possible to recover the password, by whatever definition of the possibility of recovering it that you give to the system.

So yeah, that's one wild way to implement such a thing, but it is not the point of the example. The point of the example is that you want to say something about all possible configurations of the system, all possible inputs, which are basically infinitely many,

And to be able to get a guarantee for that, you will need very specific mathematical frameworks, so it is not going to be a test like normal programmers write. Or let's put it this way: it could be a test like normal programmers write, but one that runs to infinity. It tests all possible inputs, so it's going to run forever.

And then you are going to ask the halting problem for this specific test: will it ever find a bug? So you will need to write this test in a language in which the halting problem is decidable. Yeah, the only thing I can't get my head around is, I mean, just coming back to that example before, someone might come up with a new term for women's breasts.

It might be calcium cannons or something, you know; you can invent anything. And any reasonable language user would be able to perform inference in that situation and say, oh, that's what they mean. But wouldn't we have a similar issue where

I can't write code to do facial recognition because people's faces are changing all the time. There's always an exception that proves the rule. So what we're doing is using a logical framework to describe an amorphous living thing, and it needs to adapt continuously. Yes, well, so there are three moving parts here: the computer world, the human world, and the physical world.

I know how to treat the computer world. How to treat the human world or the physical world, that's beyond me. I can give you guarantees about what the computer is going to do. So for example, you want to start your own bank and you need software for your bank, and you want to say no balance can be below zero unless it is authorized, right? You really want this, you really need this, right? How are you going to do it? Are you going to ask

o3 to give you your bank software? Let's be real. Even the biggest LLM proponents would never imagine doing such a thing, right? You really cannot trust this. So what are you going to do? You're going to do traditional programming, normal programming? Well, that's what people really do. But then how can you make sure that the balance is really never negative? Well, you will need to go over all the places in the code where the

balance is updated, implicitly or explicitly, and then make sure that it does not go negative without authorization. But in the synthesis-from-requirements way, you simply put in this sentence: "balance is greater than or equal to zero".

And that's it. That makes sense. I mean, isn't there always an exception, though? So, you know, let's say next Friday there's this really interesting case where a banker from Zurich has a special arrangement with the bank, so he's allowed to go below zero under these conditions, but no one else is. And I can imagine a world where we can decompose it into pockets of regularity,

where there are situations that can be quite rigidly defined, and then the rest of it is just chaos. So I can see specific situations where we could build systems like this, but it seems like sometimes we need to have the flexibility as well. Yes, so now you are going a step forward from writing software into how software changes. And that's of course also very important, no less important.

And indeed, the synthesis framework that we work on in the Tau project heavily involves how software is being updated. Yes, maybe we should get to that in a little while. But coming back to a few interesting questions here: so you haven't been influenced by Gödel, just Tarski?

No, also not really by Tarski. I mean, what does it mean to be influenced? No, you know, when I open a book of mathematics, that's something that I do regularly, right? Of course, I'm influenced from the things written there, right? Would it be fair to say that you think Tarski is one of the best logicians of the 20th century? Everyone will agree to that statement, yes. In which case, why would you not say you are influenced by him? Well, I cannot say that...

He changed my course of action. It did turn out that our courses of action, the great Tarski's and the small me's, had some similarities, but I discovered this only in retrospect. Right. So your journey into this world, was it very much self-guided? And then afterwards you contextualized it? Yes, yes.

I'm always very original, and not always right, and not always bright, but always very original. Is that a better approach than, you know, first looking at what's out there and being influenced? Do you think it's a better approach just to do it from first principles? Well, I always survey the literature. I never try

to do things from first principles if I can find sources that can help me and continue from there. Sometimes there is no choice. Like when I needed to have a language that can refer to other sentences in the same language, there is absolutely nothing out there. I had to do it from scratch. So you've said also that pure logic-based AI is essential for safety. What gives you the conviction to say that?

That's not exactly what I said. I did say that a language that can refer to sentences in the same language is essential for safe AI. That's the key point, right? Because common logical AI, common logical frameworks, which are not machine learning, also cannot do

safe AI without this component. That makes sense. So there is a specific form of logical AI in which a language can refer to its own sentences. Yes, and I'm the only one to have discovered this. Yes. So that specific form of logical AI is essential for AI safety? Yes. It's missing in logic in general; there is no logical language that can do it. Yes. This is achieved by abstracting sentences into Boolean algebras.

I think the core of what we're talking about is that in the 1980s there was this notion of the expert systems knowledge engineering bottleneck. The folks back then were building these systems, describing the world using a series of rules and logical language and so on. And the problem is, just as it's very difficult to write code to do facial recognition,

these folks started building ontological frameworks to describe domain-based systems in the enterprise, and they found that a form of brittleness emerged: they always needed to put more and more special cases in. How is your system different from that? That's very simple. I have no intention of describing the whole real world. I intend to make computers do what you want them to do.

To accurately describe the world, well, that's too much, and it's also not really necessary. We want to describe the things that we are interested in, to achieve our human goals.

Okay, so does that mean that there is a class of domain-based systems, like safely flying a plane, that we could describe using a logical framework, but there might be other classes of things, like, for example, ensuring that people don't post racist remarks on a social media site, that might be beyond its capability? Where do you draw the line?

Programming languages can express anything, anything that a computer can do. Well, and then you could ask me: if they can do anything, then please write me a program that detects racist statements in social media. Because I just said that they can do anything. Well, if I promised anything, then you can come and tell me: okay, please do the anything. Well, that's exactly the dissonance, right?

You can do anything as long as you know what you want. You can put in an accurate description, you can give a definition of what it is that you want. Then you can achieve it. But if you cannot come up with a definition, then maybe choose machine learning. Then you can give examples and you will get a less accurate system, but it's better than nothing. What is the bright line between things which we can describe in language and things that we can't?

I can understand this question in three flavors. The whole field of mathematical logic speaks about which mathematical structures you can define in language. And here you have a simple countable/uncountable argument, because a language will contain sentences, each sentence is a finite sequence of symbols from a finite alphabet, so there are always only countably many sentences,

and mathematical realities go way beyond the countable cardinality. So, of course, not every mathematical structure is definable. In fact, very few of them are. The second flavor is to speak about what is computable. And then we have, indeed, computability theory and complexity theory, and we can speak about what is computable and what is not. We know, for example, that the halting problem

is not computable. And there are computable numbers, but there are of course only countably many computable numbers, while the real numbers are uncountable. In other words, almost all real numbers are uncomputable. And if you choose a real number at random, then with probability one it will be uncomputable.

And the third flavor is what in the real world we can define in language, and this goes back to what I told you before: if we cannot even define what it means to physically exist, then the answer is nothing. Nothing in the real world can you define. Maybe you can approximate; maybe you can give a definition that is good enough for another human being to

understand what you mean. Like if you tell someone, get me tomatoes from the shop, they can do that. But it's not a definition of the physical tomato in the shop. Yeah, so you're saying that subject to the constraint of definability, computability and language, there exists a subset of problems which are amenable to this kind of logical description.

Yes, and of the most practical interest is most probably to define what computers should do, to define programs by logical means. So going back to the bank example, you want to say the balance should never be negative,

and you want to just say this: balance greater than or equal to zero. And that's all. You don't want to go over the whole control flow of the program and check whether the balance can go negative. We just want to put in this constraint and be guaranteed that it will be met.
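
Today's closest analogue to "just state the constraint" is an SMT solver. A minimal sketch with Z3 from Python, doing verification rather than Tau-style synthesis: the single-step withdrawal transition below is made up for the example.

```python
from z3 import Int, And, Not, Solver

balance = Int('balance')            # balance before the step
amount = Int('amount')              # requested withdrawal
new_balance = Int('new_balance')    # balance after the step

# A guarded withdrawal: it only fires when funds are sufficient.
step = And(amount > 0,
           balance - amount >= 0,
           new_balance == balance - amount)

# Search for a counterexample to the invariant "balance >= 0".
s = Solver()
s.add(balance >= 0, step, Not(new_balance >= 0))
print(s.check())  # unsat: no execution of this step violates the invariant
```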

So another thing you've said is that in the last 20 years or so there have been advances in computation, and in the kinds of things we can do with computers, that make this more plausible, more realistic. Can you tell me about that? Yes, so the big AI winter in the 70s was all about logical AI. It came from the Lighthill debate.

It's available on YouTube; I recommend everyone to watch it. And you can see Mr. Lighthill lecturing and recommending to the British government that, because of the complexity of logical reasoning, they should defund AI, and in particular logical AI. And it was around the time that the class of problems called NP-complete was established, showing that

even the most basic logical questions are at least NP-hard. NP-complete problems were believed to be absolutely intractable: it was believed that even small NP-complete problems would never be solved. There was, up until today, no theoretical proof of that. It was a belief, and it turned out to be wrong. Now we have an empirical proof, not a mathematical proof,

that even very large SAT problems can be solved very fast. In the last 20-30 years, the field of SAT solvers suddenly started to surprise all researchers, showing that what they believed could not be done

can be done quite easily. And since then, there is something like a Moore's law of such solvers. They just become better and better, and they continue being so. They still improve. So we have empirical proof that logical reasoning is feasible in practice, in contrast to what was thought before. And that's why the time to invest in and

build and research logical AI is right now. Now that we know that we were wrong in abandoning it, it was not because it was not good, not promising, it was just thought to be very difficult. I mean, it was a similar thing with deep learning, that everyone wrote it off in the 1980s, and then suddenly we have all of these GPUs and some algorithmic tweaks and data and so on, and it's become tractable.

So you're talking about constraint satisfaction problems. What are the key improvements that have made them tractable? Well, it's some boring stuff. It's a combination of an algorithm called DPLL with another algorithm called CDCL that eventually turned out, for reasons no one knows, to solve SAT very fast. It's basically a bunch of quite trivial heuristics that no one thought were going to solve difficult problems, but it just turned out that they did.
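
For the curious, the DPLL core really is small. A bare-bones sketch with unit propagation and naive branching only; the CDCL machinery (clause learning, watched literals, restarts) that makes modern solvers fast is exactly what is left out here.

```python
def dpll(clauses, assignment=()):
    """Clauses are frozensets of integer literals; returns a satisfying
    (possibly partial) assignment as a tuple of literals, or None."""
    # Unit propagation: a single-literal clause forces that literal.
    while any(len(c) == 1 for c in clauses):
        lit = next(iter(next(c for c in clauses if len(c) == 1)))
        assignment += (lit,)
        clauses = [c - {-lit} for c in clauses if lit not in c]
    if not clauses:
        return assignment            # every clause satisfied
    if any(not c for c in clauses):
        return None                  # empty clause: contradiction
    lit = next(iter(clauses[0]))     # branch on an arbitrary literal
    for choice in (lit, -lit):
        reduced = [c - {-choice} for c in clauses if choice not in c]
        result = dpll(reduced, assignment + (choice,))
        if result is not None:
            return result
    return None

# The party constraints from earlier, now solved by propagation + search.
print(dpll([frozenset({-1, 2, 3}), frozenset({1, -3}), frozenset({-2, -3})]))
```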

Do you see a potential hybrid between deep learning systems and logic-based systems, or are you advocating for purely using logic-based AI in specific scenarios? I advocate for purely using logic, and whenever

statistical reasoning or machine learning reasoning is required, I advocate for expressing it in the logic. After all, if you open a machine learning book, it is written in logic, in something equivalent to logic. Programs that do machine learning are programs; they have their own logic. So I advocate for implementing machine learning inside logic. Sorry, what does it mean to implement machine learning inside logic?

So any machine learning algorithm, you can describe it in logical languages, right? And then you get the benefit that this logical description lives inside a broader logical description. So it's just part of the whole thing that you do. You can augment it with guessing and machine learning capabilities as well.

That makes sense. So pragmatically you're saying we can combine the two modalities by almost creating a compiler, if you like. So we describe what the machine learning problem is using the logical language and then we generate or synthesize the machine learning framework but still within the construct of logical AI. Yes. If all you want to do is only machine learning

then do machine learning. But it never comes in a vacuum. So let's imagine our bank software. It is mostly not about machine learning, it is mostly not about guessing, but sometimes you want to do some guessing, then, all right, express this method inside your logic.

Okay, so that means we could use the logical framework as a form of verification that could be used in line with a machine learning algorithm. Logic is a form of description. What you do with the description, whether you verify or synthesize, that's a different question. Now, normally, machine learning does not come in a vacuum. You want to describe some system where machine learning is part of it. It's not all of it.

In logic you can do that, you can describe your whole system including the machine learning part. That makes sense, but what you're saying though is that the logical language is the source of truth. It's very similar to there were these object relational mappers for building databases and what they did is they might say code is the way we describe database models.

So we don't want anyone building databases directly. There are no SQL statements. No one's talking to the database server. We now describe database access code using, let's say, C Sharp or whatever, and we abstract away the database. So you're kind of saying we abstract away all of the machinations of the software. All we need to do is describe it with logical language. Describe the requirements, the what and not the how. You describe the test that...

the software has to pass. You don't need to describe the software itself. Now this to me seems like a galaxy-brain idea, that we can describe declaratively what the software does. Why is there not an infinite space of possible implementations that could be synthesized from those requirements? There is an infinite space. Normally there will be infinitely many programs that meet your specification. And if you are fine

with the backend eventually choosing one arbitrarily, then that's good. If you are not fine with it, then you have to put more constraints in your spec. So are you describing a workflow where we create the requirements, we synthesize one of the infinite number of possible implementations, we observe the behavior...

And we already know that the program is correct, because the whole point of your system is that it's correct given the requirements. At what point will we realize that we need to make a change? Well, either you had a bug in your specification, you wanted to specify something but you didn't, or reality changed and you want to adapt to it.

And I suppose that might happen in one of two ways. We might just realize that our specification has changed or we might see from the behavior that we didn't get the specification right in the first place. Yes, of course. So can you explain in simple terms how it is possible to create a program automatically from a set of requirements? It seems like so many people are trying to do program synthesis.

and the naive way of doing program synthesis is to have an exhaustive, exponential search over the space of programs. I'm guessing that's not what you do. No, because the space is infinite, so of course no exhaustive search can cover all options. So for this you need a mathematical framework where you can give statements about the infinitely many objects

in your framework quite easily. An example from a different field of mathematics: if you have a polynomial over the complex numbers, and it's not constant, then you know that it has a zero. That's the fundamental theorem of algebra. You don't need to check all infinitely many complex numbers. You are guaranteed that it has a zero. And a similar thing happens in the mathematical framework of the Tau language.

You say statements about infinitely many sentences, and by relatively simple checks you can get a guarantee that something exists. And you can even get an algorithm for how to obtain one example of this thing. So let's give a very simple example that is not about the Tau language, a very simplistic example. Suppose you have to output a number, and you said the number has to be bigger than 10.

There are infinitely many numbers bigger than 10; how are you going to choose? Well, that's not so hard. Just because the search space is infinite doesn't mean it's impossible, right? Of course, in the Tau language it's way more sophisticated than that, but that's one example.
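
In the same spirit, any constraint solver can pull a single witness out of an infinite search space. A trivial sketch with Z3 over the integers; Tau's backend works over sentences, not numbers, but the shape of the question is the same.

```python
from z3 import Int, Solver, sat

x = Int('x')
s = Solver()
s.add(x > 10)            # infinitely many solutions over the integers
if s.check() == sat:
    print(s.model()[x])  # the solver simply hands back one witness, e.g. 11
```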

How did you overcome Tarski's undefinability of truth in defining your language? Yes, so I identified the real-life necessity of having a language that can speak about its own sentences. Beforehand, I don't think anyone had identified the practical need for it. They saw it as a philosophical gadget. Tarski proved that in a certain setting, a certain very broad setting, it is really impossible.

But because of the practical necessity of it, I tried by all means to find a way to do it that is good enough in practice.

And the way I found is to abstract sentences to be only Boolean algebra elements. So we forget about the structure of the sentence, we look at it only as a Boolean algebra element, and then it all works. Maybe we should, just for the audience here, introduce Boolean algebra and also Tarski's undefinability of truth theorem. Yeah, so Tarski's undefinability of truth is: if you take the

language of arithmetic, of natural numbers with addition and multiplication, and of course any language that contains this language, so this language or any bigger language, and you add to the language the truth predicate. Now what's the truth predicate? The truth predicate is something that takes a number, after all it's a theory of numbers, and this number is some encoding of

another sentence in this language, in the language of arithmetic with a truth predicate. This kind of encoding of sentences is called a "Gödel number". So the truth predicate will take a Gödel number of some sentence and will tell you, true or false, whether this statement is true or not. And Tarski doesn't ask you to implement this truth predicate. He only assumes that it exists in your language. And from this assumption he was able to derive a contradiction.

So a language like this will be inconsistent. You will be able to prove every statement and its negation. Now for Boolean algebras, I will give two ways to define them. One way: a Boolean algebra is a bunch of sentences where the operations defined between the sentences are AND, OR, and NOT. Those are the only operations.

Another definition: a Boolean algebra is a bunch of sets where the only things you can do with these sets are union, intersection and complementation. Most importantly, you don't have the membership predicate. You can't look at the elements inside the set. You have only union, intersection, complementation and, I forgot, equality. In the sentence definition, you also have equality, which would mean semantic equality, whether the

sentences mean the same thing, not necessarily being written in the same way. That's a Boolean algebra. So the theory of Boolean algebras will have the quantifiers "for all" and "exists", and then AND, OR, NOT, and equality. So how do you compute whether two sentences mean the same thing? You don't. You just take this for granted. So if you want to work with the language of Boolean algebra over some

Boolean algebra of choice, well, you need to implement this Boolean algebra of choice. You need to give me the AND, OR, NOT, and equality, and if you give me that, then I can take a sentence in the language of Boolean algebra and tell you whether it's true or false in your Boolean algebra.
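
For propositional sentences, that "implementation" of equality can be an equivalence check. A sketch using sympy, with De Morgan's law as the example: the two sentences are written differently but are the same Boolean algebra element.

```python
from sympy import symbols
from sympy.logic.boolalg import And, Or, Not, Equivalent
from sympy.logic.inference import satisfiable

p, q = symbols('p q')
s1 = Not(And(p, q))        # "not (p and q)"
s2 = Or(Not(p), Not(q))    # "not p or not q"

# Semantically equal iff their non-equivalence is unsatisfiable.
print(satisfiable(Not(Equivalent(s1, s2))) is False)  # True
```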

So you've invented the Tau language, and it overcomes Tarski's undefinability theorem in how it can refer to its own sentences, and you've abstracted sentences into these Boolean algebra elements. Can you talk about some of the fundamental building blocks here? So there are simple Boolean functions, Boolean functions, pointwise revision. How do all of these things operate in the Tau language?

Yes, so the Tau language is the standard theory of Boolean algebra with certain extensions. The first extension is: in this language you can write any Boolean algebra element, because in the standard theory of Boolean algebra the only things you can refer to directly are 0 and 1, but in this extended theory you can refer to any element. That's the first extension.

The second extension is that the Boolean algebra it is talking about is nothing but the Boolean algebra of Tau sentences itself. The third extension is the temporal dimension. So when we speak about Boolean algebra elements, we speak about how they evolve with time. So you can refer to this statement now compared to the statement five steps ago. And it is not only

temporal: there is also a distinction between inputs and outputs, and both the temporal dimension and the inputs and outputs are necessary in order to make this a software specification language. This distinction between inputs and outputs is a very strong property, because in the Tau language you can prove that at each point in time, for every input, there exists an output that meets the specification.
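
In a finite toy setting you can pose the same shape of question to an off-the-shelf solver. A sketch with Z3 over single bits, nothing like Tau's infinite domain of sentences: the one-step spec below, "the output must differ from the input", is invented for the example.

```python
from z3 import Bool, ForAll, Exists, Xor, prove

i = Bool('i')      # the input at the current time step
o = Bool('o')      # the output at the current time step

spec = Xor(o, i)   # toy spec: output must differ from input

# Satisfiability in this sense: for every input there exists an output.
prove(ForAll([i], Exists([o], spec)))  # prints "proved"
```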

This is not something you can do in other specification languages. You can do that only if your inputs and outputs come from a fixed, finite domain, but here the domain is infinite: the domain is all Tau sentences. So that would be another extension. And another extension, which you mentioned, is pointwise revision, which is about how to update Tau programs. If you want to change your specification in live software,

how to incorporate only the change and keep all the rest intact. Okay, so the first thing is we've created an algebra that can, for any point in time, for any input, guarantee that the output matches the specification. It can, well, it can check, right? So

you have your specification; maybe your specification does not guarantee that for every input there exists an output that meets it. Well, we can check your spec, and that's what we mean when we say the spec is satisfiable. It means that for every input there exists an output at each point in time. If your spec is satisfiable, then we can synthesize a program that meets your spec.

And how do you take that step? How do you synthesize a program from a Boolean algebra? So this is very complex. At the beginning, I knew only how to do an interpreter, not a compiler, so I could execute your specification, but I couldn't put on the table a program for you to run; I could still execute it. And this is also not trivial at all. In fact,

the full synthesis algorithm I finalized only about a month ago. And now I know how to put a whole program on the table. Can you tell me about that? So does your system support... I'm just imagining here, but are you possibly generating some C++ code and then compiling it? So there's almost like a transpilation process. And in principle, it doesn't have to be C++; it could be any Turing-complete language.

That's right. Yes. And have you gone with C++? I mean, what guided your decision? Well, in the company we work with C++. I'm a fan of C++, so it will be in C++. Right now we have implemented the interpreter, which can execute a specification, but we haven't yet implemented the synthesis. Is there a thing where...

efficiency isn't being respected? You know, like, for example, I train a deep learning model to recognize a face and it works, but it's hideously inefficient. It's just generated some galaxy-brain weird circuit. Is there a thing here where, when you generate a program, it's not necessarily the most parsimonious program? Yes, it's a big thing. It's a big difficulty. And

it will be an ongoing effort. I do know how to identify cases in which things can be executed fast, and moreover, the user, the one who writes the specification, can write it in a form such that it will be executed fast. There is also the possibility of using SAT or SMT solvers, although it is not clear at this point that it will make things better.

And there is of course the ongoing optimization effort. But yes, I guess it will mainly come down to identifying the easy cases and synthesizing functions that don't need to do logical solving: you do the logical solving beforehand and you output a straightforward function that just takes the input and gives you the output.

Is there a relationship between the complexity of the requirements and the complexity of the generated program? Yes, of course. Not in all cases. Sometimes you can take something very simple and write it in a very complex specification, but if you speak asymptotically,

as the size of the specification grows to infinity, will the complexity of the program, in the worst case, also go to infinity? The answer is of course yes. So there's a huge skill component, I suppose, to defining the requirements in such a way as to collapse as much complexity as possible before you compile the program. Yes, but it is shadowed by

the very strong simplification algorithms that can be implemented. Would the simplification algorithms be implemented at the compiler level or at the requirements level? What I mean by that is, you might be able to identify certain patterns of bad design. So I place these requirements into the Tau language, and you might rewrite those requirements to be more efficient. But then perhaps they are less

intelligible to the person who wrote the requirements? What's the trade-off there? Yes, so when we normalize or simplify formal statements, they might become less intelligible. Sometimes yes, sometimes no. If you take a very, very big spec and you normalize it into one line, that's very nice, but it will not always be the case. So yes, human readability and explainability

will be an ongoing effort. So I suppose that sketches out a future where we should think of the Tau language as being a kind of interface. So at the interface point it needs to be maximally legible to humans and then there might be some intermediate minification or normalization and then there'd be another step of compilation where it would be optimized again.

That's right, that's how it works, yes. Yes, very interesting. One of the really cool things that you've done is this concept of point-wise revision. And that allows you to kind of change the system while respecting as much of the old specification as possible.

How does that work? So generally the revision problem is unsolvable. There is a whole field called belief revision that speaks about this impossibility. So, for example, if your old knowledge base says A implies C and B implies C, and now you want to revise it with the new knowledge that says not C,

then you have any subset of the following options: either you delete the rule A implies C, or you delete B implies C, or you say not A, or you say not B. There are many ways, and there is no way to optimally choose one. So revision is really impossible in general. But in a certain setting it is possible, in a very clean and optimal way,

in the setting of the Tau language, which is a software specification language, if we focus on what matters. And what matters is what output the program should produce now, at each point in time. If you focus on that, then you can perform the optimal revision. You simply write a Tau formula that says: if there exists an output that satisfies both the old and the new specification,

then choose it. And if such an output does not exist, then choose one that satisfies only the new specification. And that's all there is to it. Pointwise revision is one of those things that, when you see the answer, looks very easy. But before you see the answer, it is really very hard. So let's say you have a big program with a lot of moving parts,

with a lot of features, and you want to change only one small thing in one specific small component out of many. Do you need to write the whole software from scratch? Well, no! You want to write only this change. And indeed, using pointwise revision, you write only the thing that you want to be new, and the rest will remain intact.
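
A sketch of the rule just described, in plain Python over a finite pool of candidate outputs. In Tau this is a formula quantifying over all sentences, not a loop, and the specs here are hypothetical stand-ins.

```python
def pointwise_revision(old_spec, new_spec, outputs):
    """Prefer an output satisfying both specs; else fall back to the new one."""
    both = [o for o in outputs if old_spec(o) and new_spec(o)]
    if both:
        return both[0]
    return next((o for o in outputs if new_spec(o)), None)

# Old spec: output is non-negative. Revision: output must be even.
# 7 satisfies only the old spec, -2 only the new, 4 satisfies both and wins.
print(pointwise_revision(lambda o: o >= 0, lambda o: o % 2 == 0, [7, 4, -2]))
```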

But I suppose one of the advantages of the logic-based approach is global consistency, which is this idea that here's the system and this is what it does in all situations. And when you start placing

I'm going to use the word local, you know, which is to say in this time, in this situation, you do this, otherwise you do that. Doesn't that, if anything, create illegibility? It makes it harder for people to understand as a whole what the thing does. Yes, and a partial remedy is the normalization process.

So for example, in pointwise revision, I said "if there exists an output that satisfies both". Okay, so the keyword here is "exists"; in logic, this is called a quantifier. And in atomless Boolean algebras, we have quantifier elimination, so we know how to make this "exists" disappear. And that's only one example.
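
A classical instance is Boole's elimination of the middle term: in any Boolean algebra, "there exists x with (a AND NOT x) OR (b AND x) = 0" holds exactly when a AND b = 0, so the quantifier reduces to a quantifier-free condition. A sketch checking this in the two-element algebra with sympy:

```python
from sympy import symbols
from sympy.logic.boolalg import And, Or, Not, Equivalent, true, false
from sympy.logic.inference import satisfiable

a, b = symbols('a b')
f = lambda x: Or(And(a, Not(x)), And(b, x))

# Over {0, 1}, "exists x with f(x) = 0" unfolds to: not f(0) or not f(1).
exists_form = Or(Not(f(false)), Not(f(true)))
eliminated = Not(And(a, b))          # the quantifier-free equivalent

print(satisfiable(Not(Equivalent(exists_form, eliminated))) is False)  # True
```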

During the normalization process, everything will be crunched together, all redundancies will be removed, so it will be more intelligible. I cannot tell you that it will be fully intelligible. But then there's the issue that you are essentially rewriting the rules. Doesn't that mean that for the humans who created the specification, it's now been transformed into a slightly alien form that they no longer recognize?

Yes, it can happen. It happens all the time in standard programming. And yes, it is something to mitigate. There is no magic solution. No one can mathematically define what it means to be intelligible. If you could define it, then we could implement it, but no one can define it.

Isn't it fascinating, though, because in normal software programming there's this cognitive interface between the software engineers and what the computer does, and we come up with these high-level abstractions; we might use the mediator pattern or the observer pattern or all of these different design patterns. And

weirdly, the way the code actually works on the computer doesn't resemble those abstract patterns very much at all. It's just a cognitive interface. But it would be very strange if my Python interpreter rewrote my code for me and said, "Actually, this is the way I'm going to represent your code,"

because what that means is the provenance, my mental journey of getting to that place, has now been scrubbed, and I can't then make subsequent steps in that cognitive space. The Python interpreter does rewrite the code for you; it just doesn't show it to you, but it does. Oh, I see, so you're talking about rewriting the code but still maintaining the rules as the developer created them. Again, it's your choice, right? Now,

the way Python rewrites your code, it's always horrible. You never want it in this way. But the way we rewrite your Tau specification, sometimes it's even better than how you wrote it initially. I suppose what I'm getting at with this, I mean, certainly if you think about how a C++ compiler does optimizations to make algorithms run faster, the optimizations always seem very alien to us.

But are you making the argument that optimizations in this language space can actually seem even better to us than the original thing we came up with? Can be. Not always. Can be, sometimes. So one example I already gave you is quantifier elimination: all the "for all" and "exists" that you write, you can make them disappear.

How do you see the Tau language being incrementally adopted in existing software development workflows? So let's imagine we're not going to throw everything out and start again; we're going to adopt it incrementally inside systems, you know, subsystems. How might that look? The Tau language is designed to achieve one and only one goal, which is to make software controlled by its users.

Software is there for one and only one thing, which is the users. How can users control the behavior of the software and change it over time? Right now they have very little control, if any. If you want any software to be controlled by its users, you will need to use the Tau technology. You have no other choice.

It all comes back to what we spoke about, software specification and pointwise revision: the user can say, "I want this to hold," and then the whole old specification will remain intact, except the part that the user wants to change, which will take effect. So this whole thing, again, was built for only one reason: to make software controlled by its users. And this is something that you want in virtually any software.

And there is no other way, only the Tau way. This makes a lot of sense. So I suppose there's a bit of an ideological piece here, which is that you're a fan of decentralization and a fan of letting the people who use software control it, essentially. So at the moment, we have these centralized teams of developers, and they decide what features to build into software. And you're saying that in the future, we'll have software which is

actually written by the users that use the software. Yes, and moreover, it's not only about decentralization. I'm the user. I'm the boss of my computer. I decide what runs on my computer. Why should other people decide for me? I want this power, right? Okay, but how could that work? So one way of doing that is you write your own software that runs on your computer, but...

If we broaden out a little bit, there could be software which you share, you know, you're a user and thousands of other people are users and you collectively improve that software. So you decide as a collective what the best new features are to add to the software. And there's some kind of coherence mechanism, if you like, and new requirements get added over time. How does that work?

That's right. So indeed, that's another step which is also very fundamental to the whole Tau project: software that is collectively controlled by all of its users, and in particular a peer-to-peer network, a blockchain network, that is controlled by all its users. Because what is the alternative?

The alternative is what we already have today: a blockchain network, an economy, that is controlled by a small group of developers. How can that have shoulders wide enough to support a real economy? Well, it just can't. I'm not saying that the other extreme is the way to go, that it should be controlled by all of its users in a completely equal manner. Maybe not.

Maybe it should be some kind of meritocracy, I don't know. And for this reason, we invested a lot of effort in how to formalize laws of changing the laws: how to be able to change not only the program, but also the way the program is changed. Only in this way does the governance mechanism

also govern itself: there is a way to change the governance mechanism itself. Only in this way can you have a blockchain network that can reliably hold an economy in the wider sense, one that lives in society. It's not just a coin in a vacuum; there is a whole society, a whole market, a whole economy around a currency.
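Here is a minimal sketch of what "laws of changing the laws" could look like: the rule that validates amendments is itself part of the state, and can only be replaced through itself. The class, the voting interface, and the simple-majority rule are all illustrative assumptions, not the actual Tau governance mechanism.

```python
# A minimal sketch of self-amending governance: the amendment rule is state,
# and replacing it is itself governed by the current amendment rule.

class Governance:
    def __init__(self, amendment_rule):
        self.amendment_rule = amendment_rule   # the law of changing the laws
        self.laws = {}

    def propose(self, key, value, votes_for, votes_total):
        # Ordinary laws change only if the current amendment rule approves.
        if self.amendment_rule(votes_for, votes_total):
            self.laws[key] = value

    def propose_amendment_rule(self, new_rule, votes_for, votes_total):
        # The amendment rule can only be replaced through itself.
        if self.amendment_rule(votes_for, votes_total):
            self.amendment_rule = new_rule

g = Governance(lambda yes, total: yes * 2 > total)        # simple majority
g.propose("block_size", 2, votes_for=7, votes_total=10)   # passes
g.propose_amendment_rule(lambda yes, total: yes * 3 >= total * 2,
                         votes_for=6, votes_total=10)     # now a 2/3 rule
g.propose("block_size", 4, votes_for=6, votes_total=10)   # fails under 2/3
print(g.laws)   # {'block_size': 2}
```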

There are two components to this. The first component is using the blockchain to create a decentralized, coherence- or consensus-based approach to writing software. So it's not that Tau uses the blockchain; it's that the blockchain uses Tau. Tau doesn't need the blockchain, but the blockchain needs Tau. The blockchain needs to be redefinable over time by the users.

And like any other software, you cannot have software that is defined in a sound way by its users without the Tau technology. So you're saying: use the Tau language to define the blockchain? Yes. Well, I'm trying to understand that. Okay, so we use the Tau language to define the blockchain, and there's some kind of consensus algorithm, which itself could be defined with the Tau language.

And what is the currency? You know, a blockchain has a coin, it has a currency. How is that related to everything we've just been discussing? Well, when we say software controlled by its users, the software still has to do something, right? It's not only controlled by its users; it has to do something, and that something is also controlled by the users. And in blockchain...

That's what it does: it does cryptocurrency, in a way that is controlled by the users. So if you want to change how the cryptocurrency behaves, or if you want it to do other things, you can. So, for example, smart contracts are just one aspect of users controlling the system.

So as a user controlling the blockchain, you say, "I want this behavior to take place," and it will take place. Okay, but I'm just trying to understand here. Before, we were talking about my using the Tau language to create specifications for software, which is then synthesized.

And now we're talking about using the Tau language, as a collective on the blockchain network, to decide how the blockchain network operates. But is there an intersection between those two? I mean, what are we doing here? Are we designing new ways of controlling blockchains, or are we using the Tau language to create new types of software? Creating new types of software; in particular, this blockchain software, yes.

One is a special case of the other. Okay, but you're primarily focused on the blockchain flavor of this? So, new blockchains come and go every day because they supposedly offer this feature or that feature that was supposedly not offered before. Where is it going to end? There are thousands of blockchains. Well, it's going to end with a blockchain that, however you want to change it, will change on the fly.

And that's why I wrote in an article on Twitter that Tau is the end game of all blockchains. It is indeed. Every new idea that you want to incorporate into the blockchain, well, just say it in Tau and it will automatically become what you want it to become. So one use case we speak about is trading knowledge. So you could say,

in the very same way that you say anything about the software, controlling the software as a user. So one of your statements could be: "If someone sends me an answer to this question, then I will give them coins" (and you will have to define what it means to be an answer to a certain question). You just say it and it will happen. Or another concept is the automatic businessman.
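A minimal sketch of that knowledge-bounty idea, assuming a hypothetical pay primitive and treating "what counts as an answer" as a user-supplied predicate; none of this is actual Tau syntax.

```python
# A minimal sketch of a knowledge bounty: whoever first sends a message that
# satisfies the user's answer predicate receives the coins. 'pay' and
# 'is_answer' are hypothetical stand-ins, not a real blockchain API.

def make_bounty(question, is_answer, reward, pay):
    """Return a message handler that pays the first valid answer."""
    claimed = False
    def on_message(sender, message):
        nonlocal claimed
        if not claimed and is_answer(question, message):
            pay(sender, reward)
            claimed = True
    return on_message

# Toy usage: the answer predicate just checks membership in a known set.
ledger = {}
handler = make_bounty(
    question="a nontrivial factor of 15",
    is_answer=lambda q, m: m in (3, 5),
    reward=10,
    pay=lambda who, amt: ledger.update({who: ledger.get(who, 0) + amt}),
)
handler("alice", 4)   # not an answer, nothing happens
handler("bob", 5)     # bob receives 10 coins
print(ledger)         # {'bob': 10}
```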

You just say: "This is the assets that I have, this is the things that I want, this is the deals I'm willing to take part in". And all of this you don't have to say even explicitly, you can say it implicitly, and then you tell the system: "Make profit for me, software that does what the user wants". Of course users are going to tell it: "Make profit for me,

and it will make profit for you, right? Of course, whenever possible. But the concept of the automatic businessman, that's strong stuff. And that's only one aspect. Okay. Because many of the audience might not know much about blockchain technology in general, just to make it clear: the technology is, I suppose, the most efficient form of structured financialization. So you can create a market, and

I'm sure many folks have used smart contracts before. So you have certain guarantees. You can say: here's a piece of software that runs on the blockchain with certain guarantees, and it's decentralized, so we have this web of trust. And you're saying that with the Tau language we can actually adapt the blockchain, we can make it do what we want, while retaining the guarantees that we had before with blockchain technology. Yes. So to contrast

Tau with other blockchains that support smart contracts: the differences are mainly in two ways. First, even if you have smart contracts that are, let's say, updatable, where you can replace a contract with another contract, it is still not the case that the code of the blockchain itself is a contract. In Tau, they are indeed at the same level; they are the same thing. The second difference is that

existing smart contract languages are just programming languages. They are not specification languages that can speak about other sentences in the same language. So in the Tau language you could have a contract that says the balance can never be zero; you don't have to walk through the whole flow. And you can also say: if some other contract implies (implies is a logical word) a certain reality, then do this and that.
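To illustrate the difference in flavor, here is a sketch of checking an invariant such as "the balance can never be zero" against every reachable state of a toy ledger, rather than threading checks through the control flow. The bounded exploration and the transition model are my assumptions, not Tau's semantics.

```python
# A toy sketch of a specification-style invariant: state the property once
# and check it over all reachable states, instead of guarding each code path.

def reachable_states(initial, transitions, depth):
    """Enumerate states reachable from 'initial' within 'depth' steps."""
    frontier, seen = {initial}, {initial}
    for _ in range(depth):
        frontier = {t(s) for s in frontier for t in transitions} - seen
        seen |= frontier
    return seen

invariant = lambda balance: balance != 0   # "the balance can never be zero"

states = reachable_states(
    initial=10,
    # Deposits of 5; withdrawals of 3 floored at 1, so balance stays positive.
    transitions=[lambda b: b + 5, lambda b: max(b - 3, 1)],
    depth=4,
)
print(all(invariant(s) for s in states))   # True in every explored state
```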

So those would be two major differences between the smart contract aspect of Tau and traditional blockchains. So, very broadly, are you saying it's a type of meta-blockchain? Yes. But Tau is its own metalanguage, because it speaks about its own sentences. Tau is its own metalanguage.

Now, usually when people speak about metalanguages, there is the metalanguage and there is the object language, and the metalanguage can do basically anything it wants with the object language. But Tau is its own metalanguage, restricted to Boolean algebra; that's the only thing it can do with itself. So yes, the Tau blockchain is its own meta-blockchain.

This is making a lot of sense. So you're saying the reason we have this proliferation of blockchains is that they are essentially handcrafted: people decide, "I'm going to code these smart contracts," they write them in code, and there's a kind of brittleness. And you're talking about a blockchain that not only has a metaprogramming, amorphous capability, but whose behavior is controlled by the users.

Yes, that's right. So that's what you've built? It's still not ready, but almost. When are you going to roll this out? That's a good question. I don't want to make any promises; I was too optimistic in the past. I thought it would already be ready by now. Unfortunately it's not ready yet, but the hard parts are behind us. I'm sure you can appreciate that the hard part is to implement the engine of the Tau language,

and this is ready to a large extent. There is still more work on it, but the hard part of this hard part is over. So, yeah, I don't want to give any specific time estimate, but it's coming. Do you see this as the single blockchain to rule them all, or do you think there will be many Tau blockchains? I think that, because it is controlled by the users, the users will also set the incentives, and I expect the users to

disincentivize forking and incentivize keeping it all in one place. Okay, so when you launch this new blockchain, people will be able to buy tokens? Yeah, people are able to buy tokens already right now, the Agoras token, AGRS, but when we launch the system the users will be able to control all aspects of the blockchain and the token.

Okay, so people can already buy it. So the blockchain has been launched, but it doesn't have the new software running on it? No, the blockchain is not launched. There is currently a temporary token implemented on Ethereum that will be swapped for the full token when the blockchain is ready. So what people can buy right now is the temporary token that will be swapped.

What do you think about the angle of collective intelligence? Do you think it's possible, using a system like Tau implemented on the blockchain, that it might be able to solve problems that individual brains can't solve? So there's a kind of collective-work angle, if you like, to solving big problems. Yes, just like software development. You know the famous quote: what one developer can do in one month, two developers can do in two months,

because development doesn't scale, even though there is more brain power. So imagine many thousands, even millions, of developers working in a way that does scale: they just state the requirements and the system crunches it all together. So of course we are going to get software of a scale and quality never seen before. What happens when people just vehemently disagree with each other?

It all depends on the laws of changing the laws, right? Which is yet another thing to be decided on the system. So do you think there could be a social benefit to a system like this? For example, right now we have echo chambers, and people are not really aware of their logical contradictions. Do you think a system like this could actually help us get closer to various truths in the world? What it more directly helps us achieve is a computer that

does what we want it to do. Computers already do things we want them to do, but not everything and not perfectly. It will take software to the next level of satisfying the users. What does it mean to be good software? It means to do what the user wants. That's what it means. And to be better software means to do more of what the user wants. What would be the best software? The best software would be software that the user

tells what to do, and it does it. So by this definition, which is not so crazy, Tau is the best software: it will just do what the user wants it to do. You know, Steve Jobs famously said that people would have asked for faster horses, that people don't know what they want. What do you think about that? It is true. So indeed, the definition I gave is a bit simplistic, but it is already a big deal.

But I suppose maybe we're not giving the users enough credit. Maybe if the users actually had interactive control over the requirements that generate the software, they would initially start building faster horses, but they would learn pretty quickly, as Apple learned, what would be a better course of action. Indeed. So the collaborative approach of Tau is not only that people submit requirements and they are combined together;

it is also about a setting for discussion: people having a discussion about what software should do. And indeed, one of my old definitions of Tau, which is still relevant, is that Tau is a discussion about Tau. On Tau, people discuss what Tau should be like, and Tau becomes the consensus of the discussion. So in a social setting, people discuss what they want to happen, and then the system builds what we call an opinion map,

which is a mapping of all the opinions in the discussion: which is a special case of which, which contradicts which, and so on. And the consensus of the discussion becomes the software update. And even what it means to be a consensus is definable; that's the laws of changing the laws.
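A minimal sketch of such an opinion map, modeling opinions as Boolean predicates and computing which implies or contradicts which by brute-force enumeration over a tiny model space; the encoding is an illustrative assumption, not how Tau represents opinions.

```python
# A toy opinion map: opinions are predicates over Boolean assignments, and
# edges record implication ("special case of") and contradiction.

from itertools import product

def models(n):
    # All assignments to n Boolean variables.
    return list(product([False, True], repeat=n))

def implies(p, q, n=2):
    # p is a special case of q: every model of p is a model of q.
    return all(q(*m) for m in models(n) if p(*m))

def contradicts(p, q, n=2):
    # p and q contradict: no model satisfies both.
    return not any(p(*m) and q(*m) for m in models(n))

a = lambda x, y: x and y    # opinion A
b = lambda x, y: x          # opinion B
c = lambda x, y: not x      # opinion C

print(implies(a, b))        # True: A is a special case of B
print(contradicts(b, c))    # True: B and C cannot both hold
```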

Do you think there'd be a learning curve for users of the system to define requirements using this logical language? Yes, yes, of course. Definitely, at the beginning you will have to learn the language. Now, it's not such a complicated language, it's quite a minimalistic language, but just because you know the language doesn't mean it's easy. Many programmers have mastered a certain programming language, and

it's still very, very hard for them to build software. Learning the language is the easy part. So yes, there will be a learning curve, and we will try to make it easier over time by using so-called controlled natural language and better interactive tools that help you express yourself. So this is going to be an ongoing effort.

It's interesting what you said about language. So you're almost saying that the actual domain understanding is harder than the syntax, the technical understanding of writing in the Tau language. Yes: just because I speak a little bit of English doesn't mean I'm Shakespeare, right? And do you think there might be cul-de-sacs?

So are you worried about the system becoming stuck, because people initially thought that this thing would be a good idea, and then it transpired that something else would be a good idea, but now, because of the way the system is designed, we can't change it? So, indeed, Tau is all about change. It's designed to change. Now, you can change it in a way that prevents other changes.

There is nothing to be done about that. If I give you the power to change, then you also have the power to avoid change. By the way, you also have the power to never avoid change; this you can also do.

I'm just thinking about... I mean, some religions, for example, had error correction built into them, which meant that they stayed the same and didn't really change for millennia, while other religions were far more dynamic, splintering and mutating all over the place. So do you think it's a bit of a lottery: that depending on the trajectory of change,

the system might be quite sclerotic, whereas if the chips fell slightly differently, it might be a very different outcome? Yes. It's like... suppose, let's imagine that a new country is formed and this country starts as a democracy. And I ask you: what will be the laws of this country 20, 50, 100 years from now? Well, no one knows, right?

But then what if people said we need to have some kind of veto? What if the system became so entrenched that it no longer served the desires of the users, but unfortunately it has become locked in because of previous decisions? It can happen, yes. But the users have logical AI helping them to have guarantees about what can and cannot happen.

They have a monetary incentive to keep the system going. It is a collective effort; it's not just the mistake of one person, right? This is the risk you accept when you want things to change, but this risk is mitigated by all these things. I suppose it's quite interesting, because in democratic capitalism...

Noam Chomsky famously said in Manufacturing Consent that we have uninformed consumers making irrational purchasing decisions. Now, of course, in Tau you can say that we actually eliminate the deception, that people can actually see why things are happening in a certain way. But I suppose the main philosophy, though, is that users know what's best for them.

There are people who say we need to have experts, and we need to protect people from themselves. What do you think? Well, I would guess that many users will delegate their "vote" (there is no voting in Tau; there is no need for voting), that is, delegate their voice to experts they trust. And maybe Tau will end up as a form of meritocracy.

Given the learning curve, do you think we might use something like language models, or some syntactic sugar, some way to make it easier for people to program rules into the system? Yes; indeed, one consideration we have is to use LLMs to translate from natural language to the Tau language.

And yeah, we are not working on it right now, but that's definitely something on our list. What do you think governments will think about this? Do you think they might try to subvert a system which is so democratic that the users are in control? I don't know. Is it something that concerns you? No. There is nothing too scary in this system, right? It's a blockchain that is controlled by the users. Why should it worry anyone?

Well, what would be the existential threat to such a system? I mean, how is it protected from interference by other actors? As it will be deployed, it is not protected from interference by other actors; the users will have to set the rules, right? And we will start with a testnet, not with the mainnet. So we will start with a blockchain with fake coins, where the users will set the rules,

and because it is only a testnet, there is no harm in restarting it. So we will let the users play with the rules and, on the system, come up with some initial rule set, and if they reach a deadlock, we can just push a button and restart it from scratch. This will be a very useful sandbox for the users to decide what the governance should look like, and then this can be implemented on the mainnet.

Well, first of all, do you agree that the philosophy of the system is that right now there is a power asymmetry? So there are folks that take away our agency. And let's just say agency means the ability to control the future or our future. And a system like this is emancipating in the sense that it gives users more agency.

but usually when people get too much power and agency, other people want to take it away again. For one thing, that may be, but then it will be the users' choice to give up their agency. For another, there is a whole other dimension to all this, which I guess is much bigger, which is how a program is collectively defined and updated. You can do it, of course, in traditional coding, but...

it's like the Stone Age, right? It's very, very limited and very, very difficult. But when you just say what you want to say and it automatically becomes a computational reality, that's a different level. Given that you're talking to people who may know nothing about the blockchain whatsoever: why should they be interested in the blockchain? It's the future of the economy, right? If you think about how the traditional economy works, with paper money and printing money and

interest rates and all the politics and all the dirty stuff, yeah, that's crazy.

So just help me understand more about that. Let's say we have the Bank of England: they set the interest rates, they can print new money with quantitative easing, and there are all of these different levers and actions going on that we have no control over. And you're saying that the blockchain is a new type of financialization where there's more transparency, where we just understand better how it works? Yes, that's one aspect of it, yes. And what other aspects are there?

So, let's say, automation, right? If you want some automatic process: if you go to your bank app, which automatic financial processes can you set up? Very limited ones, right? But on the blockchain you can do virtually anything. So you mean having custom software that runs on the blockchain and does things that you want it to do? Yes. And that might be...

I don't know, when someone sends me some money, I want to automatically move it somewhere, or I want to run this logic program and, depending on its output, I might want to notify someone or do something. You want to write a program that pays your taxes automatically, right? It does the whole accounting, the whole calculation, works out how much needs to be transferred to the tax authority, and does it automatically. Why are we not there yet? It's 2025.
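A toy sketch of that automatic-tax idea: compute what is owed from a transaction list and transfer it in one step. The flat rate and the transfer primitive are illustrative assumptions, not a real tax rule or a blockchain API.

```python
# A toy automatic tax payment: sum taxable income, compute the liability,
# and send it. TAX_RATE and 'transfer' are illustrative assumptions.

TAX_RATE = 0.20  # assumed flat rate, purely for illustration

def pay_taxes(transactions, transfer):
    """Do the accounting, work out what is owed, and transfer it."""
    income = sum(amount for amount in transactions if amount > 0)
    owed = round(income * TAX_RATE, 2)
    transfer(to="tax_authority", amount=owed)
    return owed

print(pay_taxes([1200.0, -300.0, 450.0],
                transfer=lambda to, amount: print(f"sending {amount} to {to}")))
# sending 330.0 to tax_authority
# 330.0
```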

I can attest to that. Just getting my accountant access to my... you know, we have a VAT system in the UK, and the amount of paperwork required! Then I had to get him validated: they had to send him an email, he had to send a special code back. Just paying my tax is incredibly difficult. They tried to set up a direct debit, and then it didn't work, and then it bounced, and then they fined me, and then I had to appeal the fine, and all this kind of stuff. It's a nightmare. So you're saying that by having this

structured interface, I could plug my accountant in there, I could pay my taxes automatically, the whole thing would just run, and then we could almost have a marketplace of accountants, maybe. So, you know, I could say: I don't like this accountant, so I'm going to plug this other accountant in, and it would just work. Well, you don't need accountants at all if it's all automatic, right? I mean, it's easy to see that the system is in the Stone Age, right? As you just described.

In 2025 you expect everything to be automatic, everything instant, everything accurate, but it's not like this. The monetary system is one of many things in life that is a century back, right? If not two centuries back. There is so much room for advancement, and, to your question, that's another reason why blockchain is interesting.

Isn't this one of those things where we could automate it, but weirdly we don't? For example, in large corporations there are advisory committees, there are, you know, gating systems; you can't check code in without release control. And

my bank, for example: every time I pay my staff, they're phoning me up, they're blocking the transaction, they're asking: who is this person? Did they send you an invoice? Do you know them? And it's the same conversation every single time. And maybe they're just so worried that if we automated this system, it would spiral into chaos and there'd be lots of fraud. Is that the reason they've put all of these manual checks in? I don't know, but I do know this: just like you asked before,

if the users control the blockchain, will it spiral into deadlocks and problems like this? That's exactly what happens in real legal life, right? There are so many laws. How many new laws pass every year? And how many old laws are deleted every year? It's like this already, right? That's already happening in a system without logical AI, in a system that

doesn't care about your opinion. But in a way, what you're arguing against is bureaucracy. There are so many processes in the business world where you get gatekeepers. Are you saying that a big part of that is just because it creates jobs for people, and people like to interfere, and whatnot? The only thing I advocate against is illogic.

I'm saying: don't be illogical. And illogic you find in LLMs, you find in social structures, you find everywhere. And to bring logic to our lives, we can mechanize it with computers. We can fix, I don't want to say all of it, but a large part of it. Amazing. Well, Ohad, thank you so much for joining us today. It's been an honor. Thank you, Tim. My honor and my pleasure. Thank you.