
Complex Dynamic in Networks

2025/6/28

Data Skeptic

Chapters
This introductory chapter sets the stage by discussing the limitations of solely analyzing network structures and the importance of understanding network dynamics. It introduces Baruch Barzel and his work on network dynamics and complex systems, highlighting the need for a unifying theory in network science.
  • Simply analyzing network structure is insufficient; network dynamics are crucial.
  • Baruch Barzel's research focuses on network dynamics from a physicist's perspective.
  • Network science lacks a unifying theory to explain all networks.

Shownotes Transcript

You're listening to Data Skeptic: Graphs and Networks, the podcast exploring how the graph data structure has an impact in science, industry, and elsewhere. Well, welcome to another installment of Data Skeptic: Graphs and Networks. Today we're getting towards the physics end of things, talking about the dynamics of networks. And Asaf, you introduced me to our guest. Can you tell me who we're going to talk about today?

Yes, we're going to talk with Professor Baruch Barzel. His lab is all about network science from a physicist's point of view. Yes, some very interesting things going on. As most physicists do, he gets interested in universality and how we can describe, I guess, all networks under one unifying theory.

Does that appeal to you? You don't necessarily come to network science from physics. Are you interested in a unified theory? Well, I like to unify things, I have to admit. Actually, I think that's what I thought was so fascinating about network science, the fact that you can see the same phenomena in different data sets. That's really exciting stuff. But what Baruch adds is that

He tries to find these patterns, these phenomena, not only in the static networks that we talked about in our episodes, but also in dynamics on network topology, which is very interesting.

Yeah, so we've gone over the small world hypothesis, and I think we'll touch on that in the interview. And a lot of these other situations where it shows us networks have some amount of universality. They follow these standard laws, a power law in terms of connectedness. Every network has its one giant connected component. But I thought maybe all of that was just kind of

As you say, examples of real-world networks. I had never really, until this interview, thought of it in the context of, like, let's have one overlying theory that governs all networks. It's still not a theory, right? I think we talked about it when I quoted Mark Newman.

A very famous network scientist. He said that network science still lacks, you know, a whole theory that unifies the entire field and so on. I think he said it's like physics without the Schrödinger equation.

Baruch does find some interesting stuff, some interesting dynamics on network topology. What I can add is that Baruch Barzel looks at it from...

let's say, from theory, right? From using models and so on. But actually, some of these models you can find in real-world networks. Baruch said, spoiler alert, that there are some, you know, surprising dynamics you can find on network topology. Like, the network dynamics are controlled by hubs, right? That hubs...

control the dynamics, they pass the information on the network, and the network depends on the hubs. But actually what he finds is that there is this kind of dynamics, but also an opposite kind, a kind where the hubs are like the buffers of information flow in the network. When it comes from Baruch, it's theory, but actually we find it in a real-world network, in this case, Facebook.

Facebook published a very interesting paper. It was like seven years ago, I think. What they did was they tried to study viral campaigns on Facebook and they found four kinds, we'll focus on two of them, two kinds of viral campaigns that have different dynamics, although they run on the same topology, right? The topology of Facebook. And by the way, it's a very interesting paper because it

It's the first time you can see they studied influence on a network, but actual influence, like physical influence. Usually, until then, when people studied social networks, they studied things like Twitter and memes and so on.

But what they did, they studied two campaigns. One campaign is a meme campaign, viral memes. And again, it doesn't require much energy. And in this case, they saw the dynamics were ruled by the hubs. The hubs were the ones that spread the information all across the network. But the other viral campaign they looked at was the challenge campaign.

And specifically, they looked at the ice bucket challenge. I don't know if you remember it. It was a part of an ALS campaign. You pour a bucket of ice on your head? I don't remember exactly how it helps, but it was a good thing. Sure. Yeah. Well, we talk about it now, right? Seven years later. Actually, it's more like 10 years later, I think.

This kind of a challenge wasn't spread by hubs because it required lots of energy to do it. It was spread by using a challenge. You challenge someone specific to do it. Yeah, the ice bucket challenge is a great example, I think, if we consider it as a meme challenge.

And it's undergoing evolution, if you'll humor me, in that in order for it to persist, it has to continue on to infect the next person. And maybe the right way isn't to go through a hub, but if I specifically challenged you and then you've already gone through it, so why wouldn't you challenge the next person? And it can kind of persist in that way. And as Baruch would say, it transfers through the outliers. Well, he didn't use that word, but the edge nodes, so to speak. Not the hubs, but the signal transmits through the outliers.

In this case, along the edges, along the outer edges. Right. But what we need to remember is that most of these campaigns won't work. There's, of course, a bias here because what Facebook looked at were the successful campaigns, right? Most of the campaigns weren't successful. Oh, for sure. But on the successful campaign, we can talk about the dynamics, right? Absolutely. Yeah.

It's interesting too that as much as we can say about network structure, statistics describing the PageRank or modularity or all these otherwise interesting features, it's not obvious what dynamics will emerge from the underlying network directly, or at least no one's cracked that code to my knowledge.

We don't know it because, well, that's what Baruch is telling us, that we know the topology, but dynamics is a whole different ballgame. Yeah, we almost have to observe the system to know how it evolves, or simulate it, or things along these lines. Go Baruch Barzel. Absolutely, let's jump right into the interview.

My name is Baruch Barzel. I know it's a tongue breaker, so I accept any reasonable mispronunciation of this name. I am a professor of mathematics and physics at Bar-Ilan University in Israel. We also have a startup that emerged from our lab's research, which is called OpMed AI, and it does optimization of medical administration and scheduling. A couple of things to follow up on. Maybe could we start with the Barzel lab?

Could you tell me some details about what's going on there? Our lab is situated in Bar-Ilan University. It's affiliated with the Brain Science Center in Bar-Ilan and with the Mathematics Department. So we do research kind of in both areas. The main focus of the lab is network dynamics, which is what I think we're going to be talking about. Who is in the lab? So we are roughly 10 people in the lab, including myself,

our dedicated lab manager who does everything for us. Her name is Batel. And the rest are students. So they are postdocs, PhD students, master's students. Those are, we typically keep a number of like eight to 10 students at any given point in time in the lab. And with a focus in network dynamics, what does that look like day to day? What kind of research projects are going on? So network dynamics is,

Let me be very technical here. It's the idea of how systems behave. And I think that later on, I'll kind of give more introduction. But when you talk about a complex system, first of all, we have a lot of components that are connected to one another. And in mathematical terms, what connects them to one another are mechanisms of interaction. Technically speaking, those are nonlinear differential equations. So most people in my lab either research the structure of a network

or they research the behavior, the dynamics of a network. So what does our day-to-day look like? Well, we need to solve a lot of nonlinear differential equations. There are a few ways to do that: with pencil and paper, with a computer, and with data. And depending on the student and their tendencies, some students are very analytical. I'm also kind of an analytical guy, so we like to put a stronger emphasis on the pencil and paper.

Other students are great at computation. Most of my students are much better programmers than I am. And so they do a lot of numerical simulations and that's the fundamental part of their work.

And then there are the ones that are really talented at working with real-world data. So every project is kind of tailored. It's an artwork that's kind of tailored to what I think is interesting and what are the talents and tendencies that the student brings with them. So working with nonlinear differential equations poses some challenges.

Why isn't there just one simple algorithm to predict the future state? Why do you need these more advanced techniques? So let me tell you the story of how the main theme of our lab's work began. And I think we should start the story at the turn of the century, at the inception of network science. So we're now at 1998, 1999, and there are two interesting discoveries in network science.

People look at networks. They can be ecological networks, animals connected to one another, social networks. Everyone knows social networks. Brain networks, neurons linking physically to one another, genetic networks where genes chemically regulate each other. So all of these kind of networks, people look at them. And I think that the main discovery that led to the birth of network science is the fact that researchers found a lot of commonalities between these very different systems.

We're looking at biological systems, social systems, infrastructure systems like the power grid or the internet. And those come from very, very different scientific domains. Different people are researching them. Engineers are researching the power system. Biologists are researching...

genes and proteins. And suddenly, at the turn of the century, we realized that when we abstract these systems and just look at the structure, who's connected to whom, what gene regulates what other gene, what person is friends or speaks over the phone with what other person. When we look at the data, not me, but my predecessors, when they looked at the data, they found that despite the fact that these networks come from very, very diverse domains,

They have universal characteristics, recurring statistical features that we find in those different systems. Now, to be clear, I'm not saying that your brain and Facebook have the same network structure. That would be too much, or at least it's not yet there. But I am saying that your brain and Facebook have some similar statistical characteristics when you look at them from a large macro level bird's eye view.

There are two fundamental discoveries which piqued my interest. One is the small world phenomenon. You take networks that can have a thousand components, like your biological networks, they have a thousand or ten thousand components. You take networks that have millions of components, like social networks or the internet today.

And you find that the pathways that lead between them, how many handshakes do I need to connect two people? Everyone already knows this trope, right? Or how many coughs or sneezes do I need to transmit a virus from China to the rest of the world? And it turns out that the pathways are extremely short, usually single-digit numbers. And this we find in all of these networks. So we ask ourselves, okay, what is this thing about the social network that leads to these very, very short pathways?

And the answer is nothing. This has nothing to do with social networks because we see it in social networks. We see it in biological networks. We see it in ecological networks. We see it in the brain. Practically every network we put our hands on is a small world that has extremely short pathways. The second discovery is about the one thing everyone asks about networks, right: how many friends do I have? In technical terms, this is the degree of a node. How many connections does a node have?
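As a quick illustrative aside (not from the episode, and assuming the networkx library is available), here is what those extremely short pathways look like on a synthetic scale-free graph, sampling node pairs rather than averaging over all of them:

    # Sketch: average "handshake distance" in a synthetic scale-free network.
    import random
    import networkx as nx

    G = nx.barabasi_albert_graph(n=100_000, m=3, seed=42)  # 100k nodes, preferential attachment

    random.seed(0)
    nodes = list(G.nodes)
    lengths = [nx.shortest_path_length(G, *random.sample(nodes, 2)) for _ in range(200)]
    print(sum(lengths) / len(lengths))  # typically a single-digit number of hops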

Barabási is a very well-known scientist. I also did my postdoc with him in Boston, and I just recently visited him during my sabbatical. So we work a lot together. We're also good friends. Back in 1999, he's a young scientist, and he decides to look at the structure of these networks, and he finds that they don't really have an average node.

They are what we call scale-free. What is a scale-free network? So technically speaking, what they found is, I mean, okay, so let's take a step back. Usually when we look at, I don't know, the height of a human being. So the average human being, say male, is about 1.80 meters.

Okay, if you are in the States, I don't know, 5'11" or something. I appreciate that. Yeah, it's about that. Okay, but let's be scientists. You should be able to cope with the metric system. So the average human being is, say, 1.80m, but then there are some people who are slightly taller, slightly shorter than that. So we have a little bit of variance, and there are these very rare people, like the NBA players who are about 2m tall, but that's about as tall as you get. A person who's 2m tall,

is likely to go through their entire life without ever meeting anyone taller than they are. But when we ask how many friends you have on Facebook or how many interactions a protein has with its surrounding molecules, the picture is fundamentally different. The vast majority of nodes have a very, very small number of neighbors.

and then there are some that are 10 times, 100 times, 1000 times or even 10,000 times more connected than average. Now let's go back to the height distribution to put this in perspective.
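As a small illustrative sketch (my numbers, not data from the episode), this is the kind of gap being described between a "height-like" normal distribution and a heavy-tailed degree distribution:

    # Sketch: the most extreme value in a "height-like" normal sample versus a
    # heavy-tailed "degree-like" sample of the same size.
    import numpy as np

    rng = np.random.default_rng(0)
    heights = rng.normal(loc=1.80, scale=0.07, size=1_000_000)   # metres
    degrees = (rng.pareto(a=2.0, size=1_000_000) + 1) * 3        # heavy-tailed toy degrees

    print(heights.max() / heights.mean())  # only slightly above 1: the tallest person is close to average
    print(degrees.max() / degrees.mean())  # hundreds of times the average node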

Building multi-agent software is hard. Agent-to-agent and agent-to-tool communication is still the Wild West. It's clearly the emerging future, but how do you achieve accuracy and consistency in non-deterministic agentic apps? That's where agency comes in. They have a very clever spelling. Here's how it goes.

A-G-N-T-C-Y. I'll give it to you again in a minute. The agency is an open source collective building the Internet of Agents. The Internet of Agents is a collaboration layer where AI agents can communicate, discover each other, and work across frameworks. For developers, this means standardizing agent discovery tools.

seamless protocols for interagent communication, and modular components to compose and scale multi-agent workflows. Build with other engineers who care about high-quality multi-agent software. See where you can fit in this ecosystem. Visit agency.org and add your support. That's A-G-N-T-C-Y dot O-R-G.

Everyone's talking about AI these days, right? It's changing how we work, how we learn, and how we interact with the world at a tremendous pace. It's a gold rush at the frontier. But if we're not careful, we might end up in a heap of trouble.

That's why I recommend you check out Red Hat's podcast, Compiler. This season on Compiler, they're diving deep into how AI is reshaping the world we live in. From the ethics of automation to the code behind machine learning, it's breaking down the requirements, capabilities, and implications of using AI. Check out the new season of Compiler, an original podcast from Red Hat. Subscribe now wherever you get podcasts.


Discovering a protein that is a thousand times more connected than average is like going on the street of Tel Aviv or New York and meeting a person who is two kilometers tall. This has never happened. In networks, this is not the exception. This is the rule. So heterogeneity is the game. This is something we see across the board in any one of these networks. And this leads the scientific community to understand that there's something more fundamental

Something more fundamental going on here, and that's what physicists like me, I was brought up as a physicist, that's what we like, right? So we like this universality, and this pretty much led to the idea of network science. Not the science of protein networks, not the science of social networks, the science of networks. That's the whole point. To me, as a physicist, that seemed to indicate that these networks should be extremely unstable.

Because that means that everything that happens, like every little protein that shakes, every neuron that sparks, is supposed to impact the entire system. Everything is so close to one another. Of course, we know that systems don't behave like that. In fact, they are extremely robust and extremely stable. So I started asking myself, what is the trick? When we say that the pathways are extremely short,

we're making a structural statement. We're saying that the structure, the architecture of the network has very short pathways. When we're saying that influence spreads very rapidly through the system, we're now making a dynamic statement. Is it that clear to us that structure and dynamics are so intuitively connected to one another? Maybe the story is a little bit more complex than that. Does structure really translate so easily into dynamics?

The appeal of network science was that everything seemed the same. Everything seemed universal. You don't need to be an expert in biology. You don't need to be an expert in brain science or in ecology. Look at the network. All the networks, statistically speaking, are the same. But if you want to talk about the behavior of the system, the dynamics of the system, you need to add another layer of description. I will start with the very intuitive level, and then I'll say a sentence about the hard math.

Basically, what are you doing when you present a social network? You're abstracting all the details of the system. You're saying, oh, all the people are points and all their social connections are edges, are lines, are links between these points. When you draw a pair of points, a pair of nodes and a link between them, what does that link represent? Does it indicate that node A transmits a virus to node B, like in a social network?

Does it indicate that node A consumes node B, like in an ecological network? Or does it indicate that node A chemically binds to node B, like in a protein interaction network? Those are different mechanisms of interaction. So if you really want to understand behavior, you need two layers. Structure and

And then the mechanism of interaction that I would say runs on that structure. So the structure just tells you who interacts with whom. But the mechanism is, what is this interaction? What is the mechanism that drives the interaction between the nodes? Suddenly, social networks, biological networks, brain networks, they all have different interactions.

The universality, the fact that they all look the same was when we just looked at structure. But if you want to look about the actual mechanisms of interaction, brains are very different from social systems and power systems are very different from ecological systems. They don't have the same mechanisms.

Now, I promised a little bit of mathematical technicalities. So this second layer, the mechanism of interaction, the way we describe this is through a differential equation. The differential equation tells me, the structure of the equation tells me, is this a mechanism of

Virus transmission, is this a mechanism of load transfer, like in a power system, or is this a mechanism of information spreading, like in an online social network? If you ask a network scientist, what is your brain, they will say it's 100 billion neurons connected through a network.

If you ask people in my lab, what is a brain? They would say, oh, it's 100 billion coupled nonlinear differential equations. The network is what equation is coupled to what other equations. Okay, but that was very, very technical.

Roughly speaking, what it is, is we have the underlying structure. On top of that structure, we have mechanisms. And here is where the bad news begins. Because once we add these mechanisms, all the universality seems to kind of fall apart.

suddenly, yes, your brain and Facebook, statistically speaking, they have similar structural features. But when we look at the actual behavior of the system, when we start adding these nonlinear equations,

Suddenly, everything becomes diverse and very, very unpredictable. So we kind of broke down the universality. That's the bad news. I promise good news, okay? Well, yeah, I'm curious because we started out with the discussion about how the structure is universal, that things like the small world phenomenon, you're seeing them everywhere in these seemingly unrelated places.

Can we get past this by maybe having a taxonomy of mechanisms? Could two things that have the same mechanism work in the same way? Or is every network grouping truly unique once we finally look at the dynamics?

I'll take this step by step. The answer eventually is exactly what you said, but I need to say what you said in different words to actually get the insight behind that answer. You know, when we ask ourselves about network behavior, the one thing me and you never talked about, okay, I mean, you already mentioned the solution. The taxonomy is the solution. We don't use this word. We use a different word, but that's not the point.

You already came to the solution, but we actually didn't ask, what is the question? I mean, okay, so we didn't talk about the behavior of a network, but what do we mean by dynamic behavior? We mentioned a couple of things, but what is the one encompassing idea behind network behavior? So let's think about it. What did we mention? We talked about social networks, and there it seemed like the natural question about dynamics is how does a disease spread in a social network?

Or how do ideas or fads or fashions or rumors spread in the social network? Anyhow, so we talk about blackouts and we talk about epidemic spreading and we talk about signals propagating in the brain or biochemical information propagating between genes. The one thing that I think all of these share in common is that all of them talk about how information propagates in network environment. If you are home with COVID-19,

That is information that propagated in the form of viruses. And if you're looking at how signals spread in the brain, it's information that propagates in the form of bioelectrical signals. So the mechanisms, we already talked about that, the mechanisms are different. But the one universal question that we're always interested in is how does information spread through the system?

It turns out, and this is precisely where you were going, is that when we look at these patterns of information spread, again, from a bird's eye view, the macroscopic patterns of how information spreads through the system, then we rediscover that universality.

Still, systems are very different from one another. Still, when you look at them at first glance, everything seems diverse and unpredictable. But it turns out that we can put this diversity into discrete boxes. We don't call it a taxonomy. In physics, we like to call them universality classes. Mathematicians like to call them equivalence classes. But once I know what box you're in, and a lot of different systems may be within the same box, I know that you have similar behavior.

I'm curious about those universal classes. I think I get where you're going that, yeah, we can kind of group some that are similar together. Maybe social networks and virus transmission are more similar and power systems and road systems have a similarity or something along these lines. Do we have a firm list of classes or is this sort of an emerging labeling phenomenon?

I grew up as a physicist, as I already mentioned, so to me these examples kind of come up as we speak. When I talk to a person, I say: water, sound, light, and a sports stadium.

What do they all have in common from a physicist's perspective? Do you know? Water, sound, light, and a sports stadium. I'm not sure off the top of my head. Okay, I'll give you my answer. Waves. Ah, yes. Okay, they're all characterized by waves, right? Now, if you look at them as a person who is not scientifically brought up, or if you never knew that they were all called waves...

There's absolutely no reason why you would look at the ocean or you would look at the light emitted from your light bulb above your head and you would think that this is the same phenomenon.

or listen to my words and think that, oh, I'm seeing the exact same phenomenon or a wave at a stadium. All of these look very, very different. The mechanisms are different. The particles are different. Everything is different about them. But from a physicist's perspective, they're all the same phenomena because they're all solutions to the same equation. And yes, there are details that are different, but they have universal characteristics shared. Now, why did I mention this example? Because it shows you that very, very, seemingly very, very different systems...

might be actually described in terms of similar patterns.

And it's the same here. This taxonomy is not as intuitive as you would think. It groups together systems that you might not think have something in common, but mathematically we find that they do. You have a network. The network has some dynamic state, say all the proteins in their current concentration, all the neurons in their current activity levels. One of those components, one of those nodes suddenly spikes. It sparks.

your friend suddenly got the flu, and now they became infected. Before, they were not. So they changed their state. This spike instigates a signal, which impacts the neighbors, and then the next neighbors, and it starts spreading through the system. In mathematical terms, we call these perturbations. We made small perturbations in the system, and we look at how these perturbations spread through the system. And we can ask many different questions now. We can ask

Are the perturbations going to spread rapidly or slowly? From the time that I became sick, how much time will it take the virus to reach someone who is four or five steps removed from me?

We can ask, where will the signal that was instigated at that one node, where will it be observed at different times in the network? Which nodes will respond first? Which nodes will respond later? So there are many, many different questions we can ask about this signal propagation. So I'll go to one of these questions. Let's ask the question of how much time.

How much time will it take a signal to reach from one node, which is the source of the perturbation, to some other node, some other place in the network, which is the target of the perturbation? Basically, it's a question about the time from the initial failure or the time from the initial probiotic that you just took and introduced some perturbation to one of your bacteria in your gut to the time that it actually changed the population of some other target bacteria through the network.

So what are the typical timescales of this propagation? We started by observing simulations. We took the same network, the exact same network. And we just coupled that same network with different types of dynamic mechanisms, like neuronal mechanisms, social mechanisms, disease spreading mechanisms, and so on. All the different equations that I was talking about.

And we instigated the same signal. We just repeated the exact same experiment. And then we tracked how the signal spreads through the system. Now, I know what the listeners are saying. Oh, we know the answer. The signal starts in the middle, and then the neighbors, and then the next neighbors, and then the next, next neighbors. And that's how things spread in the network, right?

It doesn't work that way. We tried a lot of different systems, and for some of them this worked, but for some of them we had completely counterintuitive forms of spread. Same network, same signal, same experiment, just a different dynamic mechanism.

And we found fundamentally different forms of spread. Sometimes the signal was actually seen at the periphery of the network and only then collapsed inside. Very counterintuitive. This is what happens when you couple complex, very heterogeneous networks with nonlinear differential equations. You get extremely unexpected behavior.

So when a person would look at our results, and I want the listeners to picture those different results, if you see the propagation patterns and they are visibly different, they would say everything here is diverse and unpredictable. The network was the same network, but the dynamics is all over the place. Let's start with the pencil and paper idea. I want you for a second, again, I'm a visual person. Let's forget about the network. Let's just picture two components, two nodes.

I perturb the node on the left and I just look at the response of its immediate neighbor. So there's no network here, just how one node passes information to its nearest neighboring node. This doesn't happen instantaneously. It takes time. Once a protein was spiked, its neighboring protein is going to say, oh, look at that. The protein next to me just raised their concentration.

I'm absorbing the information, I'm now responding to the information, and I will now increase my own concentration or maybe decrease my concentration. I will respond to that change in the behavior of my neighbor. But that doesn't happen instantaneously. There's a typical timescale for how much time this takes to happen. If you look at the response time of every node in the system,

that response time depends on that node's degree. So if that node has a lot of neighbors or a few neighbors, that impacts its response time. I am responding to my nearest neighbor, but if I have a thousand other neighbors, they're also affecting me. They might be pulling me down. They might be enhancing, giving me feedback. So they will be affecting the way I respond.

So it turns out that the response time of a node scales with that node's degree.
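In symbols, and in my notation rather than a quote from the episode, the claim is roughly

    \tau_i \;\propto\; k_i^{\,\theta}

where \tau_i is the response time of node i, k_i is its degree, and \theta is an exponent set by the dynamic mechanism; the examples that follow work through \theta = 2 and \theta = -1.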

But the difference is, what is the scaling? Imagine that, for example, in a biological network, the scaling is like the degree squared. So the amount of time it takes a node to respond scales with its degree to the power two. Say you're 10 times more connected than I am, because you're a friendly guy and you have a podcast and everyone wants to have a beer with you, right? But if your response time...

scales like your degree squared, it means that if you're 10 times more connected, you're also 100 times slower, because it takes you more time to respond. But now let's change the dynamics and go from the biological system to a social one. Now you and I are no longer proteins. Now we are people in a disease-spreading system.

Here, it's the other way around. Now your response time scales negatively with degree. If you're 10 times more connected than I am, you're also 10 times faster to respond. The network has these very, very highly connected nodes, and they really do govern the dynamics of the system. Our initial intuition that the hubs, those very connected nodes, they determine how the system will behave. Yeah, that was correct.

What was incorrect is that we just assumed that we know how they determine the behavior of the system. But the answer is no. Sometimes they are the bottlenecks. I mean, I like to look at the network structure as the roadmap, the static roadmap, and the network behavior as the traffic patterns on that roadmap. It's a metaphor, but it's a very good metaphor to have in mind. Now, we all know that the same roadmap can have very different traffic patterns at 8 a.m. and at 2 p.m. in the afternoon.

And this is how the analogy works. The same network can have very, very different information propagation patterns if it's a biological network or a social network or a brain network, depending on the type of mechanism of interaction between the nodes. So the network is a static roadmap. The information spreading is like the cars

And now it all boils down to the question, are the hubs the traffic jams or are the hubs the freeways? Mathematically, there's more diversity than that. But if we want to boil it down to a simple, intuitive framework, the different dynamical systems, they look all over the place. But once you know that this is what you're looking for, you're looking for the GPS that

Where are the traffic jams and where are the freeways, right? The GPS doesn't look at the distance. In fact, we call this the network GPS. So when you look at this through the lens of the network GPS, suddenly all of this diversity becomes extremely universal. And systems can be boxed into three different classes of dynamics.

Some systems, I mentioned gene regulation and some ecological systems, depending on the equations that you believe run the system, belong to the box where the hubs are the bottlenecks. Hubs are extremely stable, they respond very slowly, and they basically mitigate the spread. They don't actually help the spread.

They slow it down. On the opposite side, there is the box where the hubs are the freeways. Social networks and disease spreading belong to this box. The hubs, the people who are very well connected, they are the highways that spread things very, very rapidly. Some ecological networks belong there too. So once again, you see an ecological network and a disease spreading network. If you would think about them from a mechanistic perspective, you wouldn't classify them as the same system. But in our classification, they both fall in the same box.

That is the box where the hubs are the expediters. In the middle, there is the case where the degree just doesn't matter.

Mathematically speaking, it's when your response time scales like your degree to the power zero. So this is the zero universality class. We talked about k to the power two. That's the two universality class. We talked about k to the minus one. That's the minus one universality class. You can understand that there's also the one, the three, the minus three. Okay, so there are more than three boxes, but I just distinguish between the major departments.

In the middle, there is the zero universality class. That's a very interesting universality class. It means that the hubs just don't matter.

This is a very democratic system. Now remember where we started. We're back at the turn of the century. The big discovery about networks is that they're scale-free, that they have this extreme diversity, and they're completely undemocratic. There are nodes that are a thousand times more connected than average. But if you're in the middle box, at the zero universality class, the fact that you're scale-free has absolutely no consequences on the dynamics. Those are precisely the systems that

where the only thing that matters is your distance. Your neighbors, then your next neighbors, then your next, next neighbors, because it doesn't matter if your neighbor is a hub or a small node, they're roughly all the same. So that would be like a naive GPS that just looks at the distance between locations and doesn't think about the fact that this is a very jammed junction and this junction is a very free one.
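A rough sketch of that "network GPS" idea as I understand it, with an illustrative weighting rather than the lab's actual formula: give each node a delay that scales like its degree to some exponent, and measure travel time as the accumulated delay along the fastest path instead of the hop count.

    # Sketch: hop count versus delay-weighted "network GPS" distances.
    import networkx as nx

    def network_gps(G, source, theta):
        """Arrival-time estimate where crossing an edge costs the responding node's delay, degree**theta."""
        H = nx.DiGraph()
        for u, v in G.edges():
            H.add_edge(u, v, weight=G.degree(v) ** theta)  # time for v to respond to u
            H.add_edge(v, u, weight=G.degree(u) ** theta)  # time for u to respond to v
        return nx.single_source_dijkstra_path_length(H, source)

    G = nx.barabasi_albert_graph(200, 3, seed=7)
    hops = nx.single_source_shortest_path_length(G, 0)  # the "naive GPS": just count hops
    slow_hubs = network_gps(G, 0, theta=2)               # hubs as bottlenecks
    fast_hubs = network_gps(G, 0, theta=-1)              # hubs as freeways
    print(max(hops.values()), max(slow_hubs.values()), max(fast_hubs.values()))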

By introducing these new classes, do we re-achieve universality? Precisely. So no one sees that I just gave you two thumbs up. This is precisely the message. If I had one message, I have several messages, but the first message that I think comes from the research in my lab is that

Structure is too crude. If you don't add dynamics, you get this one unifying universality class. All the systems look the same. If you add dynamics without our boxes, then it's just all over the place. Everything is diverse and unpredictable. Scientists don't like that. Systems that are diverse and unpredictable are also not interesting. We cannot say anything about them.

We take the middle ground. What we show is that, yes, once you add dynamics, this one unifying universality class is broken down, but it doesn't shatter into pieces. It breaks down into discrete boxes.

And once you do the math and you know how to classify your system, what box it is in, suddenly what seems diverse and unpredictable makes sense. You can now see, oh, I can make a prediction here. I know this pathway, which is enriched with hubs, is going to be a very, very slow pathway or a very, very fast pathway. And this is precisely the kind of, we did that. We actually laid out the nodes based on these calculations and suddenly everything fell into place. Everything looked like the way you would think it should look like from inside out.

So is there a grand unified equation that can describe the dynamics of all networks all at once? There is one. Actually, we like to call it the Barzel-Barabási equation, but unfortunately, we're the only ones who are calling it that. It's a very simple equation. It just makes very few assumptions about the system, so it's very generic.

The equation just assumes that every node has some activity. Let's call that x. The activity depends on context. It can be the instantaneous load on a power system component, it can be the probability of you being infected, it can be the concentration of a protein or the activity or expression level of a gene at any given time, and it changes with time. And the equation shows how x interacts with all its neighbors. So it just has two components.
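Put schematically, and in my notation rather than anything written out in the episode, an equation of that shape looks like

    \frac{dx_i}{dt} \;=\; F\big(x_i(t)\big) \;+\; \sum_{j=1}^{N} A_{ij}\, G\big(x_i(t), x_j(t)\big)

where x_i is the activity of node i, A_{ij} is the adjacency matrix saying who interacts with whom, F is the self-dynamics term, and G is the pairwise interaction mechanism, the two components described next.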

The first component is what we call the self-dynamics. Imagine you put a protein in the middle of the room, absent any interactions. What is it going to do? It's going to do something. It's going to undergo degradation. Maybe it's going to duplicate. Something's going to happen. There is some self-mechanism that happens without any interactions. That's the self-dynamics. So it's a nonlinear component of the equation that describes your self-dynamics.

The second term is a little bit more complicated. It describes the interaction between you and all your other neighbors. So it's just two terms. Each one of these terms is kind of a nonlinear function. And if you tell me, that's my expertise, right? If you come to me and tell me, I think my network does this. It's proteins. What they do to each other is that they chemically bind and then one of them annihilates the other or whatever. You tell me what is your understanding of the mechanism of your system.

And then our expertise is to translate this mechanism into the appropriate nonlinear equation. Well, nonlinear differential equations seem to prohibit the possibility of a quick symbolic solution or a real, you know, like Excel calculation that gives me the answer. Right. What challenges do you face working in this regime? What is the brain? The brain, from our perspective, 100 billion neurons, is 100 billion coupled nonlinear differential equations. And the answer is that even if you give me one nonlinear differential equation,

More often than not, I'm not going to know how to solve it. Most equations we don't know how to solve. Of course, if you give me 10, 100, 1000, or a billion coupled differential equations, it's completely hopeless. So we have two tools at our disposal. The first tool, which is not really ours, everyone has it, you just need to program: we can simulate the equations.

So if you give me 100, okay, 100 billion is kind of too much for us. But if you give me 100,000 coupled nonlinear equations, I can simulate them in a computer in my lab. Okay, I've already forgotten how to do that, but my students can still do that. And that's where we get what we call the ground truth. I mean, what the computer says is what the equations really do.
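A minimal sketch of that "simulate the equations, get the ground truth" step, assuming networkx and scipy and using SIS-style epidemic dynamics as one possible choice of mechanism (my choice for illustration, not necessarily the equations used in the lab):

    # Sketch: integrate coupled nonlinear ODEs, dx_i/dt = -x_i + lam*(1 - x_i)*sum_j A_ij x_j,
    # on a scale-free network after perturbing a single node.
    import numpy as np
    import networkx as nx
    from scipy.integrate import solve_ivp

    G = nx.barabasi_albert_graph(n=500, m=3, seed=1)
    A = nx.to_numpy_array(G)
    lam = 0.05

    def rhs(t, x):
        return -x + lam * (1.0 - x) * (A @ x)

    x0 = np.full(500, 0.01)
    x0[0] = 0.5  # perturb one node and watch the signal spread
    sol = solve_ivp(rhs, (0.0, 50.0), x0, t_eval=np.linspace(0.0, 50.0, 200))
    print(sol.y[:, -1].mean())  # average activity once the perturbation has propagated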

Can I solve them symbolically with pencil and paper? Of course not. I'm not a magician. But what we have is we developed a toolbox to ask these equations questions and receive answers without actually solving the complete equation. One of them I just told you, for example: how does your response time

scale with your degree? That's not a complete solution to the equation. I didn't give you x(t) for every one of my components. But for this more high-level kind of question, what we've developed is a set of mathematical tools that helps us answer it without actually solving the complete equation. The one trick that physicists, or any scientist, have up their sleeves is the notion of symmetry.

The idea is to identify what is the right symmetry. What does symmetry do? Symmetry helps us reduce the system. For example, if I need to talk about...

black and white dots, it's very hard to convey the information to you. I think to get it exact, this is black, this is white, there's a lot of information. But if they're organized like a checkerboard, it's a much more symmetric structure, then I can very compactly describe the system to you with just two parameters. Okay, start with white and black and you can complete it from there, right? So symmetry helps us simplify complicated systems. The most natural symmetry that we all talk about is the average.

For example, the statistical physicist looks at a gas. It has billions and billions of atoms, and you cannot solve all the equations for all these atoms. You say, oh, let's solve it for the average atom.

Why am I saying that this is a symmetry? Because when you solve something for the average atom, you're kind of assuming that all atoms are roughly the same. So whenever we solve something for the average, which is fundamentally in science almost everything we ever do, we're making an assumption of symmetry, that all our components

are roughly the same, similar enough so that we can model them by the one typical average component. But that symmetry is completely irrelevant in complex networks because the one thing I told you about complex networks is that they are not average.

You have nodes with one friend, 10 friends, 100 friends, 1,000 friends. So nodes are extremely diverse. You cannot, so if you solve for the average node, you're going to be completely missing the behavior of the system. The whole, I mean, think about the solution I told you. It all depends on whether the hubs are bottlenecks or whether the hubs are expediters. Well, if you solve for the average node, you lose this effect. There is no hub in the average node. The whole point is that it's all driven by the outstanding nodes, the ones that are not average.

The symmetry that we assume is that maybe you have 100 friends, so you're a hub, and I just have two friends, so I'm a very small node. So I am very different from you. But let's talk about your friends and my friends. My assumption is that your 100 friends...

were extracted from the same statistical bucket as my two friends. Me and you are individually very different. We see a very different surrounding. My whole information about the network comes from two friends. Your information about the network comes from your hundred friends. You see the network very differently from the way I see it. But your hundred friends and my two friends, they are on average the same. And that's the assumption. That was the one trick we

We didn't make it up. I mean, we did make it up, but we're not the only ones who made it up, right? I mean, a lot of people have come up with this idea. It's called a degree-based mean field. But our kind of contribution to the world is that we have shown how we can use this trick to get things like how time scales scale with degree and so on. Well, there's a lot of real-world networks that various organizations are looking at. The obvious ones like the Facebook social network or Google looking at the internet as a graph.

And even, you know, smaller companies, everyone's got a network in some sense. I think largely they're getting good at asking, let's call it, the statistical questions about the network. You know, what's its degree? What's the average number of inbound links, this kind of thing. But maybe they're not as good at asking about dynamics. What are the types of questions an organization should start with?

I think the answer is that sometimes when you look at data, you really don't know what to be looking for. The idea that you have these unifying principles, that you have a prediction about the system. Here I have a prediction that response time scales with degree. Now I know what to be looking for. Now I know how to organize my data in a meaningful way. So here's like, I think what everyone asks themselves, okay, I want to influence the network. So I want to know who are the influencers.

But that's a different question; influencers are not necessarily the source of information. So here's how I frame the question. Imagine that you're a human cell. You have roughly 15,000 genes. You perturbed one gene. That's the source of the information. You measured the response on another gene. That's the target of the information. What do the other 14,998 genes do? They weren't the source of the information.

They weren't the target of the information. They were the mediators. They were the pipelines. They were the pathways through which the information spread.

It turns out that most of the time, you are not the source or the target of the information. You're a pipeline. And if you really want to understand how information spreads in the system, you want to know how good a node, a link, or a pathway is in being an intermediate. I mean, think about it. On Twitter, what do you do more? You tweet or do you retweet or like? Tweeting is being the source of the information. Retweeting or liking is actually being the pipeline.

It's being the mediator of the spread of the information. Let's just talk about nodes. Who are the nodes that are the best mediators of information in the system? And once again, the intuition says, go to the hubs. There's more details here, but roughly speaking, there are three boxes. One box is really that. It's the 80-20. It's the Pareto rule box.

The vast majority of information, 80% of information is mediated by 20% of the nodes. There are clear pipelines of information. The hubs are the main mediators of the information. So they are the ones governing the flow of information in the system. If you shut them down, information in the system will really stop flowing. But then there is a middle box.

where once again, the scale-free structure is completely divorced from the dynamic behavior. All the nodes kind of participate equally. We call this homogeneous or democratic spread. It's not centered on the hubs. All the nodes participate in the information spread evenly. And the last box, and that's the most counterintuitive box,

is the box where information actually favors the peripheral pathways and tends to avoid going through the hubs. You shut down the hubs, you barely affect the spread of information. Well, first of all, information reaches that node from somewhere in the network.

The node receives that information, and in order to pass it on, it needs to respond to the information. It needs to do something. The information needs to change the state of the node, and then that node passes the information on to its neighbors. So to be a good pipeline, it's not enough to be a good influencer. You also need to be very attentive. You need to be a good listener. You need to be a good responder.

Okay, so here's my life story. You go to a party. You want to spread a rumor in the room. So you find this person who has a thousand friends and you say, oh, I'll whisper in her ear and she'll pass the rumor on to all her thousand friends.

But you're not taking into account that you're not the only one whispering in her ear. There are 999 other people whispering in her ear. She's just not listening. You think you should target the hubs to pass information. It depends on what box you're in. If you're in the box where the hubs are the shock absorbers, their role in the network is actually the opposite.

They actually hinder the spread of information. They don't allow for the flow. Now, we think about networks as entities that are designed for information spread. But that made us think, no, no, no, wait, it's the other way around.

So sometimes what networks are designed to do is to buffer information. Sometimes it contributes to better spread of information. And in many of the systems, what we seek is stability. And stability means that you want to buffer the spread of information. And in these kind of systems, the role of the hubs is actually to stabilize the system.

Not to allow efficient information spread, to be the shock absorbers that make sure that information doesn't spread between different modules, that there's no spillover from one region of the brain to another region of the brain. We don't want this situation where every time a neuron spikes, your entire brain lights up. That's not a good situation for us. So networks are not necessarily designed to spread information. They're sometimes designed to buffer information.

Well, I know we initially got in touch, Asaf connected us, over the topic of hypergraphs. I'm actually glad we talked about the dynamics instead. I think this is a super interesting conversation. Maybe we'll touch on that in the future together at some point. The hypergraphs, yeah, it's a completely different topic. We entered it recently and, you know, I...

You decide, but I feel it's a topic for a different podcast. That's my feeling. Yeah, I'm excited to follow it then, see where it goes. Maybe we'll touch base down the line. What can you tell me about opmed.ai and maybe its intersection with some of the ideas we've been discussing?

So in OpMed.ai, we were asking ourselves, where can we apply this to a real-world problem? I mean, I'm a theoretician, but I always had it at the tips of my fingers to create some product, do something real, you know, make a dent in the world that goes outside the lab. Of course, I wouldn't have done it on my own. The only credit I really take for OpMed.ai is that I teamed up with good partners who were actually able to create a product. And the product looks at hospitals.

Why hospitals? Because, you know, we look at infrastructure networks and...

We think this is the most crucial infrastructure there is. Our lives literally depend on that infrastructure. And hospitals need to be very agile and diverse. They constantly respond to perturbations. You know, you create a schedule, but then something goes over time and suddenly one of your nurses gets COVID and you need to get them to isolation. So you constantly deal with changes and perturbations to your complex network. And there's this domino effect. So you kind of see where the network comes into play.

We're now applying it to many different fields, but our killer application, the one we started with was the operating rooms.

In the operating rooms, you have a given set of resources. You have a number of rooms, a number of staff members, a number of anesthesiologists, nurses, doctors, and you want to treat as many people as you can. So the operating rooms are the bottleneck of modern hospitals. The idea is to use networks to kind of design a smarter schedule, a schedule that will fit more procedures within the existing resources.

and a schedule that will be more robust and resilient to these kind of perturbations where something goes over time. But you can create a smart schedule that can absorb that. Remember those shock absorbers? Create a smart schedule that when something goes over time, it doesn't unravel your entire day.

So the hospital managers, they see more profit. The OR is like 70% of the hospital income. So they're happy. The hospital staff members, they see a less hectic work environment and they feel a little bit more efficient and more predictable. And of course, the stakeholders are not just the stakeholders. It's all of us. Because for us patients, it means that instead of waiting six months for your procedure,

We can rearrange it in a more efficient way and actually fit you in within one month or within two months. We're working with a major hospital, Mayo Clinic, in their cardiovascular department. They told us the difference between a six-month wait time in cardio and a two-month wait time is many times the difference between life and death.

or at least the difference between not having complications and having complications down the road. So for me, that's it. I mean, that's the one thing we never know. I hear from the doctors that they're happier. I hear from the managers that they were able to make better profit margins. I never hear from the patients, but I know that there was a patient who was able to get their treatment sooner rather than later. And

and they suffered less or maybe even lived. Well, that's a fantastic outcome. Nothing else changed, right? It was just coordination through the algorithm rather than they bought new equipment or anything, just better use of what they already had. Yeah, exactly. It's all about optimizing. You have your given resources, given constraints. There are many constraints there, of course. And within that...

I can find you a lot of slack that you did not know you have. And what's next for you? So this obviously, you know, what happens is that startups already feed back into your research. So our research is, you know, our main leg of research is still in the network dynamics. But we're now looking at two other things. I mean, because dynamics is about predicting the network behavior. And the startup I just told you about is about influencing the network behavior. It's about optimizing the network.

can we design the network towards a desired behavior? So for example, if a network failed,

Can we intervene to kind of transition it back to a better, to the desired state that we want it to be? Can we design the network? We have a big project with a power company in Israel on how to make small interventions to the way they design the structure of the network that will make it more functionally resilient and have less blackouts.

So that's the next step, and that is optimizing your network towards the desired behavior. Your interventions are either structural, you can add links, remove nodes, change the wiring, or dynamical. Yeah, ping this node a little bit. I'll give you an example. You have a microbiome network in your gut, many, many bacteria. It's a whole ecological system in your gut. And when it reaches an undesired state, there are a lot of detrimental health implications to that.

How can you intervene? I cannot rewire anything there. I mean, I don't determine which bacteria interact with which other bacteria. The only intervention I can do is a dynamical intervention. I can, for example, introduce a boost of a certain bacterial species with the hope that it will push the network back towards activity. We do this. We take probiotics. That's exactly what it does.

But with our network dynamics research, we can find, okay, what is the optimal bacteria? What should you take? How often should you take it? What would be the dosage? So how can you either structurally, that's the power system, or dynamically, that's like the microbiome, intervene in order to optimize your system for a desired behavior? Not just predict, but also influence the behavior of the system.

And is there anywhere listeners can follow you online? You can just look up the Barzel Lab and you'll follow us. I have some things on YouTube, but I'm not sure. It's not as rich. And now we have this podcast.

If you're interested in completely other areas, I also give talks on the humanities, and there's a lot of material online on that. That actually does appear on YouTube, but that's mainly in Hebrew, and I think it's a little bit more niche. For some, sure, and we've had more than a few Hebrew-speaking guests, so maybe someone out there will put some links to all of the above in the show notes. Perfect. Well, Baruch, thank you so much for taking the time to come on and share your work.

Thank you so much, Kyle, for listening and for giving me this opportunity. It was fun.