You're listening to Data Skeptic, Graphs and Networks, the podcast exploring how the graph data structure has an impact in science, industry, and elsewhere. Well, welcome to another installment of Data Skeptic, Graphs and Networks. Today, we're going to take on a couple of interesting topics. One I didn't think would necessarily show up in this season, and that's creativity. There's some opportunity to look at the notion of creativity through networks and network science. Asaf, is that something you'd considered before?
Well, I haven't, and that's why I find Yoed Kenett's papers and ideas very interesting, because it's a new field for me. He asks different participants to try and solve a particular riddle, and we'll get into that riddle during the interview segment. But through that, and through some word associations, which together form a network,
he's trying to take some measurements of, I guess, that moment when you have an aha experience: once you see the answer, it's obvious, but prior to that, it's quite a struggle to find it. Yoed describes that aha moment, the creative process, as a small world effect.
Can you hear me clearly when I say small world? Because my accent, I don't know. It's a bit Israeli Bostonian. No, I hear small world. On my podcast, by the way, I used to say Paolo. But I said Paolo. And everybody asked me, who is Paolo? Is he a Brazilian who works with networks or something? Who's this Paolo you were talking about?
I never had that. I consistently heard power law, but all right. I usually say long tail. That sounds better. No R's. Sure. So you had mentioned the small world effect as a way to describe the creative process in the mind: when you find shortcuts between different parts of the mind's semantic networks. When I say semantic network, I mean the association network between words and so on. Small world as a network phenomenon was...
actually anecdotally, let's say, observed by Milgram in his famous six degrees experiment, and by others. But the small world model by Watts and Strogatz, in a paper from 1998, was the first time the effect was pinned down on real-world networks by, let's say, exact science.
The paper that they published was one of the seminal papers that brought on the new field of network science and the study of network phenomena we see on real-world networks.
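As an illustrative sketch (not code from the episode), the small-world signature Asaf describes, high clustering alongside short paths, can be reproduced with the Watts-Strogatz model using networkx; the parameter values below are arbitrary choices for demonstration.

```python
import networkx as nx

# Watts-Strogatz model: start from a ring lattice where each node links
# to its k nearest neighbors, then rewire each edge with probability p.
# A few rewired "shortcut" edges collapse the average path length while
# clustering stays high: the small-world signature.
def small_world_stats(n=1000, k=10, p=0.05, seed=42):
    g = nx.connected_watts_strogatz_graph(n, k, p, seed=seed)
    return nx.average_clustering(g), nx.average_shortest_path_length(g)

lattice_c, lattice_l = small_world_stats(p=0.0)  # pure ring lattice
sw_c, sw_l = small_world_stats(p=0.05)           # small-world regime

print(f"lattice:     C={lattice_c:.2f}, L={lattice_l:.1f}")
print(f"small world: C={sw_c:.2f}, L={sw_l:.1f}")
```

Typically the rewired graph keeps most of the lattice's clustering while its average path length drops several-fold, which is exactly the regime Watts and Strogatz identified.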
The wonderful but problematic side of network science is that it's a multidisciplinary field. I mention this because, if I remember correctly, when Watts wanted to publish the small world paper, which is highly cited today, he had a hard time doing it.
I think the inspiration for the paper came from watching the synchronization of fireflies. The small world effect helped the fireflies synchronize across different parts of the group. So when he tried to publish the paper, the journals said, well, it's not for a mathematics or mechanics journal because it's biology, and vice versa, right? The biology journals said, well, it's mathematics.
Luckily, Watts and Strogatz made it in the end, and the paper was published. Good thing it was. I think it's a really useful insight. Another thing from a network perspective: Yoed also mentions, briefly, eigenvector centrality, which he uses as a centrality measure in the network. I'll just say that eigenvector centrality is a similar, simpler relative of PageRank.
It follows the same logic. So if you're connected to central nodes, you are central yourself.
In eigenvector centrality, instead of counting a node's own degree, you sum up the scores of the node's neighbors and repeat the process until it converges. If you have high-degree neighbors, you'll get a higher score; of course, if you have a high degree yourself, you'll get a higher score too. Degree centrality correlates highly with eigenvector centrality. Well, I wonder if it's faster to compute or something like that. Maybe there's an advantage computationally.
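The iterative scheme Asaf describes can be sketched directly. This is an illustrative power-iteration implementation, not code from the episode, checked here against networkx's built-in on Zachary's karate club graph:

```python
import networkx as nx

# Eigenvector centrality by power iteration: each node's score is
# repeatedly replaced by the sum of its neighbors' scores, then the
# whole vector is normalized, until it stops changing.
def eig_centrality(g, iters=500, tol=1e-10):
    x = {v: 1.0 for v in g}
    for _ in range(iters):
        new = {v: sum(x[u] for u in g[v]) for v in g}
        norm = sum(s * s for s in new.values()) ** 0.5
        new = {v: s / norm for v, s in new.items()}
        if sum(abs(new[v] - x[v]) for v in g) < tol:
            return new  # converged
        x = new
    return x

g = nx.karate_club_graph()
mine = eig_centrality(g)
ref = nx.eigenvector_centrality(g, max_iter=1000, tol=1e-10)
```

On this graph the top node by eigenvector centrality is also among the highest-degree nodes, illustrating the degree correlation Asaf mentions.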
Eigenvector centrality, like every centrality based on degrees, is relatively cheap to compute. Sure, yeah, because it's local. But it does take time to converge. Yeah, but still, you know, I think the difference is...
Differences are minimal. Cool. Well, a good useful thing then. Not like every particular graph application, for sure. I won't let you badmouth graphs here. Well, not badmouth, but many interesting graph problems are NP-complete problems, so very difficult to solve at practical scale. Okay. So if you're trashing graph theory, I'm with you. No problem. Okay.
Network science, network science, social network analysis. Let's keep it clean. Sounds good. Well, let's jump into our interview then. My name is Yoed Kenett. I am an assistant professor at the Faculty of Data and Decision Sciences at the Technion, Israel Institute of Technology, in Haifa, Israel.
Could you expand a little bit on what the Data and Decision Sciences group does? Well, our faculty is interesting, very much a rich, complex space. It ranges from people in more theoretical statistics, computer science, and data science, to another direction, which is empirical research in behavioral sciences, cognitive psychology, and organizational psychology. The idea is that
There are different ways that we create data from machines and humans, and we need to develop and learn and improve our ways that we use the data to make better decisions. That's like our philosophy of the faculty. Is there any particular type or area of data you find yourself most interested in? So I'm a cognitive scientist or a cognitive neuroscientist by training, and my
passion is thinking about how people think, and studying the mind and the neural mechanisms that allow us to make all this magic happen. So my interest, and the title of my lab, the Cognitive Complexity Lab, is what I call, very hand-wavingly, cognitive complexity, which is the thing that I think makes us the most interesting. And that is actually extremely complicated data, or at least in the past was very hard to study. And I think
My lab integrates computational research with experimental research to try to ask these difficult questions on creativity, intelligence, memory, memory search, how memory changes over time, etc. So this is my flavor of data is actually the sort of abstract kind, the mind, whatever that is.
The mind, in some sense, is a bit of a black box, or maybe historically it was. To what degree can you measure and study it? There have been various ways of trying to open up the black box and study this complexity. My usual flavor or perspective on this is using what some people call complexity science or network science, trying to quantify and formalize
the complexity of cognitive and neural systems using these tools, and using the language of networks to better cut across different levels of analysis. It used to be very difficult to study the brain and the mind, behavior and neural systems together. And now we are slowly trying to create bridges to cut across these levels of analysis.
Well, if we were to shift gears and talk about computer memory, we could have a very precise discussion, right? The manufacturer could tell us exactly how it works and that sort of thing. Human memory, not quite as much. To what degree do we understand how it works?
That's a complicated question, because human memory involves different systems or processes, and actually there's a lot of division there. We can talk about motor memory, what's called procedural memory. So, right, you learn how to ride a bike; that's it, you can always ride a bike. But in what's called explicit memory, memory that a person can actually explicitly recall or discuss,
usually there is a division between episodic memory and semantic memory. Semantic memory is the cognitive system that has traditionally been considered to store knowledge that is devoid of time and context. That is the standard definition. These definitions are changing; there are more and more theories arguing that semantic memory is also a dynamic system.
And a lot of these arguments are based on computational modeling that comes from dynamical systems approaches, network science, language models, etc., which show how the system evolves. So this really depends on what type of memory you're talking about. Most of the research that I do, and people that use network science in cognitive research, focus on semantic memory.
About semantic memory we actually know quite a lot, because it has been studied experimentally in cognitive psychology for a long time. And there are all these phenomena that we are now trying to quantify as processes over a network system, like associative thinking. I've written a lot about that.
Or semantic priming: when I think about a concept, the most salient properties of that concept come to mind. That's also considered a phenomenon of the structure of the nodes directly connected to a node, its neighbors, for example. So we're now trying to represent and study memory processes from a network approach, and also what happens when things start to fail, like in Alzheimer's, or,
maybe closer to home, the tip-of-the-tongue phenomenon a lot of people experience, when you get stuck and can't remember a name. There's some sort of a phonological trap; you get stuck on the name, and then a lot of semantic information is activated. And we have used multi-layered networks to try and represent how the different layers of this multidimensional model of our knowledge interact.
So I drifted associatively a little away from your question. We know quite a lot about memory; this is, again, cognitive psychology, and I can't do justice to all of the findings. But with network science, we're slowly trying to quantify ideas and concepts that were used very hand-wavingly, like associations, or the structure of knowledge, or
close versus far, or strongly versus weakly related concepts. We now have ways through network science to try to quantify that. And that's slowly accumulating knowledge, but it's a slow process. So we know a lot. Now for episodic memory: there is a ton of research in psychology and neuroscience on episodic memory, but very little, if any, network research using this approach.
And that's an interesting thing that we can talk about as well. Well, I'd love to zoom in on one of your particular experiments, the one described in Changes in Semantic Memory Structure Support Successful Problem Solving and Analogical Transfer, and especially my favorite part, the use of riddles in the process. Can you describe the experimental setup? So I want to zoom out and give a little bit of context to what motivated this study, which I must say
came out of a fantastic collaboration that I have with several research groups in Paris, at the Brain and Spine Institute. Actually, they did all the work; I just contributed some of my methods and expertise. The story is about the issue of insight.
When the light bulb flashes and I finally get it. I'm trying to solve a problem. I'm stuck. I don't know what to do. I put it aside. I do something else. Suddenly it clicks. I get it. Something happens. There is a lot of theory, but this is very hard to study empirically. Why? Because unlike
sort of standard psychology studies, you can't really predict when a person will have an insight or not, and you can't really elicit a lot of insight moments. But with insight, there are at least two things that happen. First, when that moment happens, you feel joy. You feel happy. You're excited. You get it. There are studies showing that people trust these ideas more, and that these ideas are often correct and on the right path. So intuition works.
But then a second thing happens. Memory changes. The structure of the problem changes. Finding the solution creates new edges or creates a new structure in memory. It leads to what some call memory restructuring. I got into network science and my empirical research because of a fantastic paper that was published in 2005 talking about the small world network structure of insight and how insight
is a result of new edges created in our memory system, edges that create shortcuts. We set out to study that empirically and directly with our network science tools. So that paper has a lot of moving parts, actually.
And a lot of complicated analysis, but the design is very simple. We ask people to solve a riddle, and let's do this together and see. Well, if you've read the paper, you know the answer. But before and after they try to solve the riddle, we use our method to represent their individual structure of words that are either related to the problem, to the solution of the problem, or not.
And we look at whether, for people that solve the riddle, that leads to some effect in their post-estimated memory structure. In 1994, way before the massive hype of network science and Watts and Strogatz's famous 1998 paper, a paper came out by a group of people whose first author's last name was Durso; it's often called the Durso problem. They did exactly what we're doing now: they asked people one riddle.
The riddle goes: a man goes into a bar. He looks at the bartender and asks for a glass of water. The bartender looks at him and takes out a shotgun. The man says thank you and walks away. That's the riddle. And then we ask what happened. Maybe this is a moment to pause the podcast and have people guess what happened. There are all these interesting guesses. Unlike mainstream in-lab research on creativity,
where we study a lot the ability to generate multiple responses. Here there's one solution, but that solution is difficult to arrive at. And before and after they try to solve this problem, they have 10 minutes for that, we ask them to look at word pairs and ask them simply how related these words are to each other.
shotgun and man, bartender and water, things like that. And they do this for all possible pairs of a list of 20 words. And we do this exactly the same before and after 10 minutes where they try to solve the riddle.
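As a toy illustration of this approach (all words, ratings, and the threshold below are invented for this sketch, not the study's data), one can turn all-pairs relatedness ratings into an individual semantic network by keeping only strongly rated pairs as edges:

```python
import networkx as nx

# Six illustrative riddle-related words; the real study used ~20 words
# and collected ratings for every pair, before and after the riddle.
words = ["man", "bar", "water", "shotgun", "hiccups", "thanks"]

# Invented relatedness ratings on a 0-100 scale, one per word pair.
ratings = {
    ("man", "bar"): 80, ("man", "water"): 55, ("man", "shotgun"): 30,
    ("man", "hiccups"): 25, ("man", "thanks"): 40,
    ("bar", "water"): 70, ("bar", "shotgun"): 20, ("bar", "hiccups"): 10,
    ("bar", "thanks"): 15, ("water", "shotgun"): 5,
    ("water", "hiccups"): 60, ("water", "thanks"): 20,
    ("shotgun", "hiccups"): 8, ("shotgun", "thanks"): 5,
    ("hiccups", "thanks"): 12,
}

def to_network(ratings, threshold=50):
    """Keep only pairs rated at or above the threshold as weighted edges."""
    g = nx.Graph()
    g.add_nodes_from(words)
    for (a, b), r in ratings.items():
        if r >= threshold:
            g.add_edge(a, b, weight=r)
    return g

g = to_network(ratings)
print(sorted(g.edges()))
```

Comparing the pre- and post-riddle networks built this way is, in spirit, how restructuring can be detected: an edge like water-hiccups appearing or strengthening after a solve is the kind of change the study quantifies.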
Throughout solving the riddle, they can say whatever guesses they can, and we, at the end, give them feedback if they solve it or not. Should I say the solution for the riddle? Do you think we've built up enough tension? Are we ready for it? I know, I've been stalling. Exactly. Well, so the solution for this specific riddle, and I'm happy to give it away because it's pretty much like an old... A lot of people are familiar with this solution, is that the man who came to the bar, he had hiccups,
And he asked for a glass of water, but when the bartender pulled out the shotgun, the fright helped him with the hiccups, which is why he said thank you and walked away. And a lot of the time, when you tell the solution, and I've told this
in many talks, people go, oh, or they laugh, or they say, I got it. It's such an interesting effect to see what happens when people get something. In our study, we have four different riddles. So I gave only one, but actually we have four of these that are
Similar in difficulty. And we wanted to know what will happen for people that solve the riddles compared to people that are not successful in solving the riddles with regard to their memory structure that we formally quantify through a network science approach. That's the heart of the study.
You're measuring their memory structure, I guess the semnet. Is that through the word pairs, or is there some other mechanism there? That's through this word-pair approach. This is based on previous work of mine, where we represented a very large-scale semantic network in Hebrew. From that network, we chose word pairs that varied in path length, the number of edges between the words.
And we used that to manipulate distance in, again, a very basic, common method used in psycholinguistics, where people see two words and all they are asked to do is judge whether these two words are...
Well, sometimes it's word versus non-word, but what we asked them is: are these two words related to each other or not? And we looked at the effect of distance in our network on people's ability to judge relatedness. That's called semantic priming, or mediated priming, and this is a paper that came out in 2017. But what we also found in that paper was that the distance effect predicted how people judge the subjective strength of the relation between words.
We showed that distance in the network predicts subjective relatedness judgments. And then, with an Austrian colleague of mine, Mathias Benedek, we used this to go in the other direction: we argue that subjective ratings of word pairs are a good enough proxy for an individual's organization of these concepts. That's the heart of the approach that we've been using since then to represent individual-based
semantic memory networks. So of the participants who came to the table with a slightly different semnet of their own, is there something categorically different about people who could solve the riddle versus those who couldn't? Or are they truly learning in the moment?
I understand what you ask as two different questions. At the starting point, before they come in, we assume that their memory structure for these 20 words is the same across individuals. It's a fair assumption. What we were interested in is what happens afterward, in the post-estimated network, comparing people that solved the riddle versus not.
There we got into a different problem, an equally hard problem, which is the fact that very few people manage to solve our riddles. We had this design where we have people try to solve a riddle, one of the four riddles that we had, and then they do another riddle. So people did two riddles, and they were counterbalanced. And we did this to look at whether there is a thing that's called an analogical transfer.
whether successfully solving one problem helps you better understand how to solve a similarly structured, analogical problem. Solution proportions in the riddles were very low, and they varied across the four riddles we used. The best was about 15%. So actually, we need very large samples.
And we get very few people that solve it. So this already creates unequal groups, and makes it harder to convince people that what we find is reliable and valid. We do find differences between the groups; we just had to work harder and sweat a little more. This is why we had another study. We were then challenged with another problem:
if a person is given the solution after trying to solve it, does that impact their network structure similarly? So in the end, we pretty much have three different groups: solvers, who are the minority; non-solvers, who are the majority; and a smaller group of people to whom we give the solution, to look at what's different. And we do find differences that are uniquely specific to solving the riddle,
but we're not able to make any causal arguments, because we're only doing this assessment after they try to solve the riddles.
They could have the insight at any point throughout. This is a 10-minute process, so maybe already at the beginning they get it, and that's it. Or they slowly build the understanding that the words matter, or something else happens and that leads to the insight. There are all these different things that could happen; we don't know. What we do know is that there are statistical differences in the changes
that happen in the post-estimated network between solving the riddle, not solving the riddle, and being given the solution.
So is that to say that people who had better enriched or more sophisticated word association networks did better, or that they developed and evolved their network and that helped them with the solution? The argument is more that solving the riddle leads to the downstream memory restructuring effect. But it could also be the reverse; at this point we just can't dissociate the two. And the argument is that more creative people are better at this sort of thing.
And this goes to a lot of the research that I've done over the years and am still doing: more creative individuals have richer, more flexible memory structures, and are also better able to search wider and deeper in their memory. And that should facilitate and enhance their ability to solve these types of riddles.
Even though it wasn't as strong as we expected, which is why it's only reported in the supplement, we do find that the creativity of the individuals impacted the effect. But yeah, what's driving what is something we still have much more to do on, and we are slowly moving forward in that direction. Could you give a rough definition of analogical transfer?
The idea is that once we understand one thing, we can map it to a different domain. The way it's most widely studied in the lab is that you give a person definitions of the sort A is to B as C is to X, and they have to come up with X. It's like, cop is to firefighter as doctor is to something. And so you have to find relations.
We as humans are really good at creating analogical relations, mappings between different domains. So we are good at pattern recognition, and then we are sometimes good at using that, transferring it from one domain to another. That is very important in creativity as well.
And there's a lot of research on that; I don't do that research at all. But a question that comes out of this is: can we transfer this skill, this ability to make connections or have insights about how things relate to each other, from one domain to another? That's what's called analogical transfer. And what we do is exactly that. We ask: say I managed to solve the Durso riddle on my own. I know that
I needed to create a connection, or I now have a connection, between shotgun and water and hiccups. That's what we sometimes call a cognitive schema: a structure in our mental representation onto which we can map a new problem and apply the same logic. And that facilitates our ability to solve problems. And again, there's a lot of research on that. And we find that.
Moreover, based on the memory restructuring of the networks, we can predict that it will happen. This is what we bring to the table in this specific paper. So after round one, you've got your solvers, your non-solvers, and some people you gave the solution to. And also, I guess, the semantic networks that they've shared with you. Which of those things are most predictive of their ability to solve the second riddle?
We have two different ways to analyze, to try to answer your question and other similar questions. One is that we had other people rate the same pairs of words, for each riddle's solution separately, and give a rating of the impact or importance of those specific two words to the solution.
So some words, again, shotgun and water, for example, are more impactful in solving the riddle than others. That's one type of signal, or dependent variable: we look at how our network metrics relate to, or predict, the impact scores. The more a network changes in a specific way, the more it accounts for, or predicts, this impact-relevance score.
The other thing that we did was look at the remoteness of the words from each other. We used language models to quantify the distance, the dissimilarity (the inverse of similarity), between the concepts, as a general measure of how remote they are in this aggregated mental model.
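As a sketch of that kind of remoteness measure (the vectors below are made up for illustration; the real analyses use embeddings from a trained language model), dissimilarity can be computed as one minus cosine similarity:

```python
import numpy as np

# Toy 4-dimensional "embeddings" standing in for language-model vectors.
vecs = {
    "water":   np.array([0.9, 0.1, 0.0, 0.2]),
    "hiccups": np.array([0.7, 0.3, 0.1, 0.4]),
    "shotgun": np.array([0.0, 0.9, 0.8, 0.1]),
}

def remoteness(a, b):
    """Cosine dissimilarity: 1 - cos(angle between the two word vectors)."""
    va, vb = vecs[a], vecs[b]
    return 1.0 - float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# In this toy space, "shotgun" sits far from "water", while "water"
# and "hiccups" are close, mirroring the pre-insight structure.
print(remoteness("water", "shotgun"), remoteness("water", "hiccups"))
```

The insight connects the remote pair: solution-relevant words that start out far apart in the aggregated space are exactly the ones whose relatedness jumps after a solve.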
So it's not just about what helps solve analogous problems, but what helps solve the problems themselves, because, again, most people can't solve them. Two things happen that we relate to in terms of efficiency, eigenvector centrality, and also clustering. These are the things we focus on. So what we find is that for solvers, and this has downstream effects for what you asked, two things seem to happen. One is that
word pairs that are more related to the solution become closer and stronger, and the efficiency of that network increases. And that seems to help with adjacent or analogous problems as well.
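To make "efficiency" concrete with a toy example (not the study's networks): global efficiency is the average inverse shortest-path length over all node pairs, and adding a single shortcut edge, like a new insight connection, raises it.

```python
import networkx as nx

# A 5-node chain, standing in for a memory network before the insight.
pre = nx.path_graph(5)

# The same network with one "insight" shortcut edge added.
post = pre.copy()
post.add_edge(0, 4)

# Global efficiency = mean of 1/d(u, v) over all node pairs; larger
# values mean information can, on average, travel along shorter paths.
e_pre = nx.global_efficiency(pre)
e_post = nx.global_efficiency(post)
print(e_pre, e_post)
```

One added edge turns the chain into a cycle and raises efficiency from about 0.64 to 0.75, which is the flavor of change reported for solvers' solution-related subnetworks.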
But then there's this sort of halo effect, where words that were originally farther apart become closer. And that's also a marker of higher creativity; this is what we find for more creative individuals, that words are closer in their memory structure. And we think that helps with search and connectivity: when everything's closer, things are easier, if you think about diffusion models or energy landscapes and all these different types of models.
That effect of things getting closer is a unique marker of solving. If you give people the solution, solution-related terms become more strongly related, or are rated as more strongly related, of course, because they now know the solution. But those people are not better at solving analogous problems, and they don't show this condensing effect in their memory structure. And I think that's a unique marker that we did not originally expect.
Well, the use of the word pairs is a really interesting measurement. You're generating some novel data there, right from your subjects, and they have no incentive to mislead you; they're just sharing their thoughts. So it's a nice, direct measurement. I know it's out of scope for the paper, but you could also have put them in an MRI machine or something like that. Do you think there would have been useful information along a path like that? We are actually doing that as well. In another paper that came out of that collaboration with that group,
we actually did exactly that. We had people do the same word-pair method that we just talked about, but while undergoing brain scanning. This is a paper of ours that also came out a couple of months ago, and we showed that the brain is also tuned to this distance effect.
So as the distance between words becomes larger, there is this interesting change in the brain systems that track the relations between the words. So we found neural evidence for this distance effect. And a lot of my work that uses neuroimaging tries exactly that: to bridge across semantic or cognitive networks and brain networks, using methods to look at
how information is stored in the brain and how it relates to the sort of more abstract maps and how they interact with each other to make the magic happen. Well, science for science's own sake is always something good to me. There's things to be learned about the human mind and memory and things like that. But do you have any downstream thoughts on where your research could lead in terms of practical applications?
I have a paper that came out a couple of years ago with the title A Semantic Cartography of the Creative Mind. So a lot of my work is about mapping memory structure. But in the last couple of years, especially since I started my position at the Technion, I've also been trying to translate that basic research into more applied work, because I do want to help people.
We are slowly building tools, one of them based on our ability to represent semantic memory networks and associative thinking processes. In work of ours that came out last year, we built a tool to push people out of mental fixation. What we do is identify where a person gets stuck in their memory, and then we offer them words that try to pull them out and move them to a different part of the space.
And we're fairly successful with that. We've also done some work over the last couple of years on aging: how does the aging lexicon, the aging memory system, change, and in what ways? There's this maturation effect. Learning is good, and it has real impact on our ability to have insights, be creative, and whatnot. But it can also come at a cost, in the sense that
at some point, with expertise or maturation or learning, knowledge becomes so structured that it can be very rigid. And we've used a mathematical theory that came from physics, called percolation theory, to quantify the breaking apart of the network, and we use that as a measure of flexibility. So we've shown that older adults' memory structures are more organized,
but less flexible. A future aspiration is to start finding ways to help them recreate or create new edges in their memory, to balance this expertise-structure versus flexibility tradeoff. And work that I've done with clinical populations, like with autism, also showed this effect.
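The percolation idea can be sketched like this (a toy graph with invented weights; the actual analyses are considerably more involved): remove edges from weakest to strongest and record the weight at which the giant connected component collapses. A network that stays intact longer under removal is, by this measure, more robust and less "flexible" to break apart.

```python
import networkx as nx

def breakup_threshold(g):
    """Remove edges from weakest to strongest; return the edge weight at
    which the giant component drops below half the nodes."""
    edges = sorted(g.edges(data="weight"), key=lambda e: e[2])
    h = g.copy()
    for u, v, w in edges:
        h.remove_edge(u, v)
        giant = max(len(c) for c in nx.connected_components(h))
        if giant < g.number_of_nodes() / 2:
            return w
    return None  # never broke apart

# Toy semantic network: a strongly connected core (a, b, c) with a
# weakly attached tail (d, e, f).
g = nx.Graph()
g.add_weighted_edges_from([
    ("a", "b", 0.9), ("b", "c", 0.8), ("c", "a", 0.7),
    ("c", "d", 0.3), ("d", "e", 0.2), ("e", "f", 0.1),
])
print(breakup_threshold(g))
```

Here the tail falls away under weak-edge removal, but the triangle core holds the giant component together until a strong edge (weight 0.8) is removed; comparing such thresholds across individuals is the spirit of the flexibility measure.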
A lot of my current fixation, or obsession, is with question asking: how we can study question asking, how asking complex questions facilitates better information acquisition, and how that can
impact memory structure. So this is again where networks fit in, in interaction with language models. There are so many directions, so many things we need to carefully think about and develop in ways that maximize the potential and don't push us in suboptimal directions.
Well, the large language models you'd mentioned have a certain analogy here. They can sometimes solve riddles. They can associate words and things like that. But perhaps they do it in a very exotic way, which is unhuman-like. I'm not sure. Do you have any thoughts on how good of an analog they are for the human mind? This is a dangerous question because I have a...
not-mainstream school of thought. But I will say this. One paper that came out just a couple of months ago took different language models, represented each model's semantic network, and compared it to humans. And what they show is that for all the models they used, LLaMA, different GPT and OpenAI models, the semantic memory network, estimated similarly to how we do this with humans, was
more structured, more rigid, less creative in a way. And they argue in that paper that this indicates the machines are less creative than humans. And I really like that, because I think it's something we should really think about. There are a lot of papers saying the machine is more creative than humans; I'm not sold on that. One paper that I really like
shows that different GPT models are more creative, in a way, than humans, but the distribution of their responses is narrower. So they are more homogenized in their responses than humans. And that's something that is slowly getting attention in the cognitive neuroscience community of creativity research:
the question of how much of that is a potential problem for future generations. If we're going to narrow, in a way, how people think, that's something that we need to be careful with. There's a lot more to carefully think about with language models.
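Both observations, a more rigid semantic network and a narrower response distribution, can be made concrete. Here is a minimal sketch, not the method from the papers discussed: all the association data and responses are invented for illustration, using `networkx` for the structure metric and Shannon entropy as a simple proxy for response diversity.

```python
import math
from collections import Counter
import networkx as nx

# Toy free-association data (invented): each cue maps to elicited responses.
human_assoc = {"dog": ["cat", "bone", "walk"], "cat": ["milk", "walk"], "bone": ["milk"]}
model_assoc = {"dog": ["cat", "bone"], "cat": ["bone"]}

def association_graph(data):
    """Build an undirected semantic network linking each cue to its responses."""
    g = nx.Graph()
    for cue, responses in data.items():
        for r in responses:
            g.add_edge(cue, r)
    return g

# Rigidity proxy: higher average clustering = more tightly structured network.
human_clust = nx.average_clustering(association_graph(human_assoc))
model_clust = nx.average_clustering(association_graph(model_assoc))

def response_entropy(responses):
    """Shannon entropy (bits) of a response distribution; lower entropy
    means more homogenized, narrower responses."""
    counts = Counter(responses)
    n = len(responses)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Invented responses to a single creativity prompt.
human_resp = ["anchor", "doorstop", "sculpture", "weapon", "hammer"]
model_resp = ["doorstop", "doorstop", "doorstop", "paperweight", "doorstop"]

print(human_clust, model_clust)      # toy model network is more clustered
print(response_entropy(human_resp))  # broader spread, higher entropy
print(response_entropy(model_resp))  # homogenized, lower entropy
```

On this toy data, the model's network forms a fully clustered triangle while the human one does not, and the model's repeated answers yield a much lower entropy, mirroring the "more structured, more homogenized" pattern described above.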
And there are all these examples of how they are so clever, but also so dumb at the same time. There are a lot of insights and papers that talk about all this. It's a model, it's a black box, and we should be careful about attributing human-like features to it at this point. Even though it's very tempting, it's very appealing. You talk to it and it's like, oh my God, it tells you everything, all of that.
One of my concerns, for example, is this issue of question asking. We came out with a theoretical paper where we argue that we need to teach and train people to ask more, to create more complex prompts, to get more complex information, to enrich the process of what we call co-creativity, of human and machine interacting together. We need to
not just say, hey, here's a tool, you can ask whatever. People don't know how to ask good questions. Let's show them which questions are better than others, to try and slowly build a language with this machine, because it's not going anywhere, for sure. And it does a lot of good, but it also has its limitations, and most people are not that aware of, or careful with,
the limitations, because it is very complex. Well, one topic we've touched on a little bit, that I know spans a large amount of your research, is creativity, which at first glance seems like something difficult to measure. Could you talk a little bit about the role that's played in your research?
So, yeah, creativity. Well, some people agree with me when I call it, when I think of it as, a relatively young field of science. For a very long time, people didn't want to study creativity at all, or it was thought that it's not something you can study, because
you can't really define it and you can't really measure it. And I said before that 2025 is a special year, because while it marks the 50th anniversary of the seminal paper on memory structure, it also marks 75 years since a presidential address. The field goes back to the early 20th century, but in 1950, the president of the American Psychological Association gave an inaugural
speech saying we need to talk about creativity. And that really helped, but it was still very hard to study, mostly until about the mid-to-late 1990s, when a cognitive perspective came about in studying creativity and how to think about it. And the argument there was: we don't know what creativity is. And in a way, we still don't. We don't have a comprehensive definition of what this thing is. Creativity means a lot of things that are very different from each other.
We don't see creativity in the brain. And there are all these myths about creativity, like that creativity resides in the right hemisphere of the brain. We know fairly well that's not true. Creativity actually involves all of the brain in very complex dynamics. Computational methods that came in, mostly from language models, really helped improve how we assess creativity. So we are now in a really good place, in a way,
in terms of how to measure creativity, but there is a cost as well, which is that we narrow down what we are talking about. From a lab perspective, from a scientific perspective, we are mostly focusing on verbal creativity, and even more on this notion of originality of ideas. It's relatively easy to study novelty, because you can measure it in terms of rareness or uncommonness or, you know,
with language models, the frequency of how responses relate. But that means that the space of what we can study has been shrinking a little bit. This is now expanding.
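The rareness-based scoring of novelty can be sketched in a few lines. This is an illustrative toy version, not any lab's actual scoring pipeline, and all the responses are made up: each unique answer to a prompt is scored by how uncommon it is across participants.

```python
from collections import Counter

def originality(responses):
    """Score each unique response by rarity: 1 - relative frequency.
    Rare (uncommon) answers score higher, following the idea that
    novelty can be proxied by uncommonness."""
    counts = Counter(responses)
    n = len(responses)
    return {r: 1 - c / n for r, c in counts.items()}

# Made-up alternate-uses-style answers for "a brick" from six people.
answers = ["doorstop", "doorstop", "doorstop", "paperweight", "bookend", "art piece"]
scores = originality(answers)
print(scores)  # "doorstop" is common, so it scores low; "art piece" is rare, so it scores high
```

Real pipelines replace raw frequency with corpus statistics or language-model estimates of semantic distance, but the underlying logic, that uncommonness stands in for novelty, is the same.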
We have been developing tools to assess figural, or visual, creativity with automatic tools. And slowly we're trying to catch up in finding ways to measure creativity. But it's still very hard, because...
It's a very elusive type of construct, and people talk about it in different ways. So my research, maybe I should have gotten there earlier, focuses on the role of knowledge in creativity. And I have studied, from my PhD until today, a theory that is very intuitive and was proposed in the 1960s: it's called the associative theory of creativity.
It argues that creative individuals have something different in their memory system that allows them to search and expand and connect. And this is where I wanted to use networks, to try and study these ideas formally, and I have been doing that ever since. And we have shown
that the structure, and the processes operating over it, really impact one's ability to be creative, across different properties and different aspects, from searching all the way to these restructuring effects and connectivity and things like that. The World Economic Forum, every once in a while, most recently just a couple of months ago,
publishes its Future of Jobs report, where they talk about what will happen to jobs in 5, 10, 20 years from now. Creativity is always one of the top five human capacities that are critical for the future of jobs.
That's amazing in my eyes. It's amazing not just because it indicates how important this thing is. It's also amazing because there's no understanding of how to enhance it. If it's that important, then we should actually push people to try and strengthen their creativity muscles. And creativity is not only what people usually think of, with art, which is really important, because there are a lot of positive aspects of creating.
So this is where I think creativity is not getting as much credit as it should. Hopefully that is changing, but it's still slow. So yeah, I could go on and on about creativity. And Yoad, is there anywhere listeners can follow you online and keep up with your work?
Yes, so I have a Twitter handle, a Bluesky handle, a LinkedIn account, a ResearchGate account, an academia.edu account, and a website for our lab, ccl.technion.ac.il, that's the Cognitive Complexity Lab. Search for me, and I hope people will reach out. This was, again, a fantastic opportunity for me to sort of bridge,
again, data science with empirical research, and to show how networks allow us to do this amazing work that was very hard to do not that long ago. And I think we need more cross-disciplinary discussions. I want to thank you for this opportunity to be here. And it's an invitation for anyone who's interested
to reach out and talk about the space that I'm in. Yeah, I think there's a lot of growth potential. It'll be an exciting space to watch, for sure. Well, thank you so much for taking the time to come on and share your work. Thank you so much for having me. I really enjoyed it.