
Episode 146: Gaurav Venkataraman discusses memory in DNA and RNA

2023/3/30

Elucidations

Transcript


Hello, and welcome to Elucidations, an unexpected philosophy podcast. I'm Matt Teichman, and with me today is Gaurav Venkataraman, a co-founder of TriskBio in London. And he's here to talk about memory in DNA and RNA. Gaurav Venkataraman, welcome. Thank you, Matt. It's great to be here.

Okay, so if you asked me where I thought memories were stored, and I'm not like, you know, an ancient Greek philosopher, but I'm like a contemporary Matt Teichman, I would think that it was stored in the brain. But apparently, you've done a little bit of research to suggest that the story is a little bit more complicated than that. So what, like, maybe we could just start by talking about what are some like indications that the story might be more complicated than that?

Yeah, sure. So the complexity of the story probably starts around the 1950s. That's before what's known as the synaptic memory hypothesis had really taken hold, which states that memories are stored in the synaptic weights between neurons. That is, the connection strengths, the kind of thing that neural networks are predicated on. Is storage even the right word for this? Or is it more like there's an action happening in the system? Or is that the right metaphor? That's a great question. And like,

I don't think anybody yet knows. So the term that people use for memory storage is the engram. Like, where's the engram, which would be like the neural substrate of memory? And so the question is, and this was very hotly debated in the 50s, like, well, is it a molecule? Is it like a strand of DNA, a strand of RNA, a certain protein, maybe like a prion, that somehow is long lived and therefore like the brain can use it as a memory?

Is it something like more dynamic? So in modern parlance, it's thought that like sort of it's a firing pattern that's sort of kept in working memory. And that's like something more dynamic. Is it something like intracellular, like calcium waves inside of neurons that's serving as a memory? I think...

it comes down to sort of a, almost a philosophy of mind question or a cognitive science question to say, well, is the brain something that's sort of just constantly dynamically responding to the environment? Or is it something where you're sort of accessing abstract representations of things and

And those representations are somehow molecularly or neurophysiologically encoded. You might have seen, like several years ago, there was this instance of what people called the Jennifer Aniston neuron, in which some neuroscientists... They had the haircut? Is that what it was called? Well, I read this many years ago, so I'm maybe butchering it, but there was this neuroscientist at UCLA called Itzhak Fried, and in collaboration with Christof Koch, who was there at the time, now at the Allen Institute, they were doing some electrophysiology experiments in patients that were undergoing neurosurgery.

And so they would do recordings of neurons and show people pictures of various things like apples, their grandmother, Jennifer Aniston. And what they reported, again, this is,

many years after I've read it, was that like in certain people, there was like a specific neuron that fired every time the patient was shown a picture of Jennifer Aniston. And so the perhaps conclusion there is that, you know, that neuron was like representing Jennifer Aniston in the brain, right? It's at least an indication maybe that a recall is happening or I don't know.

Sure, something like that. So you can interpret it up and down the stack, right? Yeah, absolutely. On one end, you could say, yes, that's a Jennifer Aniston neuron, and your brain is basically a computer that then uses the representation of Jennifer Aniston to compute. Or you could say, yeah, well, there's some phenomenological thing that's happening, and that neuron was the exhaust fume of the process that was set in motion by looking at the photo of Jennifer Aniston. Maybe has no...

nothing in fact to do with the mental experience. Exactly. Yeah. And so I'm sure that in the paper they did some sort of control or had some sort of argument about this point. But I think fundamentally it's very difficult to draw a distinction between those two interpretations.

given how little we know about the brain and how nascent sort of our recording techniques are, right? Like one way you might argue that is, well, is it an exhaust fume or is it a representation? You could say, well, to figure that out, show me the rest of the brain. And the answer is you can't see the rest of the brain. So you kind of have to make a leap of faith. And you can't see the rest of the brain. Like, why is that exactly? I mean, what if I do an MRI or isn't that the whole brain or?

So you can do an MRI, but you lose spatial resolution, right? So you can't see individual neurons. So, you know, the brain is like densely packed and all that stuff. And so, you know, you can do a functional MRI, but you have problems with that too. So there you're measuring deoxygenated blood flow. And so you're worried about like deconvolving the hemodynamic response function from the actual neural activity. So any way you try to get in there, you're dealing with trade-offs. Unlike looking at your face, it's difficult to get a convincing picture of the whole brain.

Right. I mean, to look at somebody's face, you don't have to get inside of anything would be one. Exactly. It's like, I think Ed Boyden had a great quote about this. And I forget it now, but it was like, you know, the brain is just difficult to study because it's like densely packed inside the skull, difficult to access, and like perturbing any part of it, like changes the other parts. Right. And so even in mice, you know, when, if you want to do,

Worst case, via death. Exactly. But even in mice, right, if you want to do like a deep recording in the brain or something, you have to cut out like part of the cortex to see into it, right? And so then you have your scientific conclusion and you wonder, well, I wonder if this would have changed had I left that part of the cortex in, right? It could have an effect on what you're observing. Yeah, exactly. And, you know, people are aware of this and they try to do controls, but like fundamentally it's, you know, it's something that you have to deal with just based on the...

recording techniques. And so Adam Marblestone and George Church and others are trying to work on non-destructive, non-invasive neural recording techniques. And so maybe we'll get there and then we'll figure out the truth about the Jennifer Aniston neuron. Hmm.
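To see why the deconvolution point above matters, here is a toy illustration in Python. The BOLD signal an fMRI scanner measures is, roughly, neural activity smeared by a slow hemodynamic response function (HRF); the gamma-shaped HRF and the event times below are invented stand-ins, not a fitted model.

```python
import numpy as np

# Toy fMRI forward model: BOLD ~ neural activity convolved with a slow HRF.
# The HRF shape and the burst times are illustrative, not fitted.
t = np.arange(0, 20, 0.5)                       # seconds, 0.5 s bins
hrf = (t / 5.0) ** 2 * np.exp(-t / 2.0)         # crude gamma-like bump
hrf /= hrf.sum()

neural = np.zeros(100)
neural[[10, 12, 14]] = 1.0                      # three brief bursts, 1 s apart

bold = np.convolve(neural, hrf)[: len(neural)]  # what the scanner "sees"
peaks = np.sum((bold[1:-1] > bold[:-2]) & (bold[1:-1] > bold[2:]))
print(f"neural bursts: 3, distinct BOLD peaks: {peaks}")
# The slow HRF merges three distinct bursts into one broad bump, so
# recovering the underlying activity means deconvolving the HRF back out.
```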

Yeah. It's especially interesting to me how the metaphors we reach for in this stuff seem like they're connected to machines. Yeah. So when I was asking, well, does it make sense to think of memories as being stored? Clearly, I have either flash or magnetic storage in mind as a metaphor. So a flash drive, if you store something on it and then unplug the machine,

the thing persists. Whereas if information is loaded in the computer's memory and you unplug the computer, then the information vanishes, you know? So maybe that's what I was getting at with the metaphor, but I feel like whenever I try to unpack one of these metaphors, the thing I always reach for is like a robot or a machine or a computer or something like that. It seems it's always tempting. Yeah. I think, um,

I definitely thought about memory in those terms for a long time, and then I got convinced not to think about them in those terms. Like, I was thinking about them literally at the point of like, okay, well, where's the ROM and where's the RAM in the brain, right? Thinking, oh, maybe DNA is the ROM, RNA is the RAM, something like that. And then I read this paper by Phil Agre, who was a collaborator of David Chapman back in the AI lab at MIT in the 80s. And he wrote this little vignette that was very impactful to me called Writing and Representation.

And he and Chapman were working on this idea. They were trying to create AI agents. And so as a preliminary work, they're trying to figure out like, well, what kind of representations does the brain store, if any? And those would be like computer programs that act like people who are reasoning about what to do. Exactly. Yeah.

And at the time, I think Agre or Chapman, one of them was working under Rodney Brooks, or vaguely associated with Rodney Brooks, who at that time was putting out these papers called Intelligence Without Representation, the kind of ideas which I think led to the Roomba, although some people contest that claim. Yeah.

showing, okay, well, let's have a robot. I think he actually made a robot that was like bumbling around the halls. And we're not going to implement any sort of planning system in the robot. We're just going to implement sort of these dynamic response functions. And what we see is that the robot behaves as if it's planning in sort of this like emergent way, but it fundamentally comes from just like this like stimulus response loop.
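As a cartoon of that architecture, here is a minimal stimulus-response loop in Python in the spirit of those Brooks-style robots. The sensor names, probabilities, and rules are all invented for illustration; the point is that there is no planner or world model anywhere, just reflexes.

```python
import random

# Toy stimulus-response robot: no map, no plan, just reflex rules.
def sense() -> dict:
    # Stand-in for real sensors; readings are randomly generated here.
    return {
        "obstacle_ahead": random.random() < 0.2,
        "wall_on_right": random.random() < 0.7,
    }

def act(stimulus: dict) -> str:
    if stimulus["obstacle_ahead"]:
        return "turn_left"    # reflex 1: avoid collisions
    if stimulus["wall_on_right"]:
        return "go_forward"   # reflex 2: hug the wall
    return "turn_right"       # reflex 3: go find a wall

for step in range(10):
    print(step, act(sense()))
# Over many steps the robot appears to "patrol the corridor":
# planning-like behavior emerging from a pure stimulus-response loop.
```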

And so they were like really trying to understand like whether you needed abstract representations to get intelligence and also, as a related but not identical issue, whether people actually used abstract representations. And so Agre wrote this... Abstract representation in this context, would you think of it, it's kind of like a mental picture or picture in your mind or how would you define that? So how Agre puts it in Writing and Representation is that

abstract representations of things are what we tend to use with tools. So when you write, like you use the word chair, and that's an abstract representation of like a chair. But a chair can mean like many things to many different people in different contexts, right? So the chair that I'm sitting on, the chair that you are sitting on, the chair that I'm throwing at you. And those are like, in a sense, different objects, right? And we condense those into the abstract representation of chair, right?

to use the technology of writing. So going back to your question, this is a very long-winded answer, of this computer metaphor in the brain, Agre made this point which I thought was extremely thought-provoking, which is that we often look to our tools and try and use those tools as metaphors to make AI progress or understand the brain, but we actually build tools specifically to do things that our brains are bad at,

And so trying to then like build tools to do things that our brains are bad at and then use those tools to map back onto our brain and understand them is actually putting things like almost exactly backwards. And that kind of dooms you to like never actually understand the brain because you're starting with concepts that your brain like doesn't use almost definitionally. Yeah, I couldn't agree more. I couldn't agree more. Yeah. And so...

And yet I still do it, but I, but I totally agree. Yeah, we all do it. Um, and there's another take on this, like, uh, so Iain McGilchrist's work is on my mind because two days ago he released his new book, uh, The Matter with Things, which I highly recommend in addition to The Master and His Emissary. And, you know, he has a point, um,

based on hemisphere differences that, like, we're too focused on sort of reductionist, mechanical explanations of the world. But in any case, I think, like, trying to think about the brain as a computer or as a Turing machine is how I thought about it for a long time, and I think it's probably the wrong way to think about it. But I don't say that flippantly, like, oh, if you think about it that way, you're wrong or it's not going to be generative. But if you choose to think about it that way, you should, like, try to be as cautious as possible about

the baggage that your metaphor is going to bring to bear. Be prepared to encounter some challenges. Exactly. Yeah. Yeah. So maybe to return to the question we started with, which is like where memories are stored, if they're stored anywhere, what's the motivation for thinking they might be stored in unexpected places? Right, right. So it goes back to, we were talking about the 1950s and this is sort of before the synaptic memory hypothesis was firm. Um,

And at that time, you know, it was very appealing to think that, you know, memories could be stored in molecules, right? Molecules are long lasting. And again, if you want to make a mechanical analogy, it's like, well, like,

Magnetic tape. Exactly. It's magnetic tape. And if you look at DNA, you're like, oh, well, this could be like a ticker tape. It does kind of look like tape. Exactly. And in sort of the modern era, there are synthetic biologists like Erik Winfree who have literally made Turing complete machines out of DNA. And Turing complete means it can do everything that any computer can do. Yeah, in theory. In theory. Yeah, but not... And there are no like...

That work has not put thermodynamic constraints on what the speed of the computation is, etc. They do it thus far in test tubes. I think there might be some in vivo work, but mostly it's in test tubes, to show that these circuits can compute. So in any case, people were thinking about this, and there was this guy who initially was a TV producer and then became a scientist afterwards called James McConnell. And he was working with this organism called the planarian flatworm.

The planarian flatworm is the simplest known organism to have a two-hemisphere brain. And it has a two-hemisphere brain in its little head, and it's got little, like, kind of ear-looking things that do, like, sensory input. It's adorable. It's adorable. It's an adorable little worm. It, like, moves around, slithers kind of cutely. It has this amazing...

regenerative capacity. So you can cut it up into reportedly like 237 or something little pieces. 237. That's the number. I think something like that. That is such a giant and random number. Yeah. So the, I wouldn't necessarily stand by that number. I don't know exactly where it's like in the mythology, but basically I've cut those things up quite a lot and you can cut it into pretty small pieces and you can, it'll regrow an organism.

And so McConnell had this idea, working with these flatworms: what if I taught them something? He was a psychologist and he was trying to teach them things. And he said, okay, well, they seem to be learning. What happens if I cut it in half? Like, which half will, like, remember?

And so he started making this claim that both the head and the tail would sort of remember the memory. And so that would suggest that the memory is at least not stored in sort of like the planarian brain. It could still be stored in sort of synaptic connections around the periphery of the worm, but at the very least, you would have to admit that it's not stored in the brain connections.

Right. That does seem intuitive. Man, it's almost like a Star Trek transporter blooper experiment, except it's with different parts of the body, I guess, is the difference. Well, it got very Star Trek when McConnell did his next experiments, which involved cannibalism. So these worms are cannibalistic when hungry. And so what he would do is sort of train a group of worms, grind them up, and feed them to naive worms. And he claimed that the memory transferred over from the worms he had trained to the naive worms. Wow. Yeah. Yeah.

How do you test what a little worm can remember? Yeah, and so this was the core problem with the work, the reason why it was ultimately discredited. And I would argue that it's still a problem in neuroscience is that

you have to have a really good behavior and you have to believe that your behavior is what you think it is, right? It is truly a fear memory or truly is a spatial memory or what have you. And so McConnell at the time, it wasn't that far from sort of like Pavlov's experiments, I think. And so people were still thinking a lot about these sort of like shock and light kind of pairings. And so what he was doing is literally electrifying, giving an electric shock to the worms and

and he would give them like a light cue before the shock, I think. And the idea was that, you know, they would learn to pair the light with the shock. And then recoil when they see the light expecting a shock. Exactly, exactly recoil. And he would look at the worm and like decide if it had recoiled. So there was like, you know, this observer kind of situation, which people do in sort of behavioral experiments, and it's fine. But, you know, you have to be honest about it and careful about it, preferably blinded, that kind of thing.

So that was the behavior. And it's important to note that at the time, this was in like, I want to say like the late 50s-ish, maybe even early 50s. Associative learning was still, like, a pretty young field, really. So it wasn't like well known, like how to train organisms in general. And so McConnell was trying to advance this kind of memory outside the brain. Is that just a term for like conditioning? Exactly. Things that we think about, say, like, oh, classical conditioning, they're like part of the parlance, like...

the appropriate controls to do for those kind of experiments were worked out like sort of in the 60s and 70s. That was kind of like the heyday of like behavioral learning experiments in mice and things like this. And so now if you go to a neuroscience lab, like we kind of take it for granted. Okay, we're going to do a fear conditioning task in a mouse and everyone sort of knows how to do it. Everyone accepts like

This is the readout that like really is fear. And let's just jump to the, what they consider the interesting thing, which is like the neural correlates, et cetera, et cetera. But this wasn't at all true in the 50s. So they were trying to put together like how to train organisms in general. At the same time, they were putting together this like kind of radical molecular hypothesis that memory could be stored outside of the planarian brain. And so it was an exciting time, but a very dangerous time as well.

So, okay, if it's not stored in the brain and you cut the worm in half, and I'm just imagining like its head's up here and its feet are down here. It doesn't literally have feet, but whatever. The bottom half is down here.

and you can cut them into 237 pieces, is the idea that like the full memories are stored in like every single cell? Or like what is the hypothesis about where the memories come from if not from the brain? Yeah, that's a great point. So what if you cut it in three pieces? What if you cut it in four pieces? Like presumably if you cut it into like the minimal possible speck of a piece and like the memory's stored there, like that would tell you something really important.

So as far as I know, no experiments along those lines were done or have been done. Somebody actually suggested that to me because there's a professor at the Whitehead Institute at MIT whose name I'm now blanking on, who's shown that you can generate a planarian from like a few cells. And I think what he did was...

he had basically, like, a dead worm, and he somehow seeded it with some tissues from a live worm, and showed that the live worm could, like, take over the dead tissues or something, like, regenerate itself. And so somebody was suggesting exactly that. So he said, okay, well, why don't you train an organism, like, take the single cell, and then show the memory can transfer over into your regenerated organism in this way? And then you would really have a strong argument that, like, it was this kind of single-cell memory.

So I think something like that could be done and probably should be done if this claim is to be made in the strongest possible terms. But as far as I know, nobody's tried that yet. McConnell's explanation was that the memory was stored in RNA. And how he tried to demonstrate that was by extracting RNA from the planarian flatworms and injecting that into naive flatworms, then claiming that the memory transferred over. But there's another really important thing about the behavioral experiments that we should emphasize here, which is that

It wasn't straight up that the memory would transfer over. His claim was that after I transferred the RNA, the worms would get...

you could retrain the worms on the same task faster than you normally could, relative to like some similar task. They dimly recalled what the other dead worm remembered. Exactly. And so like they needed this like reminder training, but not as much training. And so already you're thinking like, okay, well, so you're scoring this stuff while the field doesn't really know how to do behavioral experiments yet. And it's not even a straight up memory transfer. It like involves this retraining. Seems like there's kind of a lot of ifs going on here.
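One way to make the "retrains faster" idea concrete is a savings score: compare how much training an animal needs the second time around to the first. The episode doesn't specify McConnell's actual metric, so the scheme and the trial counts below are purely hypothetical.

```python
def savings_score(trials_initial: int, trials_retrain: int) -> float:
    """Fraction of training 'saved' on relearning: 1.0 would mean instant
    recall, 0.0 no benefit over learning from scratch."""
    return 1.0 - trials_retrain / trials_initial

# Hypothetical numbers, purely for illustration.
control = savings_score(trials_initial=100, trials_retrain=97)       # ~no savings
rna_injected = savings_score(trials_initial=100, trials_retrain=60)  # relearns faster

print(f"control savings: {control:.2f}, RNA-injected savings: {rna_injected:.2f}")
```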

But the field was kind of accepting it. And for a while, McConnell was riding high. So he became, I believe, a tenured professor at the University of Michigan. He was getting a lot of NIH funding. And he was also a media darling. So he had a background as a television producer. So he would go on TV and say like, oh yeah, in like five years, we're going to have professor burgers. And instead of going to college, you'll just like eat this burger and then you'll like have knowledge, just like the planarian flatworm can like eat other worms and have knowledge.

He would have loved the Matrix if he'd ever seen it. Yeah, exactly. So this was going on. And there were a lot of other labs that were doing this kind of RNA transfer in like goldfish, salmon, monkeys. There were some important controls that got done that kind of like shed doubt on the experiments. Like some people said...

oh, well, it's really just a uric acid effect. In fact, when you do RNA transfer, you're like caffeinating the recipient. And so that's why it can learn faster. It's not like a true memory. And it all came to a head when this Nobel laureate in chemistry got interested in the sort of molecular basis of memory. And his lab tried to reproduce the behavior and claimed that they could not reproduce it.

And the field kind of scattered... I can't say I'm shocked by that. And the field kind of scattered at that point. And there were, like... some of McConnell's experiments were not great, but some of them were pretty good. And some of the other experiments were pretty good.

And so various neuroscientists and sort of biophysicists have been interested in this idea over time. As we learn more about the brain and the weird role that RNA seems to play in the brain, just like a lot of non-coding RNA that doesn't seem to encode for proteins, it gets expressed, which is metabolically expensive. It seems to get very specifically trafficked, so it has very specific sort of cellular locations in the brain. And so it feels like that non-coding RNA should be doing something,

If you sort of squint at it, and again, you're willing to excuse the machine metaphor, it kind of looks like a software layer. And so there's been this idea that perhaps there was something to these McConnell experiments. There was something, this kind of like RNA memory, RNA computational hypothesis thing.

Yeah, that's what I was going to say, because, you know, you're making it sound like pretty crazy and implausible the way you present the research, and yet you've worked on it. So I'm interested in how we get from there to like, well, maybe we should continue exploring this hypothesis. Yeah, so you always have to evaluate the craziness of a hypothesis with respect to its plausibility along like all the domains that you can think of, right? Yeah, like how does it ramify in every possible thing you can observe? Exactly. And so part of the reason why I got interested in it was really because of

sociologist of science, Harry Collins. And so he had a student who wrote this real tome of a dissertation. I went and saw the physical copy in Wales, and it's, like, two volumes, single-spaced, like 800 typewritten pages each, that did an extremely deep study of, like, all of sort of the memory transfer, RNA memory experiments and

like why they were disbelieved and like what happened, because there were a lot of related experiments about like various molecules storing memories, etc., etc. And that body of sociological work made me feel like

oh, okay, there are actually some, like, real gold nuggets in here of work that looks really good, has not been disproved, and that actually makes sense in light of all the molecular details that we have learned since the 50s, and makes sense in light of like what we've learned about like classical conditioning and how to appropriately design the experiments. So given the benefit of sort of hindsight, these actually look like really convincing experiments. What's an example of something we've learned about molecular biology since...

the original time of the experiment that this seems to fit with. Yeah, so I would say the non-coding RNA expression in the brain definitely is, I'd say, the major thing. And John Mattick has been, like, the major proponent of that. He was, like, standing up for sort of non-coding RNA. It was considered junk. The way that ideas work in biology is people observe something...

everyone assumes that it's, like, junk, and, like, some framework is erected under which it's just junk, and then later it's discovered to be functional. Non-coding means it doesn't in any way ramify in, like, how our, you know, bodies are shaped or how we metabolize stuff? No, so...

It doesn't mean that. It just means that it doesn't make protein. Oh, okay. No protein. So it's non-coding in the sense that like, oh, well, if you believe that the role of DNA is to make protein, so-called central dogma of molecular biology, then like this is non-coding because it doesn't encode for proteins. Okay. But what Mattick has argued is like this stuff is very functional and exactly what you said actually, that it's extremely important for, for example, body patterning. And there's like tons of evidence for that now. And now there's like tons of high-precision labs studying non-coding RNA and all its wonderful functions. Hmm.

And so it's now known that this non-coding RNA is very valuable, but initially it was just considered to be junk. So it fits in well with the hypothesis that RNA might have another job besides making proteins. Exactly. And non-coding RNA in the brain, in my view, is like particularly suspicious for all the reasons that I...

articulated: there's, like, a lot of it, it seems to be, like, differentially expressed, it seems to be expressed in very specific locations. If you look up at the synapse, there are these things called RNA granules sitting there. And so all of that makes you wonder about, like, what non-coding RNA is doing in the brain, and if Mattick was onto something.

The other behavioral experiments that had been done at the time were, like... this guy, Mike Levin at Tufts, had revisited McConnell's experiments with a different, better sort of assay and in fully automated systems. So, you know, it's much more objective than what McConnell was doing in the 50s. And he claimed that planarian memories were stored outside of the brain. And so for those reasons, it seemed like the work was worth revisiting.

But how would you explore this in an animal that you can't regenerate? Because it seems like the planarian worms have this very special feature, which is they grow back when you cut them. Yeah, I guess that's the third reason why the work seemed like worth repeating. So people were learning about epigenetics and transgenerational effects of memory in like mouse models. So things like this guy, Kerry Ressler, who was at...

I think Emory at the time, now is at Harvard, did fear conditioning in a mouse and claimed in a Nature Neuroscience paper that the fear was passed on to progeny.

Eric Miska's lab did a, like, a nutritional assay in a mouse, I think, or some sort of nutritional modification, and showed that there were, like, methylation marks, I believe, that were, like, then passed on to progeny. And so this idea that, like, there was kind of some experiential stuff getting passed on to progeny, even in mammals, was emerging from the literature. Extremely controversial, but emerging nonetheless. Yeah.

So, yeah, memories, cognitive abilities, these kinds of things would at best seem to be the exception rather than the rule, that getting passed from generation to generation. Sure. Certainly, like, knowledge of a language does not at all get passed generation to generation. Yeah. And so too for other things. And you don't wake up when you're seven years old and then all of a sudden have your father's memories. Right. So clearly there's, like, some filtering that goes on. Right. Yeah.

And so the question is like... But even if anything gets transmitted, that's kind of quite eyebrow-raising. Yeah, yeah. So in C. elegans now, I think even bird models and worm models, the idea of transgenerational inheritance of sensitivities and some sort of nutritional states I think is relatively uncontroversial. There are big fields that study it, and they kind of have very reproducible work, and they're working on the mechanisms. So maternal transgenerational inheritance, things like this,

And in a worm model, in a C. elegans model, this guy in North Carolina whose name I'm now blanking on... C. elegans is another worm? Is another worm, different worm, 302 neurons, simple nervous system. He actually showed that in some context, RNA from the neurons actually passes through to like the germ line. So this was like a clear example of like some molecules at least kind of moving from like the nervous system to this like more transgenerational setup.

So there's like evidence on the margins that something like this might be going on. But then you got to understand, like, there's a lot of question marks here, right? So it's like... Yeah, definitely. The other thing about like, for example, Kerry Ressler's work is that it wasn't totally clear that, again, the behavior was like a meaningful fear memory, right? So this is still a question in the field. It's like,

How do you know that when your mouse freezes, this is really what you would call a high-order fear memory versus something that you might think is just a lower-order bit that could conceivably be transmitted via some wacky mechanism? Have the initial observations about mice been replicated at all? What exactly have we found in terms of what mice seem to be able to inherit this way?

Yeah, so I think the part that's been replicated thus far is this idea that, like, tRNA fragments, which are like these little bits of RNA, are playing some role in intergenerational inheritance. And there's these papers from Katharina Gapp and Eric Miska, whose lab I was in for a spell, about, like, experiences in what's called an F0 generation, so like an older generation, being transmitted down.

And it's different kinds of environmental exposures sort of causing these post-translational modifications. So the generations downstream could not have been familiar with whatever they're exposed to, but they react in a way that suggests they're a little bit familiar with it or stuff like that. Yeah, or even something simple like...

a high stress response, like causing propensity to stress or things like that. Very sort of broad strokes. Yeah, where they wouldn't have any reason at all to actually stress. Exactly, or the propensity to stress. So it's not like, oh, you're...

there was some pairing of, like, a banana smell in the older generation and the younger generation is sensitive to banana smell. That is what Kerry Ressler claimed, and that claim is very controversial. It's kind of true associative learning. But this idea that, like, you generally stress out a mouse and then the subsequent generations are, like, generally fearful, and the tRNA fragments are playing a sort of role, I believe that is, like, pretty uncontroversial, or at least, like, quite well-replicated these days. Yeah.

Okay, I'm convinced this line of inquiry is well motivated. What was some of the stuff you looked into? And what do you find? Yeah, so I guess there are two things that I was able to do. The first was I just revisited Mike Levin's work in collaboration with Mike Levin.

trying to use an even better sort of learning assay. So as I've sort of articulated, you know, your conclusions are really only as strong as your learning assay. So you need something that's robust, that's interpretable, sort of a real fear memory, and that is or is not, like, stored outside the brain. And then you have your conclusion in the planarian flatworm. So I was using this kind of fear assay that was more associative than what Mike had been using.

And we found some evidence for memory being stored outside the brain without any retraining necessary. So you could just cut the heads off, you could let them grow back, and you would have fearful worms without having to sort of retrain them and argue that like, oh, they just like learn the task faster. So it was a cleaner demonstration of memory outside the brain in the planarian flatworm with like a cleaner, more pure associative learning assay. And so that was like the basic setup.

And that's at the point at which I got the EV grant to move to Eric Miska's lab at Cambridge and try to pursue the hypothesis molecularly with some collaborators of mine. Okay, so that experiment was at the cutting the head off of a worm level. What did you look at at the molecular level? Yeah, and I think going to the molecular level is important because biologists want a molecular story, and they want a molecular story for arguably quite a reasonable reason, which is that, like,

biology is very messy. Your conclusions are statistical, particularly behavioral conclusions. And so it's easier to believe that a behavioral conclusion is reasonable if you have a really strong mechanistic story behind it. It's like you've uncovered the causal backstory behind what you observed. Exactly. And also to find a robust mechanism, you really need a good behavior. You know what I mean? So it's a little bit of a stress test of the behavior as well. In addition to just feeling like a complete story. So you went to Cambridge to try and figure it out. The...

big success that we had there was actually a theoretical one. Because what we wanted to do was just do RNA sequencing. The idea was like, let's RNA sequence before and after learning, let's find sort of the differences in like what the RNA looks like, and then based on that try to come to some sort of conclusion. In the back of my mind, I had these Erik Winfree circuits that I was describing earlier, where

they were essentially like kind of like these seesaw-like things where you would have an RNA strand come in and then like you would set off this string of like RNA strands hitting each other and have an RNA strand come out and you could do like general purpose computation. And so what I set out to do with a collaborator there in Eric Miska's lab, David Jordan, was figure out what are the thermodynamic constraints on computing with RNA both fast and accurately.
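As a rough picture of that kind of cascade, here is a toy mass-action simulation of a linear displacement circuit in Python, where each released strand triggers the next stage. The rate constant and concentrations are arbitrary stand-ins, not Winfree's actual parameters; the point is just how slowly a signal crawls down a chain of bimolecular steps.

```python
import numpy as np

# Toy linear cascade: strand X[i] reacts with Gate[i] to release X[i+1].
# Rate constant and concentrations are illustrative orders of magnitude.
k = 1e4                                  # /M/s, plausible for strand displacement
dt = 0.01                                # s, Euler time step
n_stages = 5

x = np.zeros(n_stages + 1)
x[0] = 1e-7                              # M, input strand
gates = np.full(n_stages, 1e-7)          # M, one gate complex per stage

for _ in range(200_000):                 # simulate 2000 s
    for i in range(n_stages):
        flux = k * x[i] * gates[i] * dt  # mass-action, forward Euler
        x[i] -= flux
        gates[i] -= flux
        x[i + 1] += flux

print(f"output strand after 2000 s: {x[-1]:.2e} M")
# Even after ~half an hour, only a small fraction of the signal has made
# it through five stages: linear cascades are slow.
```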

And what we discovered is these sort of like linear circuits have a very hard trade-off between going fast and being accurate.

And so it didn't seem like you could compute fast, so to speak, meaning at cognitive timescales, with these kinds of linear circuits that I was kind of trying to look for. And fast here means the experience has a chance to get imprinted on the RNA. That was my guess, right? And then you could make arguments like, oh, well, how fast do you actually need it? What's the timescale? And my answer is, I don't know. All that I know for sure is that there seems to be this really hard tradeoff between going fast and accurately with these kinds of circuits. Yeah.
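The flavor of that trade-off shows up in the classic kinetic proofreading arithmetic (Hopfield's 1974 argument, named here because the episode's own model isn't spelled out): an equilibrium discrimination step can do no better than a Boltzmann factor in the free-energy gap, and each added proofreading step multiplies in another such factor at the cost of extra time and energy. A back-of-envelope sketch with an illustrative energy gap:

```python
import math

kT = 0.593          # kcal/mol at roughly 298 K
delta_G = 2.0       # kcal/mol, illustrative free-energy gap between
                    # "right" and "wrong" substrates

base_error = math.exp(-delta_G / kT)   # best single-step error fraction
for n_steps in (1, 2, 3):
    error = base_error ** n_steps      # each proofreading step compounds it,
    print(f"{n_steps} step(s): error ~ {error:.1e}")
    # but each step also burns time and free energy, which is the
    # speed/accuracy trade-off in a nutshell.
```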

And Winfree knows this. Although like memories being inaccurate is also, I mean, that's like a well-established thing as well, isn't it? Sure. But what we also discovered was something kind of interesting, which is that if you don't have these linear circuits where A causes B causes C causes D, but instead you have reaction topologies that are sort of all-to-all connected, you can compute both fast and accurately. And in fact, the faster you go, the more accurate you get.

So in other words, if the relevant strands of RNA, like, have a lot of cycles in them? Or is that the idea? Or it's that they can, like, talk to all the other strands? Basically, there's a lot of interconnectivity going on. Okay. Yeah. Um,

Literally what it means is like if you write down the reaction network, it looks like an all-to-all graph instead of like a linear graph. Yeah, yeah, yeah. And the reasons for that have to do with like thermodynamics and kinetic proofreading and whether you discriminate via binding energies and activation energies, which I won't lecture you on here. Next episode. But it was a really interesting conclusion to come to because...

we realized that these reaction networks actually existed in the brain. The reaction networks that we seem to arrive at thermodynamically looked a lot like RNA granules, which are sitting at the synapse. And so we started thinking about RNA granules, thinking about how RNA granules respond to sort of electrical input. And we came to the hypothesis

that these granules were sort of the site of what you might think of as computation in the brain. Now again, whether you want to say they're the real seat of memory versus the synapse, or whether they're in dynamic play with the synapse, just translating proteins that then get stuck up to the synapse, like all of that is still very much up for debate and there are people like Erin Schuman who are doing great work along those lines, but we definitely came to the conclusion that the RNA granules are doing something very interesting and very kind of suspicious.

So in other words, you've uncovered one potential job for this RNA that doesn't make proteins in the brain, which previously that was a question mark. Ish, yeah. I think a lot of people have done a lot of work showing that non-coding RNA does a lot of important stuff. And so I wouldn't say we were the guys who realized that non-coding RNA... If there was one guy, so to speak, it was John Mattick, who really taught the community that non-coding RNA was really important.

To the extent that our paper is important, I think hopefully it will provide a theoretical foundation for the community to understand, which they're already understanding experimentally, that RNA granules are extremely important, like, functionally. And we're definitely not the only people to say this. There are, like, many people who are, like, studying RNA granules. I think we're the only people yet to have realized that the structure of this reaction network is, like, very special from a thermodynamic perspective.

Man, I feel like RNA just keeps coming up over and over again in the news, between the COVID pandemic and, you know. Yeah, I mean, RNA is pretty crazy. Like, thinking of RNA as a software layer of the cell, for all the badness of those style analogies, I think it's pretty good. I think it's, like, it gets you pretty far and comes with a lot of very dangerous baggage, but nonetheless, it's, like, a pretty good way to think about things. Hmm.

If it were software, would it be an operating system or a file system? I wonder. Yeah, it's a great point, right? Like it's something like the file system is the operating system, right? Like the strands are the information and they're also like just actively acting on each other.

It's very much like this intelligence without representation, but sort of at the intracellular level where things are just happening dynamically. It's not like there's some planning system that you're going to find. That would be much more like these linear sort of reaction networks. It's going to look like these all-to-all sort of topologies. It's going to look very chaotic. But the reason it looks chaotic is so that you can compute by activation energy differences, which is how you compute both fast and accurately. And how did you test the ability of these RNA networks to perform computations? Like, I mean...

Like when you mentioned performing computations, I'm just thinking, okay, somehow you fed them an input and somehow you read an output off of them. And then you had some expectation about what the output should be. That's how you determine whether it's accurate or not. And you established this trade-off between speed and accuracy. How did that work? So we didn't do any experiments. It was purely a theoretical paper based on the principles of kinetic proofreading.

So you showed that it was a mathematical fact. Exactly. Okay. There were like a lot of complicated mathematical proofs. I think it was like a 20-page appendix with all these complicated mathematical proofs to show it in general. Erik Winfree has done the work to show that things compute, sort of, in vitro, and what he does is use these fluorescent tags. So what you see is this like sort of fluorescent output as a function of sort of inputs. I wonder if this is something that could be of interest to...

designing either software or hardware. Like, you know, could there be... You know, there's this phrase that sometimes comes up, biomimetics, of like various technologies imitating stuff from nature. And like that's kind of another place my brain instantly goes, is like, can we use this? Yeah, definitely. I mean, like...

So Microsoft has been interested in kind of like DNA and RNA computation for a long time. There are some people in Cambridge, at the Microsoft Research Institute, that are doing, like, sort of classic Erik Winfree-style synthetic circuits, but also even more practically people like Twist Bioscience, right? Which...

is a producer of DNA. So they have this new synthesis technology for DNA, and they have, like, big collaborations with Microsoft Research. The idea being, like, DNA is an amazing storage substrate. It's very small, you can store a lot of information on it, and it lasts a super long time. With this idea that, like, there's a data deluge in the world and we need to move from, like, sort of standard hard drives to DNA storage as, like, the basis of computing.
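The storage idea itself is easy to see concretely: four bases give you two bits per base. Here is a toy round-trip from bytes to an A/C/G/T string and back (real DNA-storage codecs add error correction and avoid long homopolymer runs; this sketch does neither).

```python
BASES = "ACGT"  # two bits per base: A=00, C=01, G=10, T=11

def bytes_to_dna(data: bytes) -> str:
    return "".join(
        BASES[(byte >> shift) & 0b11]
        for byte in data
        for shift in (6, 4, 2, 0)   # big-endian pairs of bits
    )

def dna_to_bytes(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i : i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

strand = bytes_to_dna(b"memory")
print(strand)                           # 4 bases per byte
assert dna_to_bytes(strand) == b"memory"
```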

That's totally independent of how things work in the brain. But just to say that, like, you know, DNA is a great macromolecule. So in terms of the big takeaway for this research, like one thing I've heard about octopuses is that the activity of their central nervous system is more distributed through their bodies than in people. So it's like, is the takeaway that like we're a little bit more like octopuses than we thought?

We're definitely more like octopuses than we thought. And I'll tell you an octopus story as, like, a takeaway here. So when I graduated college, I went to the National Institutes of Health in Bethesda. And I did a two-year fellowship there with Miguel Holmgren, a great scientist. And he was working on, at the time, RNA editing, which is sort of enzymes that would edit bases of RNA and therefore, like, change protein function. And one of his collaborators, Josh Rosenthal, a couple of years after I left, published this big, I believe, Cell paper about

showing that octopuses would actually edit their RNA in response to temperature in order to survive in the cold. I think that's right. It's been a while since I read the research, but it's something along those lines. And so in the same way that octopuses are editing their RNA to respond to the environment, I think humans are probably also editing their RNA to respond to the environment. And memory storage is a part of that.

And RNA editing enzymes are known to operate in humans. But I think this story of like sort of octopuses using RNA as the software layer, if you will, and humans also using RNA as a software layer, distributed or not distributed, I think probably holds true broadly. Gaurav Venkataraman, thanks so much for joining us. Thanks for having me, Matt. The Elucidations blog has moved. We are now located at elucidations.now.sh. On the blog, you can find our full back catalog of previous episodes.

And if you have any questions, please feel free to reach out on Twitter at elucidationspod. Thanks again for listening.