
From Shiny to Strategic: The Maturation of AI Across Industries // David Cox // #303

2025/4/7

MLOps.community

People
David Cox
Topics
David Cox: I apply AI to behavioral science, analyzing environmental and behavioral data to understand human behavior and help people make better decisions. This includes using reinforcement learning to predict individual behavior and steering behavior change by adjusting rewards, with the ultimate goal of cultivating intrinsic reward mechanisms. In clinical settings, we can use unsupervised machine learning to identify patient cohorts and supervised learning to optimize clinical interventions, helping healthcare professionals make better decisions and reduce medical errors. We can also use AI to analyze an individual's language and expression to understand their perspective and cognitive framing, and to offer alternative perspectives and intervention strategies. In education, unsupervised machine learning can identify student learning types and deliver personalized learning resources to improve learning efficiency and success. In short, AI's applications extend far beyond the LLM boom; it has broad potential across fields, especially those that require understanding and intervening in human behavior.

Demetrios: (This participant's core argument is implicit in the conversation rather than stated as a single, coherent thesis.) The conversation largely focuses on the challenges and opportunities of applying AI to understand and influence human behavior. Demetrios acts as a facilitator, prompting David Cox to elaborate on his work and its implications. The discussion highlights the need for more data, and better ways to collect and analyze it, to build effective AI-driven solutions for improving health and education. The ethical implications of using AI to predict and influence behavior are also touched upon, underscoring the importance of responsible development and deployment.

Deep Dive

Chapters
David Cox, with expertise spanning bioethics, behavioral analysis, and data science, discusses his work on using AI to understand and improve human behavior. He focuses on applying reinforcement learning principles to analyze behavioral patterns and design interventions that promote healthier choices. Data acquisition from wearables and other sources presents a challenge.
  • AI is applied to behavioral ecology in humans
  • Analysis of antecedent-behavior-consequence chains to predict behavior
  • Challenges in data acquisition for real-world behavior analysis

Transcript


So my name is David Cox. I take my coffee black, Nespresso style though, so I don't actually make it. I just push the button. We're back with another MLOps Community Podcast. I'm your host, Demetrios, and talking with David, we got into the ways that you can look at machine learning, specifically unsupervised machine learning, to help you change the way that you interact with this world.

Behavioral economics was something that I learned and I hope you do too. Let's get into it with him right now. And I will say just as a disclaimer, this was a different kind of philosophical conversation. We did not get super technical on how you can deploy these models and what kind of specs we're looking at, what kind of QPSs and all those other fun acronyms that we like to use.

But I still had a blast talking to him and going on all kinds of tangents. And I hope you do too. We should probably start with... You're in Florida and you're wearing sweaters. How is that possible? Yeah.

Theory of relativity applies even to temperatures, I guess. Right. Yeah. And I'm in Germany wearing t-shirts. Yeah. What is the, what is the temp there for you these days? I guess. I do Celsius. I have been converted. So,

It is zero. And the last couple of days it was like minus five, which for the people out there that do Fahrenheit, obviously zero is 32, but minus five is like...

Yeah, in the 20s, I think. High 20s, mid 20s. That is frigid. I don't think I'd leave my house if it were that cold here in Florida. Yeah, it's funny how you get used to it. It really is. Oh, yeah. Absolutely. And I mean, I grew up in Colorado and a lot of people, as soon as they hear that, they go like, wait a second. But yeah, I don't know. I've been in Florida too long, I guess. You got acclimated to it, man. That's too good. Well, the interesting things that I want to talk to you about are...

Not at all around LLMs. I think we started off this conversation, and we really wanted to have it, because there is so much more happening in AI than just the LLM boom and the agent boom and the, whatever, insert your next hype word boom here, right? Yeah.

What are you working on these days? Yeah. So I work, and it may help to give a little context on my background. That may help. So I got into AI mainly from like behavior science space, you know,

like clinical work with humans, but you can think about what we're doing as more like behavioral ecology, but applied to humans. That is, you know, you look at the environment around people: how might you change that to change behavior, like workplace settings, right? Or clinically, things like that. And so then, you know, how I got into the AI stuff I'm working on now: you can imagine the environment is incredibly rich, all sorts of stimuli that we perceive and respond to influence our behavior.

And mid-2010s, all of a sudden, a bunch of research started coming out: AI is better than doctors at X and Y and Z. And so I was, you know, a researcher interested in human decision-making. I was like, what is this thing? Um...

And that kind of pulled me over into, again, kind of where I'm working now is, you know, how can we take information from the larger environment, sensory modalities, wearable technology, things like that. And from those data, understand why people do what they do and then use that to then help them make better decisions, healthier decisions, live happier, healthier lives. So it's not thinking about what the...

diseases are from signals, from wearables or from blood tests and all that. It's more, why are people deciding to stay on the couch or eat those things?

Exactly. That pizza that I love. Why do I choose that on Saturday instead of some broccoli and hummus? Exactly. Yeah, that's exactly right. And what's kind of fun about that in the area of kind of psychology that I got my training in behavior analysis, there's this kind of pocket of literature called say do correspondence.

Basically, the idea is that we don't always do what we say. You know, if you were to ask me, why do you eat pizza instead of broccoli? I might say something, but that doesn't always match up with my behavior, or the reasons why I might actually do something, you know, if you look at the data and really get into it. So that's kind of this really interesting dichotomy, kind of circling back to LLMs, right? Language, text-based,

but may not give us all the information we need to understand, you know, why do you do what you do and how can I use that to help you make better choices? So what are some things that you look at? So it's interestingly, and this is kind of where I also touch point mid 2010s kind of brought me to AI reinforcement learning. I'm guessing a lot of listeners are familiar with.

There's a whole pocket of biological literature, reinforcement learning with biological organisms, been around, you know, 100 years, 150 years. And so we look at a lot of the same stuff. You know, you're a rat in an operant chamber, you're a human, you know, scrolling through Twitter, whatever. There's gonna be a bunch of stimuli that are kind of presented to you

that comes before some behavior you engage in. After you engage that behavior, something happens, right? You know, I make a tweet and someone clicks the like button. And that kind of unit, this antecedent behavior consequence chain is what we kind of talk about. You can analyze those over time. And from that, I can start predicting, you know, based on the behavior you engaged in, what's a reward or a reinforcer for you. And then I can start using that to predict what you'll do next.
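The antecedent-behavior-consequence unit described here can be sketched as a tiny log analysis. This is an illustrative toy, not anything David's team actually ships; the event names are invented.

```python
from collections import Counter

def likely_reinforcer(events, behavior):
    """events: (antecedent, behavior, consequence) triples; return the
    consequence that most often follows the given behavior -- a crude
    first guess at what might be reinforcing it."""
    followed_by = Counter(c for a, b, c in events if b == behavior)
    return followed_by.most_common(1)[0][0] if followed_by else None

log = [
    ("notification", "open_app",  "like_received"),
    ("boredom",      "open_app",  "like_received"),
    ("notification", "open_app",  "nothing"),
    ("boredom",      "read_book", "calm"),
]
print(likely_reinforcer(log, "open_app"))  # like_received
```

A real system would weight consequences by recency and magnitude rather than raw counts, but the ABC bookkeeping is the same.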

Twitter, Meta, they all use the same algorithms for human behavior, right? We're just talking about doing that in context of like health behavior. Yeah. That's why I have so many notifications when I sign on to LinkedIn. And you can't seem to not get distracted by them because they're so good at getting your attention. Yeah. It is killer. And so then how are you trying to help people make the right decisions from that? And basically like...

I still think, break it down, like the data that you're gathering is more on...

how many steps I'm taking, and it could be, like, more exercise or deals with the body, or it's also how much screen time I have. Yeah. Could be all related. Right. So you can imagine, yeah, I'm not sure how much you sleep, but 24 hours in a day, minus the hours you sleep, you get that many hours of behavior. So you can think, uh,

Within this kind of total set of behavior, there's really, it's called the matching law, one of the known principles of biological organisms: you tend to allocate the amount of time to an activity that corresponds with the amount of reward that you get from that thing.
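The matching law is often written as B1/(B1+B2) = R1/(R1+R2): the relative time allocated to an activity tracks its relative reinforcement. A minimal sketch, with made-up reward units, also shows how adding an extrinsic reward to a target behavior shifts the predicted allocation:

```python
# Matching law sketch: predicted share of time for each activity equals
# its share of total reinforcement. Reward units here are invented.
def allocation(rewards):
    total = sum(rewards.values())
    return {k: v / total for k, v in rewards.items()}

baseline = {"netflix": 90, "walking": 10}
print(allocation(baseline))   # walking predicted at 10% of time

# add extrinsic reward (e.g. a payout contingent on hitting a step count)
boosted = {"netflix": 90, "walking": 10 + 50}
print(allocation(boosted))    # walking's predicted share rises to 40%
```

The real literature fits bias and sensitivity parameters (the generalized matching law) rather than assuming strict proportionality, but the intuition is this division.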

So, you know, if I'm looking to say change your physical number of steps that you take, physical activity levels, I would look and see, you know, how much time do you walk now? What else do you do in your day? What are those kind of things that you find rewarding? Netflix, right? Or whatever, you know, maybe scrolling Twitter, all that kind of fun stuff. And then the question becomes, is there a way that you can contact more reward for physical activity that

than maybe you do from, say, Netflix or something. And this, you can imagine, has a lot of therapeutic implications. You know, if I'm engaging in a lot of substance abuse, I spent some time doing research in that area, I want to get you to shift away from, you know, cocaine to something else that's maybe a bit healthier. So you try to figure out: what can I create to compete with the reward that you get from that maybe unhealthy behavioral pattern, to shift it to some kind of healthier behavior pattern? Yeah.

But, you know, now we're also talking, like, these are data issues, right? Amazing in theory. How do I get the data for you that allows me to really understand your day? And fortunately for the line of work I'm in, the last 10 years has been incredible, right? Technology just allows us to collect so much data on so much more than we could even 15, 20 years ago. But still a nontrivial challenge. The Apple Watch and the Whoop. Yeah. Oh, yeah. Come on. Yeah. Yeah.

But the part that I am not clear on still is how are you proactively trying to associate more dopamine or serotonin or whatever is released in the brain with

When I exercise more or when I take more steps, how do you make that connection? Yeah, yeah. So there's kind of a fundamental principle, going back to that idea of the matching law: behavior flows where reinforcement flows, is kind of the fun phrase in there. Right. So if I can look at your behavior across the day at baseline, before I'm doing anything, I can roughly see, you know, you spend three hours on Netflix, 10 minutes walking.

There's a lot of value in Netflix. What's going on there? How can I take something of that and provide it so you can only access it, let's say, through physical activity? One kind of popular phrase for these kinds of interventions is contingency management. Very simple. A lot of research settings.

You know, maybe you'd prefer to sit on your couch, but what if I give you $20 for hitting your step count today? What if I give you $50 a day, $100 a day, right? You can kind of start increasing the value and kind of putting on these extra reward contingencies, they're called, right? So they don't occur in the natural environment, but you can supplement, augment, add on to the reward value for the healthy behavior.

To get it to shift. And then clinically, usually the challenge is, all right, amazing. You can get changed in physical activity for like less than a dollar a day. You get people hitting their step counts. Wow. The challenge then becomes kind of like what you were leaning into a little bit, I think, is

amazing that I'm now walking, but do I now find walking intrinsically rewarding? So you can actually fade out those extra rewards and things I've added on there. Some people that's easier. For others, it's more challenging. There's kind of research ongoing there. That's the basic idea, how you can analyze it and think about it. How are you reaching out to people? How are you setting up these? Do I give you, do I just say, hey,

whatever app it is, you have unlimited, untethered access to all of my data, like the Whoop and my screen time and everything that I'm doing, to help me become a better person from goals that I've set. Yeah. Wouldn't that be nice if you would give me that? That would be great. Honestly, I don't even care about any data privacy, but we could go over how you are keeping my PII clean. Yeah, yeah, yeah. No, and I should say at this point,

Data challenges, a real thing. Most of the work that I get into is in clinical settings, so like behavioral health settings, where we have people coming in for therapy. I then know the totality of what's going on in that session. Right. I'm presenting maybe different learning trials, they might be called, things like that. You're trying to work on, I don't know, improving your speech articulation, right? Someone that's seeing, like, a speech therapist or something.

Bring them in. I know the behavior I'm trying to change. I have all the data, because they're in my context, on, you know, when I'm presenting you different words to say, how exactly are you saying them back, if we're working on, like, different pairings, chains of words,

sounds that you might emit, right? What does that look like? Where do you start to make mistakes and things like that? So much easier to get the data that you need in the types of clinical settings that I'm kind of working in than, like, pro-health behavior in daily life. Um, I've set up a few systems for myself just with, uh, my Whoop, time tracking, uh, screen time and things like that. Um, you know, maybe there's a product play down the road. I don't know. I'm a researcher, scientist first and foremost, but, um,

Yeah. The data challenge is a real deal. But most of the stuff, yeah, it's a clinical setting, it's kind of whatever. Well, because it does feel like what I want needs to have such a 360 view of who I am. Yeah. And it needs to understand all of my habits, the good, the bad, the ugly. And I think about how could I have a product in my life that has almost like God mode? Yeah, yeah, yeah, yeah. That's exactly right. Yeah. Yeah.

And you're doing it from your side, which is almost like the only way that it can happen, right? I know I've had friends who would plug into their Oura Ring API and then create different scripts that could run off of that.

Which feels like the only way. It's, like, very hacker-esque right now. Oh, it's incredibly hacker-esque. Yeah. And that's kind of what I've done in my own life too. Going back to this idea, like the matching law, right? So I've been, for 15 years, tracking, wake up to when I fall asleep, how do I spend my time? Wow. Whoop data, right? Same thing. I have custom scripts pulling in Strava and all that fun stuff. Peloton, um,

But it's hard. I know there's still gaps in the day that are missing. And this is where, again, I get really excited about what kind of what's possible five years from now, 10 years from now, and how so much of this stuff, right? You think physiological data, that's not really an LLM call. It's not like text-based. It's just raw, you know, physio tensors make this beautiful to play with. And so it's, yeah, but you're exactly right. It's God mode, right? How do you get...

Get data on enough stuff. Maybe you don't need data on everything, but on enough things from my life that I can impact those things that are most meaningful to me. When, you know, when I think about what are my values, like, what does it mean to live a good life? Yeah. Well, especially if you're like me and you try and set goals, I'm always trying to get better.

And I imagine if I knew a lot more about what you know about, I would be better at giving myself the rewards to create those habits. Like reading Atomic Habits was not good enough for me to become the perfect man that I'm trying to be. Yeah. Yeah. Oh, sadly, that did not work. But if there were the possibility of an app to be able to understand me,

And have access to everything. And then help me.

with the goals that I'm trying to get. Like just a simple thing that I'm trying to do these days. And I find it very difficult because it's breaking habits and putting new ones in is read before bed. I read when I wake up really easily, but reading before bed for some reason is really hard. I would prefer to scroll TikTok. Yeah, that's fair. I mean, who wouldn't prefer to scroll TikTok? I can't fall for it, but still like,

I feel like I could do it. Oh, yeah. And I mean, there is a whole body of scientific literature called self-management. It comes from the same behavior science literature. Rather than having others impose these rewards for us, how can we do it for ourselves? And there are a bunch of strategies, like off-the-cuff someone might throw out. When you go to bed, rather than have your phone next to you with your alarm, it's across the room. Oh, yeah.

So I set my alarm, I set it down. And then even that, increasing the effort to go get it to then scroll TikTok, is probably going to prevent me, especially if my book's right here, right? So setting that differential effort is one strategy to make things, you know, most people choose the less effortful option. Oh, that's great. There's a whole rich literature. What's kind of fun about it for me is going back again, like the say-do correspondence, all that kind of fun stuff.

We've known about all this stuff for decades. Implementing it is hard. Getting data on your own behavior can sometimes be hard. And we also don't always, we're not always aware of the reasons why we might pick that habit we're trying to break. You know, I want to do something different, but I find myself doing the same thing. I also have a sweet tooth every night after dinner. You know, I can't break that one. But, you know, you start collecting data and analyzing your behavior over time and you can start to understand.

ah, you know, it's X, Y, Z. These are the reasons why TikTok is so appealing to me before I go to bed. Yeah. And so how does this play into AI? How are you using AI for these type of insights? Yeah. So again, going kind of back to the clinical space, because that's where I spent most of my time. I've done some of this with my own kind of quantified self data, but more in the clinical space.

So you imagine you have a lot of this data, again, going back to this idea of clinical decision making. There's a lot of stuff that's come out. I'm not sure if you read the book Nudge. Long story, Cass Sunstein and Richard Thaler, I believe. What was it about? Because I feel like I know the basic idea. Yes, this pocket of research, behavior science, a slightly different area, but similar ideas, called behavioral economics.

Basically, the idea is that, you know, traditional economic theory, all humans are rational agents. We optimize, we always choose the best course of action. And that's, you know, when we make decisions, we're always optimizing, maximizing.

But, you know, you look around at humans. I don't know. We smoke. We drink way too much. We make a lot of suboptimal decisions, is like the fancy phrase. And so the question this book Nudge asks is, you know, what are all the ways that we might nudge ourselves to quit making these suboptimal decisions and make better decisions for ourselves and whatnot? Yeah.

And you can imagine that's also been applied to the clinical decision making literature. Right. There's one study, 2016: the third leading cause of death in the United States was physician errors. Right. They made the wrong choice. Somebody died. Yeah. So.

You know, going back to this idea, we have a lot of data. People are in these clinical context, healthcare professionals. They want to do the right thing, but they may not always make the best choice and they may not always be aware of all the data and the things around them that could inform their decision. So where AI comes in and the things that, um,

that I get excited about. And we have one thing that we built that's patent pending. You can imagine I have this data. First thing I can do: unsupervised machine learning, right? All these different patients, let's just start finding different kinds of patient groups, patient cohorts, patient profiles, whatever you want to call it. Personas in marketing, same basic idea, right? Different patterns of clinical presentation that might then inform how I intervene, how I present some kind of clinical intervention. Yeah.

So unsupervised machine learning comes in there, allows us to identify really interesting groupings. Next phase, usually in some kind of healthcare setting, I'm trying to maximize patient outcomes. Classic supervised learning task. All right, I have this kind of a patient profile. Given everything I know about their clinical presentation and where I want them to be,

What can I do clinically as a therapist to help move behavior in that direction and not in some other direction? Again, clinical decision making, helping make optimal decisions versus suboptimal decisions.
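The two-stage idea here (unsupervised cohorts, then per-cohort decisions) can be illustrated with a toy 1-D k-means on made-up severity scores. A real clinical pipeline would use far richer features and validated models; this is only a sketch of the shape of the first stage.

```python
def kmeans_1d(values, iters=10):
    """Toy two-cluster k-means on scalar features: assign each value to
    the nearest centroid, recompute centroids, repeat."""
    centroids = [min(values), max(values)]  # simple initialization, k = 2
    groups = []
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            nearest = min((0, 1), key=lambda j: abs(v - centroids[j]))
            groups[nearest].append(v)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return groups

severity = [1, 2, 2, 9, 10, 8]  # invented clinical severity scores
low, high = kmeans_1d(severity)
print(low, high)  # low-severity vs high-severity cohorts
```

The second stage would then fit a supervised model per cohort mapping intervention choices to outcomes, which is where the decision support comes in.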

And even those in and of themselves, you can imagine, you're starting to probably wrap around all sorts of different ML models, chaining ensembles together. Most people, when they enter some kind of therapeutic context, it's rarely one goal that they're working on. They may have 10, 20, right? Now you're into hierarchical things, you have decision trees. I mean, it gets incredibly exciting. But this is, you know, coming back to the kind of main theme that we wanted to start talking about:

There's so much complexity there. And I also need this thing to be controlled, compliant, transparent, all those fun things that you don't necessarily need an LLM for. Probabilistic outputs are also dangerous in healthcare settings, right? I need it to be discriminative AI. It needs to be very good at what I'm asking it to do. Yeah, and that's a lot of the work that I spent my time doing the last 10 years or so. I imagine there's a lot of room for folks to come in and either...

say that they've changed when they haven't, or, uh, they have changed, it's just they changed for a week and then they fall off the wagon again. Yeah. You'd like to weed those out. Yeah. Yeah, absolutely. Um, so for the first one, for those that may say they've changed, and they may genuinely believe it, right? I feel I'm a new man, right? I'm a new person. Um,

Getting data on that behavior. That has been me, I will say. Like, just so we're clear. Yeah, yeah. I'm off my sweet tooth kick, right? Yeah, I started ice baths and now I'm a new man. I'm reading before bed. It's all good. Yeah.

Yeah. And so in that context, you know, therapeutically, if I was your therapist, love that you're telling me that every day, though I'd be asking you to collect data on your behavior in some capacity, or we'd figure out a way. So you could say that, but then we'd look at your data and say, well, your behavior, like, that was a spike.

Your behavior hasn't quite changed to the level that you were after, or whatever. So that's kind of one way. Um, the other thing that you're talking about is this idea, a popular phrase that people might be familiar with: relapse, right? Yeah. I got better, then I went back to my old patterns of behavior. Um,

This is a classic behavioral pattern studied in the behavior science literature. You can kind of think of it almost like a time series forecasting challenge, right? What are the variables in the environment that, when they combine, allow me to predict that you're going to relapse, versus instances that you won't?
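That forecasting framing could be sketched as a rolling-window risk score. The adherence encoding and the threshold below are invented for illustration; a real model would learn from many environmental variables, not just recent streaks.

```python
def relapse_risk(daily_adherence, window=7):
    """Fraction of the last `window` days on which the healthy behavior
    lapsed -- a stand-in for a real time-series forecasting model."""
    recent = daily_adherence[-window:]
    return 1 - sum(recent) / len(recent)

history = [1, 1, 1, 1, 0, 0, 0]  # 1 = stayed on track, 0 = lapsed
risk = relapse_risk(history)
if risk > 0.4:  # invented threshold
    print("flag for extra therapeutic support this week")
```

The point is only the shape: score recent behavior, and trigger added support before the predicted relapse rather than after it.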

And then you can start again. Same idea. You need the data. But I can start saying, again, amazing that you said you took ice baths; you have this pattern of behavior in your history, Demetrios. I can tell you're going to relapse next week. You know, let's help you stay on the straight and narrow, or add in these additional therapeutic resources or whatever to get you over the hump this time, so you won't relapse or whatnot. It reminds me a little bit. Do you ever feel like it's a little Minority Report?

Oh, yeah. Absolutely. So, yeah. Full confession at this moment in life: I'm a kid of the Matrix era. Yeah. You know, kind of came up in the Matrix era, Minority Report, Ex Machina, saw those. All that stuff was going on right when I was in my Ph.D. for behavior science, starting to work in AI. And I was.

I mean, the idea is incredibly intoxicating, right? And I think what's also interesting is if you look at a lot of tech companies, arguably they're not to that same full extent, but they're using these same ideas, right? How can I make my product more addicting to keep your eyeballs on it?

What we're talking about is like we can do that exact same thing, but to help people live healthier, happier, better lives. We can use it in kind of both ways. But yeah, I mean, I think that future, that Minority Report future is coming at some point. And I'm 100% behind you on I would much rather have

Yeah. Oh, absolutely. Yeah.

I really like that idea that you kind of mentioned of, like, you know, maybe turning some of this system around, turning that into a product and giving it back to the user to say, hey, connect in what you want. You know, I'm not going to, and, I mean, technology's such that we could run stuff on the edge these days. Like, I don't need to collect your data or save it anywhere. But you'd give that back to the user and say,

Connect in whatever you want to get the data in there that you need, state your values, and then just build a model on Demetrios' behavior, right? Yep. Build a custom model for you and nudge you, recommend to you. There's a whole body of literature called Just-In-Time Adaptive Interventions. Basic idea is, you know, at that moment of choice, right before you make some kind of unhealthy decision, you get a nudge or a prompt or something to make the healthier choice. So basically, when I am...

Right about to grab my phone before bed. It's like, oh, remember that book? Yeah. You know what I think about a lot as something that I want to start doing is just before I read a book to my kids for bed.

I shut off the phone, and just the act of having the phone off and then having to restart it is enough of a barrier for me to say, ah, fuck it, I'll read a book. Yeah. Yeah. Oh, I love it. Yes. You're already thinking about this idea: if I increase effort or the delay to the reward, those are classic examples that we know reduce its value. So, you know, what's more immediate, lower effort, we tend to choose those things. So yeah. Yeah.
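A just-in-time adaptive intervention of the kind described above boils down to a trigger rule evaluated at the moment of choice. The event name and bedtime window below are invented for illustration, not any particular product's logic.

```python
import datetime

def should_nudge(event, now, bedtime=datetime.time(21, 30)):
    """Fire a nudge when a risky event happens inside the bedtime window."""
    return event == "phone_pickup" and now.time() >= bedtime

now = datetime.datetime(2025, 4, 7, 22, 0)
if should_nudge("phone_pickup", now):
    print("Remember that book on your nightstand?")
```

Real JITAI research layers learned, per-person risk models on top of simple rules like this, so the nudge arrives only when it is both timely and likely to help.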

put the carrots at the front of the fridge with the cake behind it, you know, and I'm still going to go for the cake because it's just so good. And so I'm constantly trying to figure out those efficiencies. And I imagine you start to see areas where you can do more. And, and just from me talking to you, I really appreciate this because I recognize that

One thing I want to do, and it's that do say dissonance in my own life, is I constantly tell myself I'm going to go to bed early and before I go to sleep, I'm going to read a few pages and then go to sleep and nail all my sleep score goals. Right. Oh, yeah. Yeah. Be the perfect man. And that only happens when I'm traveling on my own, when it's at home. It's a disaster usually. Yeah.

And thinking about it in the way of how can I set up more friction is something that I didn't necessarily actively do. It was more that I would do it

in a way that I just recognized, oh yeah, like I don't sleep with my phone in the room. I sleep with it outside in my living room. And that's one thing that I make sure to do because I recognize that I don't look at it in the morning and I read in the morning, right? And so doing it the reverse way and just shutting it off before I read my kids a book

Yeah. Is setting that friction consciously and then like regaining a bit more of my willpower. Yeah. Oh, I love it. The other thing I think that's interesting that you said is that it's easier for you to do when you're traveling. So the other thing that I'd be curious about is like,

what happens, like, as you're leading up to the end of the night that makes TikTok that much more valuable? Like, are you, I don't know, already jacked up, and you're like, yeah, now I also need to get some TikTok? Whereas on the road, maybe you're already calmer, and so you're like, I'm going to get into a book. But yeah, most behavior is what we call multiply controlled, right? Dozens or hundreds of things come in to influence it. So yeah.

It's trying to figure out, you know, what are the main things or the things that I might be able to tweak or tug on to get the behavior I want versus the one I don't. Yeah, I imagine it's not only the behaviors that you can tweak or tug on. It's the ones that if you tweak and tug on them, they're going to have a domino effect in what you end up doing. Yeah. Yep.

Exactly right. It's funny you say that. So I'm also faculty at Endicott College, have some doctoral students here, and we're working on a paper right now on this idea. We call it keystone contingencies, if you're familiar with, like, keystone species in ecology. So the basic idea there is, look at Yellowstone National Park, right? You have some kind of ecological system. There's a species within it that, if you were to remove them, like the wolf,

the whole ecosystem would, like, reorganize. And they learned this the hard way and reintroduced them, whatever. A bunch of other areas have played around with the same idea: keystone people in social networks, keystone actors in the corporate world. So we're playing around with this idea of keystone contingencies. In your own life, same idea, right? Is there one behavior, one thing that I might change, that has this ripple effect throughout the rest of my day? And for me, it's running in the morning. I'm,

If I run in the morning, I drink less alcohol the night before, my diet tends to be better, I'm more focused at work. And it's like that one thing: if I can tweak that, I have a better day. It's a keystone moment. It's like this pillar. I also think there's a lot to be said for how you end up seeing yourself and what you identify as. You identify as a runner now. Oh, yeah. It is something like, well, if I'm a runner, I got to run.

Yeah, yeah, yeah. Oh, fair. Yeah, yeah. And if you're a nighttime reader, you're going to have to read for the night. That's exactly it. You end up doing these things because you identify as that. Yeah, yeah. I agree with that. I can't remember where I heard that, but it was in something. It might have been Atomic Habits or it might have been some other habits book on...

how it's much easier to create a habit if you identify as that type of, Oh yeah, sure. Whatever. Like if you're, if you're a smoker and you identify as a smoker, it's a lot harder to kick the habit of smoking because it's like, yeah, I'm a smoker. Oh yeah, definitely.

And there's some really decent literature suggesting kind of the same thing: the language that we use can add value to activities or whatever, which kind of fits the same idea, right? If I call myself a smoker, then it adds more value to the cigarette, in addition to the nicotine. Whereas if I'm a non-smoker but I happen to hit it, same nicotine, same stuff going in. But just that bit of language can change the reward value, which is kind of crazy to think about. I mean, humans are nuts, but.

Yeah, and this is also something that is fascinating to look at through a machine learning model. Oh, absolutely. Incredibly fascinating. Are there certain sentence structures that, for you, tend to go along with certain behaviors? Thinking about the sentence structures and analyzing: if somebody comes in

and they are speaking in certain ways, is that indicative of certain actions? Yeah. Oh, yeah, definitely. And there's a whole body of research called framing effects that studies this kind of stuff. And I think, going back to how AI and ML fit in, I'm personally just enamored with unsupervised machine learning, this idea that I don't know what I don't know.

And it's going to tell me stuff. That's just incredibly intoxicating to me. So going back to this, where AI would be interesting is: you come in and just talk about things that you feel you do well, things you don't do well. Can I analyze that behavior, understand your frame of reference, your perspective? And then, you know, there are some cultures that have color categories I've never perceived, right? Just this idea that AI can wrap around language as a human phenomenon,

and then can I bring that into that therapeutic setting and say, hey, here's how you talk about and perceive the world; here's an alternative perspective that you may not have even known existed that may help you. Maybe we can get you to frame things in this different way. And then we can again go back to the data: does this actually change your behavior? Yes, no. Probability of relapse, all that kind of fun stuff. Yeah.
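Long before any large model, the framing idea is approachable with simple text features. A toy sketch (the word lists are invented for illustration, not a validated clinical instrument) that scores a chunk of speech for "agency" versus "helpless" framing:

```python
# Toy framing analysis: count words signaling agency vs. helplessness.
# Word lists are invented for illustration only.
AGENCY = {"choose", "decide", "will", "can", "plan"}
HELPLESS = {"cant", "never", "always", "stuck", "impossible"}

def framing_score(text):
    """Return (agency_hits, helpless_hits) for a chunk of speech."""
    words = text.lower().replace("'", "").split()
    agency = sum(w in AGENCY for w in words)
    helpless = sum(w in HELPLESS for w in words)
    return agency, helpless

a, h = framing_score("I can't stop, I'm always stuck like this")
print(a, h)  # helpless framing dominates in this utterance
```

A real system would cluster richer linguistic features rather than count keywords, but the principle is the same: the shape of someone's language is itself data.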

It almost feels like, and this is where I'm drifting into the AI hype territory because you got to be really careful as you are doing this. But I know there's a lot of people that will talk to OpenAI's real-time API or like the voice component. It's very shaky and anybody that's used it knows that it's not amazing.

But I see a potential world where you're sending a voice memo and you're just blabbering on with a few different prompts. And then that gets analyzed. And potentially you can do it with an LLM at first, but then you get more specific and more...

fine-tuned models, fine-tuned for you and what you're going through, I guess, if it's needed. Yeah, absolutely. Have you seen the movie Her? Yeah. Oh, yeah. I feel like that's kind of it: the thing that's with you, that hears your language, sees what you see, and it can tap into the collective conscious, but then build models specific for you and what you're after. And then again, you know,

from my angle, there's no reason we can't do that and help everybody live happier, healthier lives. It doesn't always have to be the bad version. I mean, there's a downside to any technology; if this exists, bad actors will do what they do. What are other ways you've seen unsupervised machine learning used that you like?

Oh, sure. So another kind of similar flavor, but now in the education context, because I work in higher ed, teach classes, things like that. Same idea with students. I did some work with a company called Glimpse K-12 when I was doing my data science postdoc, a similar idea where I can take patterns of behavior in educational settings, assessment scores, whatever, and,

from that, identify kids that may need more resources to be successful in a classroom setting. And if you're a teacher, you have 20 or 30 kids; it's hard to give everybody what they need to succeed. So if you can use any kind of unsupervised machine learning, or any kind of method, to identify learner types, student types, whatever, you can then match kids up with resources so that they have a higher probability of being successful, learning the things they need to learn. You know,

personalized systems of instruction are another big thing, right? Rather than every kid getting the same set of tasks on the chalkboard, Johnny's going to get one set of math problems and Elizabeth's going to get another set, because they're just different students, different skill sets, different resources. Yeah, different ways of learning. Actually, back when I used to live in Spain, I was teaching, and you learn right away that each student is
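The learner-type idea is classic unsupervised clustering. A minimal sketch with a tiny hand-rolled k-means over invented per-student features (quiz scores, video minutes, forum posts); in practice you would reach for something like scikit-learn, but the mechanics fit in a few lines:

```python
import random

# Invented per-student features: (avg quiz score, video minutes/week, forum posts/week).
students = [
    (0.90, 10, 1), (0.85, 12, 0), (0.88, 8, 2),   # strong on quizzes, light on video
    (0.55, 95, 7), (0.60, 80, 9), (0.50, 110, 6), # heavy video and forum use
]

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means (Lloyd's algorithm) for small toy data."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared Euclidean distance).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute centers; keep the old center if a cluster emptied out.
        centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

groups = kmeans(students, k=2)
for g in groups:
    print(len(g), g)
```

On this toy data the two learner types separate cleanly; with real data you would normalize features first, since raw video minutes would otherwise dominate the distance.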

very different in how they learn and what they need. Some are visual, some are auditory, some need to speak, and all that. And I come back to the idea that we hear so much about how

Small groups tend to be the most effective when it comes to teaching or just one on one. And then you can be very personalized. And so how can we, as you're saying, create modules that are personalized for each individual that are hitting their needs?

Instantly I think about: how do you know? How do you get the data? How do you have the context to know what each individual needs? Because if I'm just coming into school on day one, you don't know that I like to listen more and learn through auditory methods. Yeah. Assessment scores are, I mean, the only thing a lot of places have right now. Where I've seen it, and even some of my own work, be most successful is

online learning platforms, right? Canvas, some of those places where I have a bunch of student behavior captured in a browser. Where are you logging on? What are you engaging with? How long are you reading something? When you submit an assignment, what's the quality of it? What grade do you get back? What feedback is the teacher giving you? Those kinds of things. But yeah, again, this goes back to some of the things that I've thrown out to the MLOps community.

I love that people are building some of these crazy systems, but a lot of the stuff we've talked about today are data challenges. How can I start getting the data I need for some of these very important real-world problems? Then, I mean, we can just start with classic ML; we don't even have to get crazy. Let's just do something to help out some of these kids, because a lot of times they fall through the cracks.
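"Classic ML" here can be as plain as a logistic regression over a couple of engagement features. A sketch on invented data (the numbers and the attendance/score features are made up for illustration), flagging students whose predicted pass probability falls below a threshold:

```python
import math

# Invented training data: (attendance rate, avg assignment score) -> passed (1/0).
X = [(0.95, 0.90), (0.90, 0.80), (0.85, 0.95),
     (0.40, 0.50), (0.50, 0.35), (0.30, 0.40)]
y = [1, 1, 1, 0, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic-gradient-descent logistic regression, small enough to read whole.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):
    for xi, yi in zip(X, y):
        p = sigmoid(w[0] * xi[0] + w[1] * xi[1] + b)
        err = p - yi
        w[0] -= lr * err * xi[0]
        w[1] -= lr * err * xi[1]
        b -= lr * err

def at_risk(attendance, score, threshold=0.5):
    """Flag a student whose predicted pass probability is below threshold."""
    return sigmoid(w[0] * attendance + w[1] * score + b) < threshold

print(at_risk(0.35, 0.40), at_risk(0.92, 0.88))  # flags the struggling profile only
```

Nothing exotic: the hard part, as the conversation keeps returning to, is getting trustworthy attendance and assignment data in the first place, not the model.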

If you have a tough start in kindergarten, it can cascade through elementary school and be hard to catch up from. And it is so

true that it's a data challenge, in that if you're a teacher in a class with 30 kids, how are you getting the data to know what these folks need in these moments? So it makes sense that if they're interacting with a website, you can get much richer data. Or if you have them do these assessment tests, that's a great start too. But

I feel like, yeah, you've got to constantly be capturing that data. Oh, yeah. And I mean, don't get me wrong, there's some really cool research going on

where they put cameras in rooms. Oh, wow. Right now I'm tracking: what is every kid doing throughout the day? What are they getting exposed to? What's their behavior? All that kind of fun stuff. And most kids are on iPads these days, so now imagine I can start integrating data from multiple sources. Again, I've only seen this from a research perspective, primarily in maybe a hospital mental health setting, or

large public spaces, right? Trying to understand where people walk or whatever. So we do have some computer vision technology that I think allows us to at least start playing with these data sets. But I've only seen this stuff research-wise; I haven't really seen a product, right? Nobody offering, hey, K-12 school, buy my cameras and we'll do this stuff for you. I don't think it's in their budget, even if they... Exactly. Something like that. But yeah, I think the technology's there.

That's got to be the hardest part is like we've got this great technology to help your students, but it's super advanced and it probably isn't cheap. Yes. Oh, exactly. Yeah.

And that's where I think some of the people in the MLOps community come in. They have these skills, they're incredibly smart, talented, working on stuff. And I know half the challenge, like we mentioned, is: can I get the data? Can I make the ROI worth it for a product to be built? I have to imagine people out there in the community have the answers. Get that data. Get that context. I really like that idea of,

How can we get more data to get more context for the kind of stuff that is going to make an impact? And it doesn't necessarily need to be

context for your LLM context window. That's right. Yeah, yeah. But I actually love that you said that, because if you look at the behavior science literature, whether it's a human, a frog, an eagle, whatever, we understand behavior by understanding the larger context within which it occurs. And there's a great analogy with LLMs, right? The more the LLM understands, hey, this is what I need you to do, and here's the information you need, the better the output.

Same is true with humans, right? It's just, you know, we talk a lot. We love language. And so often we default to talking with people: why do you feel that way? Why are you doing what you're doing? But there are other ways to understand

why we do what we do that are often more accurate, but more data intensive. Yeah. It requires people like those in the MLOps community to run those through algorithms, extract the insights, and put them in front of a teacher who's not going to be coding in Python, right? They just need their dashboard or something. Exactly. They need that product, that end-user product. Exactly. Yeah.