Welcome to Practical AI, the podcast that makes artificial intelligence practical, productive, and accessible to all. If you like this show, you will love The Changelog. It's news on Mondays, deep technical interviews on Wednesdays, and on Fridays, an awesome talk show for your weekend enjoyment. Find us by searching for The Changelog wherever you get your podcasts.
Thanks to our partners at Fly.io. Launch your AI apps in five minutes or less. Learn how at Fly.io.
Welcome to another episode of the Practical AI Podcast. This is Daniel Whitenack. I am CEO at PredictionGuard, and I'm joined as always by my co-host, Chris Benson, who is a Principal AI Research Engineer at Lockheed Martin. How are you doing, Chris?
Doing great today, Daniel. It's a beautiful spring day here in Atlanta, Georgia, and I've got to say, the flowers are coming out. It's a nice day to talk. They're probably distributed all over the various lawns. Everywhere. Federated, even. Yes, yes. Well, Chris, this reminds me of last week.
Last week, we had a kind of part one intro to federated learning, and some details about that, with Patrick from Intel. He mentioned quite a few times that he had recently been at the Flower Labs conference, and he brought up the Flower framework around federated learning. Well,
We're privileged to kind of carry on the conversation around federated learning into a kind of part two on the subject because we've got Chong Shen with us, who is a research engineer at Flower Labs. Welcome, Chong. How are you doing?
Hi, I'm doing very well. Thanks for having me. Yeah, yeah. And actually, we were talking before the show, and this is the second time that we've gotten to chat about Flower on the podcast; the first was back in 2021. So even, you know, before AI was invented with ChatGPT, apparently, we were having conversations about AI, and one of those
was with Daniel from Flower. That's episode 160 titled Friendly Federated Learning. It took me a second to say that one, but I'm sure a lot has changed and updated and advanced in that time, of course. Maybe just to start things out, Chong, could you give us a little bit of a context of
of your background, and how you got introduced to this idea of federated learning and eventually ended up working with Flower? Yeah,
Yeah, absolutely. Well, thanks again for having me. So my background is in computational physics. I spent many years doing research in the computational physics field, both my PhD and postdoc, so I worked a lot on parallel computing on supercomputing clusters.
I was also very interested in machine learning and deep learning in general. So when I pivoted away from academia to go into what they call industry, there was this space where you have distributed learning. That was in 2021. When I started my career back then, it started as a sort of
data science consulting business, but specializing in federated learning. And I saw lots of projects that were very interested in adopting federated learning, or this distributed learning approach, to solve specific problems that they had. But I also came across the Flower framework, and open source development is a big passion of mine.
So being able to develop a framework that is used effectively, with a very permissive license, I think is a pretty cool thing to do. So that's why I decided to join Flower Labs and become a core contributor to the framework itself.
Yeah, and I already feel connected with you, because my background is in physics as well. It's always good to have other physicists on the show who have somehow migrated into the AI world. I'm wondering, in that transition, like,
you mentioned this transition from academia to industry, and you were getting into even consulting around federated learning. Was that idea of federation or distributed computing, however you thought about it, a key piece of what you were doing in academia, which led you into that interest? Or was it something else that sparked the desire to really dig in there as you were going into, quote, "industry," as you mentioned?
Yeah, it wasn't something I came across in academia, surprisingly. But somehow, when I stepped into the data science world, I came across people who were looking into it. And that became an approach that back then we sort of adopted to try and solve some problems. We saw that, you know, federated learning could be a way to solve them. And then it's very coincidental: okay, it's distributed learning, it's distributed computing. So it resonated with me quite strongly.
Yeah. And was that related to working with sensitive data, or in regulated industries, in those consulting projects? Just interested in kind of that progression. Yeah, actually, there are, I would say, two broad categories. One where the data is incredibly sensitive and can't be moved;
we usually refer to that as really siloed data, data that absolutely should not leave the boundaries of where it was generated. And then the second group, or second cluster, is the problems where the data sources are so massive, the points at which the data is generated
generate so much data every second of the day that they just can't do any useful or meaningful analysis on that kind of raw data, and you have to do a lot of downsampling.
So they try to look into pushing computation to the edge, and trying to see if they could apply some sort of machine learning or deep learning approach on this massively generated data without needing to downsample it. That makes sense. And yeah, I guess I should explicitly mention as well that in the kind of part one of this two-parter with Patrick, Patrick did provide a kind of
detailed introduction to the idea of federated learning, and we discussed that at length. So if people want to go back to the previous episode and listen through that, that may provide some context. But it probably would be worthwhile just to give our audience
sort of 30-second or couple-minute view on federated learning and how you would describe it at a high level, and then maybe we can jump into some other things. Sure, absolutely. The easiest way to think about it is looking at your classical machine learning approach, right? Classically, you need to bring all the data into a single location, think of a database or on disk, and then you train your model on that data.
But sometimes it's not so easy to actually bring all the data into one location.
just because of privacy reasons around moving your data, some geopolitical considerations surrounding it, and also the data volume that's being generated. So instead of bringing all the data to one spot and training a machine learning model on that, what you do is move the machine learning models to the points at which the data is generated, and then you train these local machine learning models at those sources.
Then instead of moving the data across, you move the model weights to a central server, which is much, much smaller. You can then aggregate the model weights to learn from these various data sources.
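The aggregation step just described can be sketched in a few lines. This is a conceptual illustration in plain Python, not Flower's actual API, and the numbers are made up:

```python
# Minimal sketch of the federated averaging (FedAvg) idea: each client
# trains locally and sends back only its model weights; the server
# combines them, weighted by how many examples each client trained on.
# Conceptual illustration only, not Flower's actual API.

def fed_avg(client_updates):
    """client_updates: list of (weights, num_examples) pairs,
    where weights is a flat list of floats."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    aggregated = [0.0] * dim
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            aggregated[i] += w * n / total
    return aggregated

# Three hypothetical data silos with different local models and dataset sizes:
updates = [([1.0, 2.0], 100), ([3.0, 4.0], 300), ([2.0, 2.0], 100)]
print(fed_avg(updates))  # roughly [2.4, 3.2]
```

Note how the silo with 300 examples pulls the global weights toward its local model; that size-weighted pull is the core intuition behind the aggregation.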
And then over time, as you repeat many, many rounds of this, you end up with a globally aggregated model that has learned from this variety of data sources, without needing to move the data across. That's the essence of federated learning. I'm curious, as you guys have worked on the framework and you have new users coming into it, what usually prompts
a typical user, from your perspective, to move into federated learning? Before they're really fully into it and they understand the benefits and they're sold on it, if you will, what's usually, in your experience, the impetus that gets them into that mindset and drives them in that direction initially? What causes the change in the way they're thinking, so they go, I definitely need to get into federated learning, and go use Flower specifically? Yeah.
Yeah, absolutely. I think, from my experience, the biggest driver is when they realize they can't move their data, right? When they speak to all the parties involved, they say, oh, I have this dataset. Oh, you have this dataset, but I don't really want to share them. And then, okay, this is where federated learning, or FL, comes into the picture, and they decide, okay, we really need to do this. That's one aspect of it. And the other aspect is when there's this big company who has, you know,
let's just say, many, many data sources. They say, okay, it's super difficult to coordinate all our databases together so that we can have a cohesive way to train the machine learning model. And this is also when you try to look for distributed machine learning systems, and you realize they come across very differently. So there are these two vectors that drive the typical use cases.
I'm curious if I can follow up on that, because I have a personal curiosity. I happen to work for one of these big companies that has data in lots of different places. And in addition to that, last week when we were talking, we talked a little bit about some of the privacy issues as well. I'm curious what you think about this. In our case, and we're not the only one,
lots of that data is stored at different levels of security and privacy. There are different enclaves, if you will, where you're trying to do that. How does that ramp up the challenge of federated learning when you have different programs, different security concerns around the different data enclaves that you're trying to bring together through federated learning? Instead of saying all the different locations for distributed data are equal, when you're dealing with different security concerns, do you have any ways of starting to think about that? Because as I come into it as a newbie, that seems like quite a challenge to me. Do you have any advice or guidance on how to think about it?
Yeah, yeah. I think, from my experience, the complexity of the solution scales with the number of data stakeholders involved. And when you mentioned different levels of enclaves, that to me signals that there are many data owners who manage their data a bit differently.
So the key to solving that is to harmonize the data standards first, to be able to get onto a federation.
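To make that harmonization step concrete, here's a tiny hypothetical sketch. All of the names here (sites, fields, mappings) are invented for illustration and are not part of Flower; real harmonization also means agreeing on units, codes, and formats across data owners:

```python
# Illustrative sketch (not Flower API): before federating, each data
# owner maps its local schema onto a shared, agreed-upon standard so
# every client produces records with identical fields and units.

SHARED_SCHEMA = ["patient_age_years", "glucose_mg_dl"]

# Hypothetical per-site column mappings:
SITE_MAPPINGS = {
    "hospital_a": {"age": "patient_age_years", "glucose": "glucose_mg_dl"},
    "hospital_b": {"AgeYears": "patient_age_years", "Gluc_mgdl": "glucose_mg_dl"},
}

def harmonize(site, record):
    # Rename this site's local fields to the shared names, and fail
    # loudly if a required shared field cannot be produced.
    mapping = SITE_MAPPINGS[site]
    out = {mapping[k]: v for k, v in record.items() if k in mapping}
    missing = [f for f in SHARED_SCHEMA if f not in out]
    if missing:
        raise ValueError(f"{site} record missing fields: {missing}")
    return out

print(harmonize("hospital_a", {"age": 54, "glucose": 101}))
# {'patient_age_years': 54, 'glucose_mg_dl': 101}
```

Once every site emits records in the shared schema, the same client training code can run unchanged at each of them, which is what makes the rest of the federation implementation much easier.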
And then from then onward, the implementation becomes much, much easier. I think that's one of the key things that I've seen. And we've kind of talked about your background, the sort of introduction to federated learning, some of those motivations. Maybe before we get into Flower specifically and some of the more production use issues,
you know, from your perspective, you're at a kind of central place within the ecosystem of federated learning, I guess. And just very honestly, because we had that last episode back in 2021, you know,
from 2021 till now, how is the state of adoption of federated learning in industry different than before? How has that grown, or how has that matured as an ecosystem, I guess? Yeah, it's a very good question.
If I were to put a number to it, and this is really arbitrary, I think there's a 100x difference between 2021, when the Flower framework first existed, and now. And one of the key things
that changed in the usage of federated learning is the ability to train foundation models and large language models. This has been a significant change and driving force. So previously, when we talked about using the Flower framework, you might have been confined to models that are not super large, small by today's standards, on the order of millions of model parameters.
But these days, when we're talking about making use of text data and image data for these foundation models, you are thinking about models on the order of billions of parameters. And there has been a fundamental change also in how we have structured the architecture of our framework,
and also to increase the ability to stream large model weights. So all of these things are happening right now as we speak, and there's some exciting new progress. Hopefully we'll release a new version in a couple of weeks. And for the users, the usage is identical. Nothing has changed. But what has been unlocked is the ability to then
train very large models. So all of this really increases the appeal of using federated learning, or the Flower framework, for a larger variety of use cases.
You know what's beautiful about good code? It just works. No fuss, no five-hour debugging sessions at 2 a.m. That's exactly what NordLayer brings to business security. While you're busy shipping features and fixing bugs, NordLayer handles your network security in the background. It's like having a senior DevOps engineer who never sleeps, never takes vacation, and never accidentally deletes the production database. Zero trust architecture? Check.
Apparently, "it works on my machine" is not sufficient for the auditors. The good news is our friends get up to 22% off plans, plus an additional 10% with the code PRACTICALLY-10. That's PRACTICALLY, dash, 10. That's less than your monthly GitHub Copilot subscription, but infinitely more useful when the security team comes knocking. Check it out at NordLayer.com slash practical AI. Again, NordLayer.com slash practical AI.
Well, Chong, as we've dived into the show, we've already started making reference to Flower quite a bit, but we haven't actually really described specifically what Flower is in detail as a framework, and what it brings, and such as that. Could you tell us a little bit about that?
Could you take a moment, and we probably should have done this before, but maybe kind of express exactly what Flower is, what the components are, and how it helps the user begin to federate their data in terms of what their workflow is? Could you talk a little about the basics of it? Yeah, absolutely. So the Flower framework is our flagship
open source code, built under the Apache 2.0 license. And this framework allows any data practitioner to build a federated learning solution. So with the framework, what this means is they are able to, I guess in code terms, install a basic Python distribution of Flower and build
different apps that allow you to construct the fundamental federated architecture. So what it means is to be able to spin up
your server, which aggregates the model parameters, and to write the code to also do your training on the clients. The structure that we provide within the framework allows users to follow the same reproducible way to perform their federated learning. So I think, at the essence, this is what it is. What I also wanted to say is that one of the appeals of Flower, for me personally,
is that we really emphasize the user experience, which is why we always say Flower is the friendly federated learning framework. We prioritize the experience
of all our users. We support them on Slack. We also have a Discourse forum called Flower Discuss, where we actively answer any questions from users. And we also have a fantastic community that has contributed a lot of code improvements to the core framework as well. So we are completely open. We build transparently and are really accountable for every single line of code that we commit,
you know, to the highest standards. Yeah, and I can testify personally. At PredictionGuard, we work with a number of students over time at Purdue University. They have these capstone projects, and we're in the same town, so it's natural that we would work with some of those students. We've done that a couple of times now, and one of those student groups,
I believe it was last year, actually did this sort of capstone project related to federated learning and training language models, translation models, and trying various things. And they evaluated a bunch of different things, but I think ended up using Flower for the reasons that you mentioned. So they were newbies into this world of federated learning. Obviously, very smart students, no doubt there.
But they definitely gravitated to the user experience with Flower, because, you know, they had programmed in Python, and it just sort of came naturally to them. So, yeah, I'm sure that's a common experience that maybe you all hear from others, that sort of natural Pythonic way to approach these topics.
Yeah, yeah, we do. Absolutely. I'm very happy that you shared that experience. It's always good to hear feedback from the community. But yes, Python being the real
driving language behind machine learning and deep learning models right now, it's a really natural way to provide a Python SDK. We've supported it from day one, and we will continue to support it for a long time. I'm curious, kind of extending that just a little bit beyond the language: I like the notion of the friendly framework. The word friendly appeals to me in terms of
that user experience. Could you talk a little bit more about why your branding is around friendly, and what that means from a user experience standpoint? What other aspects of it make it friendly? There are so many things out there that are not friendly, so that definitely grabs my attention. Yeah, absolutely. I think what would be nice to explain is that over the past 10 releases, we have dramatically improved the friendliness of our framework. I hope that's the experience people will get out of it. The main point is to reduce the cognitive load of any developer who wants to use our framework.
So I'll give one concrete example. We introduced the Flower CLI a couple of releases ago, I think probably late last year. And what this does is, with a simple flwr new command, new as in N-E-W,
a user is able to navigate these options through the command line and immediately have a templated project to work with for federated learning. And it runs out of the box. After flwr new, the user just follows the steps, and then you do flwr run,
and it runs out of the box. And we have the core templates that are necessary for users to build on. We have PyTorch and TensorFlow, the typical ones, and the more exotic ones: you have JAX, and those who want it can use NumPy as well. All of these provide the boilerplate code for users to get started with, and it reduces so much startup time. Then with that, once a user has built all their applications, the user can also
really monitor their runs. We also introduced commands like flwr ls. It's really like ls in your terminal, just to see what Flower runs are running at the moment. And also others, like flwr log, to see the logs of your code. So all of these really simple CLI tools help a user
navigate and work with running code much more easily. Previously, I would say in 2021 and early 2022, the Flower framework was in a different place. How did it work?
Back then, it was still friendly, but the way that a user would need to start the federation would be to start three Python scripts. And this is not as intuitive or natural if you want to scale up or put it into production.
So with the introduction of the Flower CLI and a different way of deploying the architecture which drives the federation, it really makes it so much easier for users to start building and then deploy the code. Well, you were kind of leading into maybe what was going to be my next question. You mentioned kind of taking things into production. So some people might hear kind of,
friendly framework, which is a good thing, as Chris mentioned, but they might associate that with, you know,
prototyping and learning and that sort of thing, not necessarily production usage. So I'd love it if you could help us understand: if I'm implementing a federated learning system with Flower, what does a production federated learning system look like? I'm sure there are different sorts of
you know, ways that could manifest. But certainly you've seen a lot of use cases. Maybe you could just highlight some examples for us. What does that production federated learning system look like, and what are some of the considerations you have to think about going from a toy prototype of "this might work" to a full-scale production rollout?
Yeah, absolutely. I think it is a nice segue between the friendliness aspect and moving to production, because what I also want to mention here is that
I walked through a very simplified workflow of how a user would build out an FL solution. With the Flower framework, you can build and write the apps that you need for your aggregation, your server aggregation, and also for the clients, which actually train the models at the data sources. In the first iteration, a user might
actually run it in what we call the simulation runtime. So without worrying about the actual data sources, or needing to work out the data engineering aspect of it, you can test the implementation of the basic architecture in the simulation runtime, using datasets obtained from Hugging Face, for example, or datasets that you create artificially just for testing purposes.
With the same code that you used to train the models on the clients and to aggregate, you can then point the code to a different runtime and execute it in what we call the deployment runtime. And this brings us one step closer to production.
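That workflow, exercising the same training and aggregation logic against artificial in-memory partitions before ever pointing it at real data sources, can be sketched as follows. The function names are invented for illustration; Flower's actual simulation runtime works differently under the hood:

```python
# Conceptual sketch: the same local-training logic is first tested
# against artificial in-memory partitions (a stand-in for a simulation
# runtime), so the data engineering for real sources can come later.
# Function names are illustrative, not Flower's API.

def local_train(global_weights, data):
    # Stand-in for a real training step: nudge each weight
    # toward the mean of the local data.
    mean = sum(data) / len(data)
    return [0.9 * w + 0.1 * mean for w in global_weights]

def run_round(global_weights, partitions):
    # One federated round: "train" on each partition, then aggregate
    # the returned weights, weighted by partition size.
    updates = [(local_train(global_weights, p), len(p)) for p in partitions]
    total = sum(n for _, n in updates)
    return [sum(w[i] * n / total for w, n in updates)
            for i in range(len(global_weights))]

# Artificial partitions standing in for three data silos:
partitions = [[1.0, 2.0], [3.0], [4.0, 5.0, 6.0]]
weights = [0.0]
for _ in range(5):
    weights = run_round(weights, partitions)
print(weights)  # drifts toward the size-weighted mean of the silo means
```

Because `run_round` never touches the raw partitions directly, only the weights the clients return, pointing the same logic at real remote data sources later is mostly a change of runtime, not of application code.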
So once you have this mode of execution, the clients will be tapped into the data sources, and you can then start training your actual federated model. So what does it take to deploy?
Firstly, there's a nice acronym that I like to use from the TinyML community. It's BLERP. I'm not sure if you've come across that before. Have you come across that before? Yeah, but go ahead and explain. Yeah. So the TinyML community talks about bandwidth, latency, efficiency, reliability, and privacy, if I'm not mistaken. I could be wrong with one of them. But BLERP.
In a production-grade system, what you really want is the reliability of the deployed solution to do the full computation. It doesn't have to be federated learning; this applies to systems in general. With the current version of the Flower framework, we have separated out what we call the application layer, where users build apps, and these are the ones that users will modify.
And then we also have the infrastructure layer, which underpins this system. This infrastructure layer is responsible for receiving the flower commands from a user and then to distribute all the necessary code to the clients to actually perform the training.
So in Flower terms, you'll come across this: we call it the SuperLink, which actually hosts the server. And the SuperNodes are the long-running services which basically orchestrate the clients.
These two components are long-running, so users can then run and execute multiple federations across other systems without worrying about any of these components failing. This is where the reliability comes into the picture.
Because the connections are also established, we also handle the bandwidth and the connection, so we try to reduce the latencies between the SuperNodes and the SuperLink as well. So the infrastructure is something that's deployed once and will persist for the lifetime of the project.
And this makes it much easier for users to continue to work with the production-grade system. So it's always there waiting for you. Anytime a user wants to go in, execute a run, and look at the results, it's always there, without worrying about any component failing and stopping the run.
Chris and I are so happy that you are joining us each week to hear from amazing guests and listen to some of our own thoughts about what's happening in the AI world. But things are just moving so quickly, and we recognize and want to see you all participate in the conversation around these topics.
That's why I'm really happy to share that we're going to start posting some webinar events on our website where you can join virtually and actually participate, ask questions, get involved in the discussion, and really deep dive into various topics related to either various verticals or technologies or tools or models.
The first or next of these that's coming up is June 11th at 11 a.m. Eastern Time. It's going to happen virtually via Zoom, and you can find out information about that at practicalai.fm slash webinars.
This is going to be a discussion around on-prem and air-gapped AI, in particular how that relates to manufacturing and advanced logistics. I've seen personally, as we work with customers in this area, just the transformative power of this technology there, from monitoring machine-generated events to providing supply chain advice and so much more. But there's a lot of struggle in terms of deploying this technology in those air-gapped or on-prem environments. So that's what we're going to talk through on June 11th. I really hope to see you all there for this discussion. Again, June 11th, 11 a.m. Eastern Time. It's going to be virtual via Zoom, and you can find out more information at practicalai.fm slash webinars.
So, Chong, I love this idea of the SuperNodes and SuperLinks. I'm trying to work out in my head: let's say I'm working in the healthcare space, and my nodes are maybe different hospitals or different facilities in a network, or something like that, and I have a central place where I have my SuperLink and I'm doing the aggregation. Just from a practical standpoint, as I think Chris mentioned before, you have these different facilities, you have maybe different stakeholders with different data. What do I need to do as, like,
let's say I'm the person that's in charge of running the experiment, training the model. What do I need to do on the setup side to connect in these SuperNodes, or wherever the clients are? What needs to exist there? How do I register them, and what's that setup process to really get going, before I'm able to go in, like you say, from a user perspective, and run experiments, or perform training runs, and that sort of thing?
Absolutely. So there are many ways to go about it, but I think the cleanest way is to think about two groups of roles. One is the administrative role, and they are responsible for deploying the super nodes in each of these, let's say, healthcare facilities, healthcare centers.
They are responsible for making sure the correct user is registered onto the SuperLink, or the federation, and also to basically monitor the usage of the SuperLink itself. So that's the administrative role. And then there is the user role, where data practitioners or data scientists would write their apps, their server apps and their client apps,
and then run these apps on the SuperLink, on the federation that the administrator has deployed. So I think this clear distinction would be an easy way to think about it. So as a start, an administrator would
say there are five hospitals that want to form a federation, and the administrators can go in and deploy the SuperNodes from a template. For example, if using Kubernetes or Docker containers, you can have Helm charts that deploy the SuperNodes in each of these five hospitals.
The SuperLink can be hosted on a trusted third-party server, or Flower Labs, for example, can host a SuperLink for you, because it's just a simple service. And then the users would register or be authenticated
on the SuperLink. So they need to be both authenticated and have the authorization to run the flwr commands on the SuperLink. And that way you can get a production system up and running in a cross-silo setting. I'm curious, as we're talking through this, and I'm learning a lot from you as you're describing it: you've made reference to admin roles, and client and server apps, and SuperLinks and SuperNodes and stuff,
which, you know, kind of in the context of federation, involves networking and stuff like that. So I guess I have a generalized question around that, and that is,
is there any set of knowledge or skills that a user needs to ramp up on to use Flower effectively? For instance, maybe they're coming from more of a data science or deep learning role,
and maybe they haven't done a lot of networking and stuff like that. Are there skills that they need to be able to ramp up into to be most effective at using Flower that you would recommend? What would the expectation on the user be in that capacity? Yeah, that's a good question. Actually, it's a fair question as well. In my opinion, what we're trying to convey is that
users do not need to think about the communication aspect of it at all; everything is handled by the infrastructure. Of course, a user may start to run into cases where the federated learning solution becomes a bit more complicated, with very special cases, and this is where some understanding of the communication protocols and how these are set up
could help as well. And I think for users who are stepping more into a sort of administrative role and want to deploy the SuperNodes or work with the infrastructure, basically the SuperLinks and SuperNodes, there are questions of infrastructure slash DevOps. You have to have some familiarity with deploying this in containers or working with pods, things like that.
But fundamentally, when you first start to work with the framework, you can get started with a vanilla production system without worrying too much about the communication, or needing to know too much about it. And then as you get your feet wetter, you can learn more along the way. Well, yeah, that line of thought, along with something that you said earlier about kind of how
large language models and generative AI have pushed the boundaries of how you communicate data and weights back and forth, and how you can handle larger models with the more recent versions of Flower, and you're releasing a new version in a couple weeks with even more. I'm wondering generally, certainly that's one aspect of it, how this sort of boom in generative AI has probably influenced your,
you know, roadmap, and how you're thinking about things, what people are wanting to do with Flower. I imagine there may be a variety of ways that that's impacting Flower. I was even thinking while you were talking, like, wow, it'd be cool if there was a,
you know, MCP server or something, or helpers on top of Flower, where I could just type in natural language, and that would be a friendly interface to set up my experiments and that sort of thing. So yeah, as one of the core folks working on the framework,
how have you seen this boom in interest around generative AI influence the roadmap and what you're thinking about at Flower, what you maybe envision for the future of the framework, that sort of thing? Well, when you brought up the Model Context Protocol, there have definitely been some interesting conversations recently within the team about looking into that.
As for the impact of generative models, or large language models slash multimodal models, it's one of the driving forces for the Flower framework as well. We really believe that
these state-of-the-art LLMs, as we speak, are running out of data to train on. Back in December last year, Ilya, co-founder of OpenAI, was saying that data is running out. No, data has run out to train these LLMs.
And yes, that's exactly the sentiment that we feel as well. It's the tip of the iceberg. There are tons of data locked in silos that large language models could be either pre-trained or fine-tuned on in order to be made useful. And the way to achieve that is through federated learning. I think this is one of the
key technologies driving the framework. I'm curious to kind of extend that notion a little bit.
We've been so into kind of the generative AI hype cycle for the last couple of years, and now that's kind of moving into combining models in different ways, an agentic focus, and ultimately physical models going out there in terms of interaction. And I know
what I'm seeing out there involves, instead of just having one model, people putting lots of different combinations of models together to get jobs done. Does that in any way change how you should think about using federated learning? Is every model that you might have in a solution just its own one-off Flower implementation, or are there ways that you're thinking about combining models together if they're all using data from, you know, different resources and stuff like that? As we're moving into "my solution has many models in it," does that change in any way how users should think about using Flower or architecting a Flower-based solution? It's a very deep question.
I feel that there are a couple of possible futures here.
There is a future where these agentic workflows, where you have models that are sort of chained together to achieve a certain task, could eventually also be used in concert with federated learning. So I see a future where there is a possibility of that as well. But there need to be some intermediary steps there. And the reason is that these models, when you use them for agentic workflows,
they need to be really optimized for those agentic workflows. They need to be trained on a certain type of structure and also be optimized for it. There need to be proper evaluations for that.
So that's sort of the missing piece. I see a future where, you know, if these two pathways of agentic workflows and federated learning come together, people should think about having strong evals
for these kinds of workflows. And then, knowing that there is a limit to them, once you're able to quantify that, you can look for ways to improve it through distributed learning, such as federated learning. And this is how you rationalize an improvement over agentic workflows.
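The core pattern behind the federated learning Chong keeps coming back to can be sketched in a few lines of plain Python. This is a toy illustration of federated averaging (FedAvg), not Flower's actual API, and every name in it is made up for the sketch: each simulated client "trains" by nudging a shared weight toward its own private data mean, and the server only ever sees weights, never the raw data sitting in each silo.

```python
import random

def local_update(w, data, lr=0.1):
    # Toy local "training": nudge the weight toward this client's data mean.
    # Stands in for real local SGD; the raw data never leaves the client.
    mean = sum(data) / len(data)
    return w + lr * (mean - w)

def fed_avg(client_ws, client_sizes):
    # Server-side FedAvg: dataset-size-weighted average of client weights.
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_ws, client_sizes)) / total

random.seed(0)
# Three data silos with different private distributions; only weights move.
silos = [[random.gauss(mu, 1.0) for _ in range(50)] for mu in (0.0, 1.0, 2.0)]
w = 0.0
for _ in range(50):
    client_ws = [local_update(w, d) for d in silos]   # runs on each client
    w = fed_avg(client_ws, [len(d) for d in silos])   # runs on the server
print(round(w, 2))  # converges toward the overall data mean (around 1.0)
```

In a framework like Flower, this loop is roughly what gets handled for you: the communication rounds, serialization, and the aggregation strategy are built in, so you mostly write the local training step.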
Well, Chong, it's been fascinating to hear some of your perspective, especially on production use of federated learning and Flower. As we draw to a close here, I imagine we'll have Flower back on the podcast in another couple of years, or before. Hopefully this becomes a recurring one. But as you look to this sort of next season,
of either what you're working on or just the ecosystem more broadly, what's exciting or interesting for you, what's top of mind when you're heading back from work in the evening, what's on your mind as you look forward? Yeah, absolutely. I think...
I'm very keen to think about this foundation LLM that is purely trained with FL, with federated learning, and has been shown to be both privacy-preserving and also state-of-the-art.
I think the viewers, and also yourselves, should check this out: we are collaborating with VANA as well in the US. They are looking into data DAOs, and we are very much working on that. So I'm really looking forward to seeing the first
LLM in the world that is trained in an FL way with those sorts of standards. Awesome. Well, yeah, we look forward to that as well. Certainly come back on the show and give us your comments on it when it happens. But thank you so much for taking the time, Chong, to talk with us. Really appreciate your perspectives, and please pass along our thanks to the Flower team and their continued work,
you know, as a team on a great addition to the ecosystem. I will. Thank you, Daniel and Chris. Thanks for having me on the podcast. All right. That is our show for this week. If you haven't checked out our ChangeLog newsletter, head to changelog.com slash news. There you'll find 29 reasons, yes, 29 reasons why you should subscribe.
I'll tell you reason number 17, you might actually start looking forward to Mondays. Sounds like somebody's got a case of the Mondays. 28 more reasons are waiting for you at changelog.com slash news. Thanks again to our partners at Fly.io, to Breakmaster Cylinder for the beats, and to you for listening. That is all for now, but we'll talk to you again next time.