Support for this episode comes from Hookdeck. Level up your event-driven architecture with this fully serverless event gateway. To learn more, go to hookdeck.com/theburningmonk. Hi, welcome back to another episode of Real World Serverless. Today I'm joined by Luciano, who I've known for many years now. I'm sure many of you who are in the serverless world have used, maybe even, Luciano's projects in the past. So hey, man, welcome to the show.
Thank you, Yan. It's amazing to be here. This is one of my favorite podcasts when it comes to AWS and serverless. So it's really an honor and a pleasure to be here.
Thank you, thank you. And I have to say I really enjoyed the short videos that you've been doing with the guys at fourTheorem as well, the 8-bit sprites, nice and short videos. I like those, just on a very specific topic. And I'll put a link in the description as well in case anyone else wants to check it out. You guys have been pretty active, pretty regular with those videos as well. Yeah, I think we're at about 120 episodes at this point.
So we have been doing it once a week for a long while. Recently we moved it to once every two weeks, just because it was too much to keep up with. But yeah, we're still trying to be as regular as possible.
And I've also seen that you've been publishing a lot of content on social media about Rust. I guess you've gone full on into Rust. And you've also got a book you've been working on for a little while, so we can get on to that a little bit later. But I think most people probably know you through your work with Middy. Yeah.
Do you want to spend a few moments talking about your background, how you came into serverless, and where Middy came from? Yes, absolutely. So my background is mostly in full-stack web development. I think at this point I have about 15, if not more, years in the industry. And the majority of that has been building websites and web apps using all sorts of technologies, from
I don't know, .NET, PHP, Java, JavaScript. I think I've seen a fair share of different technologies. And yeah, JavaScript is probably still one of my favorites and the one I try to use the most and probably the one that I'm the most proficient with.
And if we want to talk specifically about Middy, the story of Middy started probably around 2016. And there is actually a link in the Middy documentation that shows a little bit of a timeline, so maybe we can also link that later in the show notes if you want. But yeah, in 2016, I was working for this startup. It was kind of a spinoff of one of the main electricity providers in Ireland.
And they were trying to build effectively a trading platform for electricity for big consumers of electricity. The idea was that as a big consumer, you might have to buy energy up front, but then there are moments where you have more energy than you need and you can kind of resell it. So they were trying to create a very dynamic market for electricity consumption.
And it was a really exciting project, mostly because we had a couple of very good managers who gave the technical team a lot of freedom. And of course, when you do that, what happens is that the team might start to go a little bit wild and a little bit outside the common path, I would say. That was the time when Lambda, I think, had only been out for about a year. So it was still a relatively new and exciting technology. And of course, as a team with a lot of freedom, we were like, okay, we're going to build everything with Lambda.
We didn't even know Lambda, but that was the decision made.
So that's the kind of crazy thing that you might do when a team has lots of freedom. In retrospect, I think it was actually a really good choice. I don't know, if I had been the manager at the time, whether I would have chosen that, but as a team member, it was a great choice. I learned so much about AWS, about serverless, about Lambda. And as it happens, because it's a new technology and you don't really have lots of guidance, there aren't really lots of people talking about it and providing tutorials and examples.
We learned by just building stuff, making mistakes, realizing the mistakes, going back, and building it again. And that was lots of fun. But I think it also got me to really understand how Lambda works. And eventually we came up with this idea that, in order to make our code a little bit more manageable, we needed to abstract it in a little bit of a better way.
And we came up with this middleware-engine type of solution, which initially was all internal. Effectively, what we wanted to do was replicate something like Express, where you write your own business logic for a specific handler, but then you have a bunch of concerns like, I don't know, validation, authentication, serialization, deserialization, which are repeated code that you have to
pretty much copy-paste everywhere. In something like Express, what you would do is have a middleware that you attach, maybe with some configuration, and basically move all of that repeated logic out of the core business logic of your specific endpoints. So basically we wanted to recreate something like that. And at the time, I think there were already some solutions trying to wrap
Express itself for Lambda: basically simulating the event coming in, converting it to an HTTP request, then taking the Express response, converting it to an event response, and doing all of that magic.
But we felt that solution was a little bit too heavyweight for what we wanted to do. And also, we had a few cases where the event wasn't HTTP, so it didn't really make sense to use that approach with Express when you didn't have an HTTP event and response. So that's probably the reason why we built our own middleware framework.
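To make that concrete, here is a minimal, self-contained sketch of such a middleware engine (a hypothetical illustration for this transcript, not Middy's actual API): "before" middlewares run in order, the handler runs, then "after" middlewares run in reverse order, the onion model that Express popularized.

```javascript
// Minimal middleware-engine sketch (hypothetical, not Middy's real API).
function wrap(handler) {
  const before = [];
  const after = [];
  const wrapped = async (event, context) => {
    const request = { event, context, response: null };
    for (const fn of before) await fn(request);               // e.g. parse/validate the event
    request.response = await handler(request.event, request.context);
    for (const fn of [...after].reverse()) await fn(request); // e.g. serialize the response
    return request.response;
  };
  wrapped.use = (mw) => {
    if (mw.before) before.push(mw.before);
    if (mw.after) after.push(mw.after);
    return wrapped;
  };
  return wrapped;
}

// Usage: the handler keeps pure business logic; cross-cutting
// concerns are attached as middlewares.
const handler = wrap(async (event) => ({ statusCode: 200, body: event.body.name }))
  .use({ before: (req) => { req.event.body = JSON.parse(req.event.body); } })
  .use({ after: (req) => { req.response.body = JSON.stringify(req.response.body); } });
```

Middy's real engine is richer (error handlers, early responses, and so on), but the onion model is the same idea.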
And it's something that we used for about one year. I think we were very happy with it; it was working quite well for us. But then after that one year, unfortunately, the company didn't go as well as expected. They didn't get more funding, so it was shut down. And I think at that point, the team was left with a choice. Like, okay, we see that there is a lot of value in what we built. This middleware engine is probably one of the coolest things that we can share with other people in the industry right now. And thankfully, we got permission to
kind of move it out of that organization and make it open source. And that's basically where Middy was created. So that was the name we picked at the time to continue this project as an open source project.
Yeah, and I think nowadays Middy has got a lot of people using it, and even Lambda Powertools has standardized around it for some of the middlewares they've got. The Python version has its own kind of middleware factory, but that's designed more around Python.
And I think I've also seen some other similar attempts for other language runtimes to introduce this kind of middleware engine that can take care of a lot of the common cross-cutting concerns. But that's also something where I sometimes see people go too far. I've also seen
some developers discover middleware as a thing, and suddenly everything gets shoved into middleware. So instead of having a handler with some business logic, wrapped with some middleware to handle cross-cutting concerns, like you said,
wrapping some of the things like, I guess, errors into 400 and 500 responses and things like that, you suddenly find the custom business logic getting wrapped into middlewares. Instead of having shared libraries and shared modules, they all become middlewares composed together, so that you don't see anything in the handler. It's just a bunch of middlewares, and it's really hard to make heads or tails of that.
So I've seen people go too far in that direction as well. But yeah, Middy has been a super successful project, and I'm really happy with how far you guys have got. And I guess Will has been mostly driving Middy in the last couple of years. Is that right? Yes. So actually, going back to the story, what happened next was quite interesting, because
I think I was probably the most excited about Middy out of the original group of people. Everyone else from that company moved on to different kinds of activities at that point. I was probably the one who pushed the most to make Middy open source. So at that point, I kind of became the maintainer, in a way, unofficially; I don't know about officially, but I was the one doing most of the work. And
at the same time, I was working for a company that didn't do a lot of serverless, and didn't even do a lot of Node.js. There was lots of Python, and what we were doing in AWS was more in the Elasticsearch space, making all of that scalable. So I was struggling a lot to keep up with Middy and the serverless innovation while focusing most of my daytime on other topics. I kept going for about two years, and then I felt like
I was kind of burning out. There were so many issues that I didn't have time to answer, and I was feeling a little bit guilty. I think that's a common thing for open-source maintainers: you create something, you think there is value, people love it and start to use it, and then you cannot keep up and you feel responsible for that. So I remember I wrote a very honest issue on the repo saying,
look, I don't think I'm the best person to keep going with this. I feel like I'm burning out. I'm not doing the things that I would expect a maintainer to do for this project. So is there anybody else who wants to take over? And at that moment, Will, who had been one of the main contributors at the time, stepped up and said, okay, I might have more time to dedicate to this, I would love to do it. And I think since then, he has been doing amazing work, and he has brought it through
all the next phases of evolution. I would claim only the privilege of starting it, but Will is really the person who has turned it into the tool that we all use and love today. So I'll take this opportunity to say thank you, Will, for doing all of that.
Yeah, the whole open source model is kind of broken. The only way for it to work has always been when an open source project is backed by a benefactor, a large tech company doing it for marketing or some other need. Otherwise, people are busy enough with their full-time jobs that they have to do this other thing on the side. And people who use these open source projects can also be quite
picky and loud as well sometimes. So it's definitely not easy to balance your professional life, personal life, and all these open source users asking you for changes. So I totally understand why you needed some help and needed to step away from it.
So I guess, in that case, are you still involved with the project in terms of planning the next steps for Middy? I know right now it's at version 5. I'm still stuck on version 4, because the ESM-only requirement of version 5 is still a blocker for me, and I think for quite a lot of other people as well. What's the future for Middy from where you stand?
Absolutely. It's probably a good time to clarify my current involvement. I'm still involved. I'm still officially one of the maintainers. But to be fair, my involvement is fairly marginal. I'm more...
I don't know if I want to call myself an advisor, but that's probably closest to what I do. I've been doing some stuff around TypeScript, mainly because that's probably another problem that currently exists, which we can talk about later in this chat. But yeah, even though I wouldn't consider myself the TypeScript expert that the project would need to really do a good job with TypeScript,
I was the best person to do it at the time, so I did most of the work around TypeScript, and I try to help a little bit when TypeScript comes up again in conversations or issues. So yeah, very limited involvement. It might be a couple of hours every few months, just having a chat with Will and seeing how I can help. But yeah, he's still doing most of the heavy lifting, I'd say.
And yeah, in terms of the current status and what's next, my view (and I'm going to try to report Will's view as well, because I asked him some questions in preparation for this interview) is that right now Middy is in pretty good shape. There has been a lot of effort put into improving the performance and the stability. So I think especially...
From version 3 to 4 is when there was a lot of effort in that area. So even if you're using version 4, you still get most of those improvements. Then from version 4 to 5, the focus was more on making that breaking change and going from dual mode to ESM only. And that has been a little bit of a pain, but I think we're still happy that we took that decision, because we see the future being more ESM, and probably eventually
CommonJS is just going to go away. And just doing that simplifies the maintenance of Middy a lot. We see fewer bugs, in a way, because we used to get lots of bug reports just because of the dual mode and configuring all of that dual mode correctly. So people who wanted to use ESM before didn't really have a good time; now, for people who want to use ESM, it should be relatively straightforward.
There was also an article by AWS, published on the Compute Blog by Dan Fox, a Principal Specialist Solutions Architect for serverless, with some benchmarks showing that if you use ESM, you get a little bit better performance, especially on cold starts. So that was another motivation for us to promote
the ESM way. Now, of course, we understand that not everyone is ready to do the switch, but hopefully if you're building a new project, you can just do it with ESM. And some of the pain points that still exist, unfortunately, are when you combine ESM with TypeScript.
And it's not necessarily anyone's fault, I would say. It's not really Middy's fault or TypeScript's fault. I think it's just that TypeScript has such a wide spread of configurations that you can have. Especially when it comes to selecting the input module system, the output module system, and the ECMAScript version that you want to target, there is so much variability there. It's like a huge matrix of
possibilities. And I think there are lots of configurations that just don't work with how Middy is built for ESM. So we get lots of issues just because people come in with all sorts of TS configs and Middy doesn't work straight away, so: what do I do next?
And I don't know if there is really a universal solution that can work in all the use cases. We have identified a tsconfig that works with the current ESM build of Middy, so that's what we suggest in the docs: if you have any issue, try adjusting these parameters in your tsconfig and see if the issues go away.
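For context, the kind of tsconfig combination that tends to play well with an ESM-only package looks roughly like this (an illustrative assumption, not necessarily the exact settings Middy's docs recommend; check the official docs):

```jsonc
{
  "compilerOptions": {
    "module": "NodeNext",           // emit native ESM import/export
    "moduleResolution": "NodeNext", // resolve via package.json "exports" maps
    "target": "ES2022"
  }
}
```

Alongside this, the package.json typically needs `"type": "module"` so Node treats the compiled `.js` output as ESM.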
So hopefully that fixes the problem for most people, but it is still a little bit of a journey seeing people come in and say, hey, look, this doesn't work, I'm using TypeScript. They don't even tell you what kind of configuration they have, because not many people realize that TypeScript is not really one language, but it's more like
a multiverse of languages: the universe you pick in that multiverse depends on the TypeScript config you have. So yeah, once you get that conversation going and you point them in the right direction in the docs, I think most people are able to solve their problems. We still get a few issues with the
typings, because typing a middleware correctly is actually pretty hard. I didn't realize that when I started to write all the TypeScript, because a middleware is so dynamic in nature: it can change the types of the things coming in and the things going out. So you have to encode all of that logic, and how you chain different middlewares, in the type system; it has to keep track of the whole chain and make sure it gives you the right typing in the right place.
It is extremely hard to do that correctly, and I think we still have a few bugs here and there in our types. Sometimes we get bug reports saying, hey, I'm doing this particular chain and I'm getting a TypeScript error. And then we have to deep dive and figure out that maybe we need to slightly tweak the type declarations to support that particular use case. This has been a little bit of a pain from a maintenance perspective, because
we don't really have a lot of TypeScript experts in the core team. Right now, one of the contributors, Naor Peled, is helping us the most, trying to sort out these issues. He seems to be the one with the most experience, so definitely a big thank you to him. But yeah, if anyone with TypeScript experience is interested in helping on Middy, that would probably be an area where we could do much better in the following releases. So, an open call for anyone who wants to contribute.
Okay, interesting. Yeah, I have come across a few other people, one of my clients in particular, who built their own middleware engine in TypeScript. And the reason they decided to do that was because Middy didn't have that good a TypeScript story at the time; there was no type information as you wrote a new middleware. So they wrote their own thing. And I guess their implementation is much more
constrained, so maybe they didn't have quite the same problems that you had in terms of how the types flow from one middleware to another. But talking about types: Rust. How did you get into Rust, and what's your current feeling about writing Lambda functions in Rust versus, say, JavaScript?
Yeah, I think in general it's probably worth clarifying why I got interested in Rust in the first place. As I said at the beginning, I have quite a varied background when it comes to programming languages. But when the hype around Go and Rust started a few years ago, probably three or four years ago,
I realized that all the languages I'd known and used for most of my career were all kind of high-level interpreted languages. Some of them maybe compile to a virtual machine, but I really didn't have a lot of experience with a lower-level language. So that was the reason I got interested in Rust, because I felt like
I don't necessarily need to become an expert. I want to learn a little bit just to see the difference between what I've been using for 15 years and maybe a language that is a little bit lower level and what are the opportunities that open up then.
And I was actually quite surprised. Initially, the journey wasn't as smooth as I expected, because there is so much to learn. Even just memory management: I studied that at college, but never really used it beyond that. So maybe I knew some things theoretically, but when you try to put them into practice, there is so much more you need to learn. So initially it was a little bit of a bumpy ride, but then I kind of fell in love with Rust and the entire ecosystem, just because
It's a language where they spend a lot of time making sure that the developer experience is good. And even the compiler gives you amazing errors that kind of teach you what to do next to fix your issues, which is something that I wish existed in other languages.
And I also realized that, as a language, it has a pretty wide spread of use cases: you can go from building your own firmware for, I don't know, a Raspberry Pi Pico, to building web applications, even the front end of a web application, so a full-stack web development framework if you want. Because of that, I think it's a language with lots of potential, and the community seems to love it. So after a few years of using it, I started to wonder: okay, what happens if you want to use it in a Lambda?
And then I realized that it's probably one of the best use cases for the language in the cloud, just because the pricing model of Lambda fits so nicely with the characteristics of Rust. You get the best cold starts, the best memory utilization, and generally amazing performance compared to other languages. So you are basically reducing both of the dimensions that drive pricing: memory allocation and execution time.
Plus you get very, very fast cold starts. I've seen an average of about 10 to 20 milliseconds, so you can almost say you don't have cold starts at that point. If you're building an API, you can definitely ignore the cold start for most use cases. So that's, I think, one great proposition for considering Rust for Lambda. I don't know if you want to talk more about that, but that's kind of my sales pitch at the moment.
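The pricing argument can be made concrete with a back-of-envelope formula (illustrative only; check the AWS pricing page for real rates): cost per invocation is roughly memory in GB, times billed duration in seconds, times the per-GB-second rate, so shaving either dimension cuts the bill directly.

```rust
// Back-of-envelope Lambda compute cost per invocation.
// The rate is a placeholder, not current AWS pricing.
fn invocation_cost(memory_mb: u64, billed_ms: u64, price_per_gb_second: f64) -> f64 {
    let gb = memory_mb as f64 / 1024.0;        // memory dimension
    let seconds = billed_ms as f64 / 1000.0;   // duration dimension
    gb * seconds * price_per_gb_second
}
```

This is why a language that both runs fast and uses little memory, like Rust, attacks both pricing dimensions at once.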
Yeah, and I guess the flip side of that is that you have to learn a whole new language. If that's what it takes to build an efficient API on Lambda, then maybe Lambda itself is the problem. Well, I understand all of those arguments, and I do agree with them. But in terms of making Lambda more accessible for more use cases, we need to meet customers where they are. That's why I'm much more excited about LLRT, which of course is written in Rust, implementing all the JavaScript APIs in Rust instead of in JavaScript itself. That's one of the reasons why they're able to have such good performance, at least for right now. We don't know, once they implement everything else, all the missing APIs, and go v1, whether it will still be able to maintain the current cold start performance, which is very similar to what you mentioned: low double-digit millisecond cold starts for JavaScript functions and a very small memory footprint. And yeah, Go and Rust are definitely designed more for systems programming, where you're much more conscious about efficiency, and you want your process to have as small a footprint as possible, so that you leave as many resources as possible available for the user's own code. But yeah, I have played around with Rust before,
way back before it was 1.0, because for a couple of years I went through this whole journey of learning a new language every year. And Rust had this new idea with its ownership system for resources, so that you don't need locks and all these other expensive runtime concepts. So I was really interested in how that works, and I spent quite a bit of time playing around with Rust, partly to understand how that mechanism works. It's very clever, but I think for a lot of people coming from JavaScript, understanding that, and getting used to a strong type system, is a bit of a barrier. I do worry that if we promote "if you want to use Lambda, you should use Rust", that may be quite difficult for people to accept in terms of mass adoption. But I totally get it: Rust is a really good language for performance and efficiency. So do you have any advice for people who are coming from JavaScript and want to learn Rust? I know you've got a book. Is it targeted at that demographic?
I think it is, yeah. So the book is called Crafting Lambda Functions in Rust, and the website is rust-lambda.com. We'll have that in the show notes, hopefully. But yeah, my view, at least, is that it is a little bit of an investment if you don't know Rust already. I think Rust is becoming more prevalent in general; I'm seeing more and more companies adopting it in one way or another. So I think there is also a timing element there. If we have this conversation again five years down the line,
probably people, generally speaking, will be less worried about learning Rust, because it will be more common to see it as a language in the industry at different levels. But right now, you are definitely right that there is a little bit of a barrier: if you really want to leverage all the benefits we described in the context of Lambda, but you don't know Rust, you need to learn a little bit of Rust first. Now, to counter that a little bit, I will say that the amount of Rust you need to learn for Lambda
is probably less scary than if you just say, I want to learn Rust in general to do everything, or even to do systems programming, which is a much deeper thing where you need to be much more conscious of how you use memory; you cannot take lots of shortcuts. But in the context of Lambda... actually, there is a nice talk, maybe I can share the link with you later and you can add it to the show notes, called Easy Mode Rust,
which shows that there are two levels of Rust, if you want: one where you can cheat a little bit, and it's just much easier to use. Maybe you do some memory copies here and there just to make your life easier, things like that, where you are not necessarily using the most performant solution that you could, but at the same time it gives you easier access to Rust. And
you can do that consciously, and then realize, okay, later on I can learn how to use references and make this code a little bit more performant. That's kind of the whole idea. So I think in the context of serverless, you still get amazing benefits even just using easy mode Rust.
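To make "easy mode" concrete, here is a toy sketch (my own illustration, not taken from the talk) of the trade-off: cloning owned values is trivially correct and avoids fighting the borrow checker, at the cost of an extra allocation the borrowing version doesn't need.

```rust
// Easy mode: take an owned String and clone it, so ownership never
// gets in your way. Costs an extra allocation, but is trivially correct.
fn shout_easy(input: String) -> String {
    let mut copy = input.clone();
    copy.make_ascii_uppercase();
    copy
}

// Idiomatic mode: borrow a &str and allocate only the output.
fn shout_idiomatic(input: &str) -> String {
    input.to_ascii_uppercase()
}
```

In a typical Lambda, where I/O dominates, the extra copies in easy mode are usually lost in the noise, which is why it's a reasonable place to start.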
And then down the line, if you really want to invest, and again, it's still just an investment that you need to consciously do. But if you want to invest more, you can say, okay, now I want to become a bit more of an expert in Rust, learn the more advanced patterns. And then next time I write code in Rust, I know I can write more efficient Rust if I need to.
So that's kind of the way I would see it. It is still an investment today. You need to see the long-term game there. You need to be open to learning new things, which of course is not for everyone, depending on your constraints. But if you do all of that, there are benefits down the line. So I would basically tell people: do your own estimate, do your own reasoning, and see if it's worth your time, if it's worth learning and investing in this technology rather than maybe
using other tools that give you different trade-offs. And I'm also really excited about LLRT. If it goes ahead, and if it manages to fill that gap with the Node.js standard library in terms of what they support, if they get really, really close to being able to take basically any code from Node.js and run it in Lambda, I think it's going to become an amazing runtime for Lambda. So I'm really excited to see what happens there. But right now...
It's a little bit hit and miss, because if you are writing a new Lambda from scratch, maybe you can avoid using certain things that are not supported. But if you're trying to port something existing onto LLRT, I've seen that most of the time it doesn't work out of the box. You need to figure out what to tweak, remove some dependencies, rewrite some code. So it's always a little bit of an investment right now anyway.
Yeah, right now they're not ready for production use yet. I don't think there's complete support for the stream API, or a number of other things, so some things just don't work; Lambda Powertools doesn't work, for example. They are prioritizing supporting everything from the AWS SDK. And I think you're right, though: most Lambda functions are just calling a few APIs, calling AWS services.
There's not a huge amount of complex business logic. And I guess if you do have that, then on a per-function basis you can always switch to something you're more familiar with. But for the simple things that are latency sensitive, maybe do easy mode Rust:
just know how to install the AWS SDK, how to call those different services, and what the syntax for that is. But how good is the supporting ecosystem for Rust on Lambda, in terms of deployment and packaging? What do you use: the Serverless Framework, CDK, or something else?
Yeah, that's actually a really good question, and an area where I was surprised to see such a good ecosystem, because it's still a relatively new thing, and I don't think there are that many users leveraging this ecosystem at the moment. So I am impressed by the work that mostly AWS has done, with help from some contributors. There are a few pieces that are really important to mention. The first one is that it's still a custom runtime; it's not like you get a managed runtime. So you still need to
somehow provide the runtime yourself. But that doesn't mean that you have to write the runtime from scratch. There is a runtime that is provided and maintained by AWS that you just install as a library into your project.
And of course, bootstrapping a project, figuring out how to install this particular runtime, and coming up with a skeleton or boilerplate for a Lambda might be tricky on its own. If you've never done Rust, that might put you off straight away. So they also created a tool called Cargo Lambda, which is an extension of Cargo, the default package manager you get with Rust. So when you install Rust, you get Cargo, which is like the equivalent of npm for Node.js.
And when you have Cargo Lambda, you can just run cargo lambda init, and it gives you a guided experience where it asks you something like: do you want an HTTP Lambda? And if you do, what kind of events do you want to support?
If it's not an HTTP Lambda, what other events do you want to support? Is it SQS, SNS, whatever it is; you can pick from a list, and it creates all the code that you need, with examples, to start from there. And it automatically installs all the dependencies that you need, including the runtime. So I think that's an amazing experience, because you literally don't need to know anything about Rust and Lambda. You just need to install this tool and run one command,
and then you have something that you can deploy straight away. Of course, you'll probably want to change some code and tweak it to make it do what you want, but you are not starting from zero, which would be a much larger barrier to overcome for most people. And this tool actually does a lot more than just scaffolding. It also does
local testing: it runs a local environment that automatically watches your code, so if you make changes, it recompiles your code, and it has a kind of emulator mode where you can invoke your function locally.
And it can also build for different targets. So regardless of your operating system and CPU architecture, you can target, for instance, Linux ARM, which for Lambda is generally the cheapest, and can even be a little more performant depending on the use case. So that's another win, because you don't really need to worry about how to compile it correctly. Just run the build and it does the magic for you.
And finally, the other thing is that it integrates really well with SAM. There is also a construct, I think, for CDK. And for Terraform as well: serverless.tf, the project by Anton Babenko, I think has something in that regard to make it easy to work with Rust. I haven't tried it yet, but that's what I'm hearing.
So basically the idea is that when it comes to publishing, you can just run cargo lambda deploy, and it builds and deploys for you. But that deployment doesn't give you a proper infrastructure-as-code setup, so generally you don't want to use that; you want to use the integration with SAM or CDK instead. Behind the scenes, it uses Cargo Lambda to do the build, but then the publishing and deployment of the rest of your infrastructure, wiring everything together
in the right order, respecting all of the dependencies is done with infrastructure as code. I think when you combine using Cargo Lambda with either SAM or CDK, then you get an amazing experience. Like you just need to know that little bit of Rust easy mode and then you get lots of benefits. Maybe I'm overselling it, but that's the way I'm seeing it right now.
Right, gotcha. So use SAM to wire your function up against everything else, because the function is just one part of your overall architecture, and then use Cargo Lambda to do the building, initializing dependencies, generating code samples and things like that. Okay. And actually, just to complete that statement, SAM has a direct integration with Cargo Lambda, so it's not like you need to do the two steps individually. You just need to tell SAM, this Lambda is a Rust Lambda, so compile it with Cargo Lambda. There is a little bit of metadata you need to add to your function definition, and then you just do the usual things you do with SAM. You can even run the local HTTP gateway, and it's going to build and run your Lambda and allow you to test it with an HTTP endpoint that works locally. So you get kind of the best of both worlds without having to worry too much about how the whole build happens behind the scenes.
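The metadata in question is a build-method hint on the function resource. A sketch of what that SAM template fragment can look like, assuming the `rust-cargolambda` build method (resource and path names here are made up for illustration):

```yaml
# Sketch of a SAM template fragment for a Rust function built with Cargo Lambda.
Resources:
  MyRustFunction:
    Type: AWS::Serverless::Function
    Metadata:
      BuildMethod: rust-cargolambda   # tells `sam build` to delegate to Cargo Lambda
    Properties:
      CodeUri: ./my-function          # the Cargo project directory
      Handler: bootstrap              # custom runtimes run a binary named `bootstrap`
      Runtime: provided.al2023
      Architectures:
        - arm64
```

With that in place, the usual `sam build`, `sam local start-api`, and `sam deploy` commands work as they would for any other runtime.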
Right, gotcha, gotcha. Okay, that makes... Well, that sounds much better than I thought in that case. What about cross-cutting concerns? Is there going to be a MIDI for Rust in the pipeline? Yeah, that's actually a really good question. It's something that I've been thinking about for a while. And to clarify, I think MIDI is actually two things together. The middleware engine is what we call the core, and the package is actually called core.
That's one component, and it's kind of the main thing you need to enable the ecosystem. But then there is also an ecosystem of official middlewares that solve what we have seen to be the most common concerns people have. Now, in Rust, there is this concept of...
what they call a service trait, which comes from the Tokio ecosystem. And Tokio is the async/await runtime that exists in Rust. Without going too much into detail, basically what they did is define an interface that represents the request/response pattern in the most abstract sense. And if you implement that, you automatically have a middleware engine that is built into the library.
So when they build the Lambda runtime, they leverage all this ecosystem. So your handler function is already leveraging this interface. So there is automatically a middleware engine that you can use. And you can actually use middlewares that exist even outside Lambda. So if you have a concern like,
I don't know, processing some inputs in a certain way. And that concern is the same maybe in a containerized application like a regular web server. You could be using the same middleware in both Lambda and that particular application just because it's part of the ecosystem more than something that is Lambda specific.
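The idea of "a middleware is just a service wrapping another service" can be sketched in a few lines. This is a deliberately simplified, synchronous stand-in for the real trait (the actual one, Tower's `Service`, is async and uses `poll_ready` and futures), just to show how one request/response interface gives you composable middleware:

```rust
// Simplified, synchronous sketch of the service-trait idea.
// The real trait lives in the Tower/Tokio ecosystem and is async;
// this version only illustrates the composition pattern.
trait Service<Request> {
    type Response;
    fn call(&mut self, req: Request) -> Self::Response;
}

// The innermost service: your handler / business logic.
struct Handler;

impl Service<String> for Handler {
    type Response = String;
    fn call(&mut self, req: String) -> String {
        format!("handled: {req}")
    }
}

// A middleware is just another service that wraps an inner service.
struct Uppercase<S> {
    inner: S,
}

impl<S> Service<String> for Uppercase<S>
where
    S: Service<String, Response = String>,
{
    type Response = String;
    fn call(&mut self, req: String) -> String {
        // Pre-process the request, delegate to the inner service,
        // then post-process the response.
        self.inner.call(req.to_lowercase()).to_uppercase()
    }
}

fn main() {
    let mut svc = Uppercase { inner: Handler };
    println!("{}", svc.call("Hello".to_string())); // prints "HANDLED: HELLO"
}
```

Because the wrapper only depends on the trait, the same middleware works around any service, whether that's a Lambda handler or a regular web server, which is exactly the reuse described above.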
Now, of course, with Lambda, maybe you have different kinds of events, so it might not always work or make sense to use all the middlewares that you have somewhere else. But generally speaking, it's literally the same library, so you could be reusing a lot of code, even in different contexts. What is missing, I think, is that the rich ecosystem we have in MIDI is not yet available in Rust. So maybe there is scope for building a project around that.
It's not something that I'm actively working on right now, but I think it might become more and more of a need if more and more people build Lambdas in Rust, just because these concerns are going to become so common that eventually somebody is going to build a middleware for each particular use case. Right, yeah, because I imagine that would be quite a useful piece in the puzzle for easy mode Rust Lambda users.
Yes, yeah, for sure. Okay, that makes sense. And I guess one last thing I want to ask you as well is the fact that, okay, you are obviously part of the AWS Heroes program, but you're also part of the Microsoft MVP program as well. And I think you have been for quite a number of years, because you were involved in the .NET community, and you continue to be involved with TypeScript and all of that as well. So any...
I mean, most of us are either in one program or the other; not many people are in both. Is there anything you can tell us about how the two programs compare, in terms of the kind of things they offer, or maybe perks, or the things that you like from one or the other? Yes, that's a really good question. I don't know if it's easy to compare the two; they are very different.
Just because I think the Hero program in AWS is a little bit of a mystery. How do you become a Hero? There isn't really a formal process. Of course, when new Heroes are announced, everyone agrees that those people deserve to be Heroes. So,
there is some sort of criteria determined by the contributions that you make to the community: sharing knowledge, projects, whatever. Every Hero, you can definitely recognize them for something. I'm not trying to say that it's not fair. It's definitely fair. It's just that it's not always clear: if you wanted to be a Hero, what would you need to do to become one? I don't think there is an easy answer to that.
Whereas in the Microsoft MVP program, they have a much more structured system, where basically you need to be nominated by somebody, and then you get access to a form system where you have to explain all the kinds of activities that you have done in a specific area. Then Microsoft is going to evaluate you on those. So the process is a little bit more straightforward, and that's maybe one of the differences I've seen between the two programs.
The other thing is that in the Microsoft one there are, of course, different areas, and most of them are specific to Microsoft products. The one that I am on is a more generic one that recognizes people who contribute to the tech world more broadly, so they recognize things like open source activities, public speaking, articles, sharing knowledge, all that kind of stuff.
And it's funny that every year they have to renew this title, and they ask you again: can you please fill in this form where you mention what you think are the main contributions you have made that year?
And most of the time I'm putting AWS related stuff there, and they still accept it. So I don't know if that's going to change in the future, but most of my focus, I think, is still going to be more AWS than Microsoft related stuff, and that seems to be fine for the particular type of MVP program that I'm part of. So yeah,
I don't know, maybe that gives you an idea of how the two programs compare with each other, but yeah, they tend to be a little bit different in many ways.
Right. And I guess AWS has got the Community Builders program as well. In terms of process, that feels a little bit closer, with the self-reporting aspect of it. But I think they have introduced some element of that renewal process for the Heroes program as well. Every two years, they ask you if you still want to be involved.
Okay, but what about in terms of the recognition? One of the nice things about the Heroes program is that they engage you with the teams, they give you access to previews of different services, and they encourage feedback from the Heroes on new features that teams are working on. Does the Microsoft MVP program do as much? Do you get something similar?
Yeah, there is something similar that happens actually quite frequently. You get invited almost weekly to different meetings with different teams across the entire Microsoft organization, wherever they think you might be interested. To be fair, they rarely apply to me, because they are more often than not related to Azure teams and services that I rarely use, or never use at all. So I personally find more value in the AWS ones, just because I'm doing much more AWS than Azure, which I only touch very occasionally. So maybe it doesn't apply 100% to me, but if somebody was more focused on Azure, I think it would be extremely valuable to have that kind of experience, where you can see the previews of new features, talk with the teams,
very much in line with what you get by virtue of being a Hero in the AWS program. Right, gotcha. Okay, that makes sense. Yeah, so I think that's everything I've got. Luciano, is there anything else that you'd like to share before we go?
Let me see. I had some notes. I think we covered most of the things that I wanted to share. So thank you so much for the opportunity. I just want to maybe mention that, for MIDI specifically, we had two sponsorships: one from fourTheorem, the company that I work for, and one from AWS itself, which, by the way, was amazing. It's not very common to see AWS sponsor an open source project, so I was amazed by that.
But right now the sponsorship is basically going directly to Will, who is doing, anyway, 99% of the work. So it's only fair. What I will do is, if you don't mind, suggest people that want to support or companies that want to support the project to consider donating to Will, supporting him as an open source contributor, because basically 99% of his open source time goes to MIDI anyway. So you are effectively supporting MIDI at that point.
Right, okay. So in that case, how do people sponsor? Is it a GitHub Sponsors thing, or is there a different link that you have to go to? Yeah, we have a link in the docs, but in fairness, depending on the amount, it might or might not make sense to go through GitHub Sponsors. I think above certain amounts, GitHub Sponsors doesn't make it easy if you want to do a one-off big payment rather than a recurring payment every month.
So we've had different experiences in the past, but if you just go to that link and engage with Will, I'm sure you can figure out the best solution. Right now, if you just want to donate a small amount every month, GitHub Sponsors is probably the easiest, and you can do it with just a few clicks. Okay. I think I found the link to the GitHub sponsorship. It's part of the repo and...
Yeah, okay. I will find the link for one-off sponsorship payments as well, and I'll put that in the show notes down below so you can have a look. And of course, there'll be links to the MIDI project itself, as well as to Luciano's book and AWS Bites. Okay, I think that's everything I've got. Luciano, again, thanks so much for taking the time to talk to us today. And yeah, hope to see you soon, I guess.
Yeah, likewise. And thanks a lot for having me. It was a blast. And yeah, you have to send me the check later for all these links. No worries. Take it easy, guys. I'll see you guys next time. Okay, bye-bye. Thank you to HookDeck for supporting this episode. You can find out more about how HookDeck improves developer experience for building event-driven architectures. Go to hookdeck.com slash theburningmonk.
So that's it for another episode of Real World Serverless. To access the show notes, please go to realworldserverless.com. If you want to learn how to build production-ready serverless applications, please check out my upcoming courses at productionreadyserverless.com. And I'll see you guys next time.