This episode is supported by Momento, a serverless cache that you can trust and only charges you for what you use. To learn more, visit gomomento.co/theburningmonk. Hi, welcome back to another episode of Real World Serverless. And today we're joined by Matthieu Napoli. Hey man, how are you doing? Hi, thank you for having me. I'm doing really great. I hope you are as well.
Yeah. And we, yeah, we finally caught up in person at the recent AWS Hero Summit in Seattle. And I've been really interested in Bref for a little while. I've heard a few people talk about it publicly, and I spoke with, I think,
one company that was using Bref, and they were really happy with it. And so yeah, I wanted to talk to you about Bref, and maybe we can start with this PHP framework for building Lambda functions. And you mentioned when we had a conversation earlier that PHP has some interesting life cycles, which make it quite different from other languages. Before we get into that, do you want to take a moment and just introduce yourself?
you know, what your background is, how you got into serverless. I know you used to work for the serverless framework for a little bit as well. So yeah, tell us a bit about yourself. Sure. So yeah, to start off, I live in France. And at the moment, I'm a consultant. I created my own company, mostly based around, you know, working with Bref and creating serverless applications with PHP.
Yeah, some background. I'm a developer. And the main thing that got me started with serverless is that I never enjoyed doing the sysadmin stuff. I have worked with servers. I have worked with containers. I just don't wish that on anyone else. I wish it was as simple as, you know, I write my application and I deploy that into the cloud and it just works. And obviously, we both know that it's not quite as simple as that, even though it should be.
But that's how I got started into the serverless AWS Lambda movement, if I can say. And I've almost always worked with PHP. So when I learned about Lambda, I wanted to run PHP on Lambda. That was back in 2017.
and PHP was not supported natively by Lambda. It still isn't supported natively by Lambda, so I started a project called Bref, which is an open-source PHP runtime that basically lets you run PHP on Lambda and run PHP frameworks on Lambda. And so it started out as an open-source project. At the time I was employed, I had a job, then I quit the job, and I wanted to start working
ideally full-time on the project, but obviously you have to earn money somewhere. So I started doing some freelance jobs, consulting. And so I've been doing a combination of that since 2017. I did work for a while on the serverless framework itself at the company behind it. And yeah, it was two years, I think, two or three years, which was really, really interesting, to work on such a large open source project.
Right. And I guess with PHP, you've got an execution model that's quite similar to Lambda's, in that one request is processed on its own. There's no concurrency in that sense, in that single process.
I guess for folks who are not familiar with PHP, can you just explain how this lifecycle with PHP differs from other languages like, say, Node or Java, which take a different approach to concurrency? Yeah, I think it was kind of a meme at the time that people in the PHP community were discovering Lambda and they were saying, oh, this is just like PHP, but for all languages. And
People in other communities would discover Lambda and say, "Oh, this is just the PHP model applied to other languages." But it's a bit more complex than that, obviously. So the majority of PHP applications running out there run with an execution engine called PHP-FPM, the FastCGI Process Manager.
The idea is very simple, is that in PHP, your code runs and handles one request at a time.
And so if you have 10 requests in parallel, you have 10 sub-processes, like PHP sub-processes. So one handles one request at a time. So when you write your code, you never think about, oh, is this variable or this memory space being accessed or used by different requests? Or if I set a global, I don't know, like the user session in a global variable.
maybe I will leak the session between different requests. That doesn't happen with PHP, and so that makes the experience quite simple to get started with when programming applications. And so with Lambda, you have kind of the same idea, where one Lambda instance handles only one request at a time, or one invocation at a time. So that's why I joke and say that PHP is
one of the best languages for serverless because of this execution model. But it's kind of a paradox that it's one of the only major languages that is not supported natively by Lambda.
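The shared-nothing model being described can be sketched in a few lines of PHP. This is a hypothetical illustration of the language's per-request lifecycle, not Bref code:

```php
<?php
// Under PHP-FPM, each request starts with a fresh global scope: the
// script's variables are torn down when the request ends, so nothing
// set here is visible to the next request.
$counter = ($counter ?? 0) + 1;
echo $counter; // always 1, even after thousands of requests

// Lambda matches this in spirit: one invocation at a time per instance,
// so request-scoped state never races with a concurrent request.
```

This is why PHP code rarely has to worry about thread safety or leaking session data between users, the property the guest is comparing to Lambda's one-invocation-per-instance model.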
Yeah, at least not natively. I guess that's where Bref comes in, right? So with Bref, I get that you've got a way to structure your application so that it's going to deploy to Lambda. But does it also come with a custom runtime for running PHP applications as well? Yeah, exactly.
Just for fun, the history of Bref: at the time on Lambda, there was no such thing as custom runtimes. There were a few runtimes that were supported, like Node and Python and some others, and you couldn't do anything else. And so the first Bref versions actually were Node functions with a Node script
That would execute PHP as a sub-process and pass the request as a CLI argument and get the response as a CLI argument back. Which is, in a sense, what was done many, many years ago with Apache, with CGI, I think it was called, which is not really efficient, I agree. But, you know, it works. And I know some companies actually put that into production, even though it was really experimental at the time.
But that's how it used to run. Now it's very different. Obviously, Lambda introduced the concept of custom runtimes. So you can add support for any language.
on Lambda. And that's what Bref does. So instead of you compiling PHP, like the binary for it to run on Amazon Linux, and for you to write all the glue, then Bref provides that for you. And so you have what is called a custom runtime, which is also called a layer. It's a zip file containing the PHP binary, containing the
the glue that starts the PHP binary, starts PHP-FPM, connects with Lambda and the custom runtime API, and does all of the wiring needed. And so if you are a PHP company or developer, you don't have to think about how do I run PHP on Lambda. You can just use that custom runtime. It feels like a natively supported runtime.
Right, okay. Yeah, and I've been on the website. I've seen that the syntax you use to configure your functions and your application is actually very similar to the serverless framework. I guess you've taken some of the inspiration from it on how to structure your application and specify your functions.
I guess while we're here, do you have anything that you can maybe show us, so that folks who are not familiar with Bref can see what it's like to work with Bref and how to deploy your functions and so on? Yeah, sure. I will share my screen. And actually Bref at the moment uses the serverless framework. The way it works, so this is the same YAML syntax that you have in the serverless framework. It means you have to install
the serverless framework on your machine, so the serverless CLI. And this is an example of serverless.yml, the same file. And the main difference is, so how do you support PHP when it's not supported natively? The serverless framework can deploy functions in Node.js, in Python, and in any language. It's just that for the runtime, you don't have PHP natively. So this runtime here, php-83-fpm,
This is something provided by the Bref plugin for serverless framework.
And so you have to include that line here. And this Bref plugin will actually extend the serverless framework and add support for deploying the PHP runtime. So that is pointing to, I guess, a local plugin. Do you also publish the plugin to NPM as well? Yeah, exactly. That's exactly what people think when they think about serverless framework plugins. You have to publish them to NPM and install them with NPM. That makes sense. It's a Node ecosystem. What I wanted to do is provide...
For PHP developers, an experience where you don't even have to think about NPM. So vendor is actually the same thing as node_modules, but for PHP. And so you install Bref with Composer, which is like NPM for PHP. So instead of package.json, it's composer.json. I install Bref, and with this installation, I have the code for the runtime. I have PHP code, but I also have, if we open that up,
So I have like PHP code in there, but we also have an index.js file, and that's the serverless framework plugin. And so I can use a relative path, because I know it will always be this path thanks to how Composer works. And I can skip, you know, node_modules and NPM.
Right. So this is just so that folks who are working with PHP don't really have to think about Node. They just have to, I guess they still have to have Node so they can install the serverless framework. But after that, when it comes to managing plugins and things like that, you kind of steer them away from the, I guess, the serverless framework ecosystem. You just want to have them work with Bref. Yep. And so most people don't understand how this works or what this does.
And I think, like what I say is, you don't really have to understand. It just works. It's just like if you had a Bref package on NPM, you would just use, you know, Bref. It's just that Bref is not distributed on NPM, because it's a PHP package. Right. Gotcha.
And so you can see here, I have a warning, because the IDE has serverless.yml validation, and it doesn't recognize PHP because that's not supported. But thanks to the plugin, it's actually working. It's actually supported. So I have a warning that I can safely ignore. And the rest is honestly very, very like the usual. If you've written serverless applications using Node.js, I can use url: true, you know. Yeah, it's all very, very familiar.
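For readers following along, a serverless.yml along the lines of what's being shown on screen might look roughly like this. The service name and region are placeholders; check the Bref docs for the exact runtime identifiers for your PHP version:

```yaml
service: demo

provider:
  name: aws
  region: eu-west-1

plugins:
  # The Bref plugin ships inside the Composer package, so a relative
  # path works here instead of an NPM-installed plugin
  - ./vendor/bref/bref

functions:
  web:
    handler: index.php
    runtime: php-83-fpm   # PHP-FPM runtime added by the Bref plugin
    url: true             # Lambda function URL, as shown in the demo
```

The `php-83-fpm` value is what triggers the IDE warning he mentions: the serverless.yml schema doesn't know about it, but the plugin does.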
And I guess this is based on the serverless framework version 3, because I know with version 4, they're making a whole bunch of different changes. Yes, this is actually a topic in the Bref community, like what do we do with these versions? So Bref currently works with v3 and v4, and I want to support both, and I want to continue to support both, for the reason that from the perspective of the users of Bref,
They don't interact directly with serverless framework. And so some of them will be okay with migrating to version 4 and using the product that is no longer open source or fully open source and maybe paying a license, which is very fine. Like I support companies that want to support open source projects. This is great.
Some will want to stay on v3, and I respect that, because it wasn't really part of the deal when you got started with Bref. So what I want to do is provide long-term support for serverless framework v3 and basically have people keep using Bref and serverless.yml for as long as they want. And on top of that, I'm also working on... So the thing is, right now what you're seeing is the current Bref experience. You're writing serverless.yml, and it works really great.
And for those who have used the framework, it's really great at deploying applications. Like here, I can deploy a single file, but I can deploy a Laravel application, a Symfony application, a huge application if I want to. It really starts to become complex when you have maybe queues, but mostly databases. Like most PHP applications have a relational database. Maybe they want a cache with Redis, maybe a VPC.
And so what I want to do is provide an alternative to serverless.yml that makes it much simpler to deploy PHP applications, because most PHP applications, you know, are very standard. Like we talk about the LAMP stack: Linux, Apache, MySQL, and PHP. So I see most Bref users want a database. That's,
that's like 90% of them. So I want to provide, that's something I'm working on at the moment, but I want to provide an experience that is even simpler than that. But anyway, that's in the works. So I don't want to talk too much about what's coming, but what do we have now? This is it.
Okay. Yeah. The question is, do you use like a bref command, or do you use serverless deploy? Serverless deploy, and it will do its thing. I already deployed just before starting the podcast, just so that it's a bit faster. But yeah, you get the usual experience, and you get the URL here. And if I open that up, and I'm crossing my fingers that, yeah, it all goes well. This is the PHP script I have
here by default. And if we look into that, it's a very simple PHP file. So it's actually, if you've seen PHP in the last 20 years, it looks like that: you mix up PHP language and HTML. This is just for the demo. Most of the time it's a real framework with proper code, and you have templates and everything. It's much cleaner than that, but that's just for the demo.
Right, okay, gotcha. So in that case, for folks who are, I guess, who are used to coming from PHP background, how...
what do they have to do to rewrite the application? Because that's one of the challenges often we find with other languages as well. You know, you come from Node.js, you're writing, you're used to writing express applications, suddenly Lambda forces you, or if you want to follow the one single responsibility functions, you end up having to write your application in a different way. I guess with PHP,
You probably also have a similar thing, where you are used to writing web applications using a framework. Now with Bref, you still have to kind of structure your application in a certain way, so that you've got the index.php file and maybe other .php files for different endpoints.
You know how traffic spikes can cause outages, right? Our sponsor this week, Momento, can help. Do you need easy-to-provision infrastructure that is robust and responsive even when you're under a huge amount of load? And do you want caching that can work at scale without the hassle of managing and scaling a cache cluster?
There's no need to be concerned about a single node or a cluster of nodes getting overwhelmed by a sudden spike in traffic. Momento gives you the fastest time to market and the most reliable performance at scale. So are you ready for your next traffic spike? Check out gomomento.co/theburningmonk for more information today. Okay, back to the episode.
Do you also support the Lambdalith model, where you can take your existing, I don't know, whatever popular PHP web framework, and just run your code inside a Lambda function?
Yes. And so this is where, I don't know if PHP and its ecosystem are different from the other languages, but I've seen, honestly, lots of success with running monoliths on Lambda.
And I know that for a long time, I was really following what best practices I heard about from different people in the community. But it just came back to watching people use Bref at scale, building real applications. And there was so much success with taking either a legacy application. I have a case study about that, which is really interesting. We can get into that later. But taking a legacy monolith or
or even like a new application written with Laravel or Symfony, like those popular frameworks, and you put that inside one Lambda function. And so the handler file that you see, index.php, it's actually the same here. With Symfony and Laravel, there's usually one entry point, which is index.php, and then you have the framework do the routing. So you don't use API Gateway for the routing. You have one Lambda function, and it runs the whole application: API, website, whatever.
And that works really well. I've seen some people try to split it up. And I'm talking about HTTP routes, but it's usually a lot of work. And when you do that, you step outside of the frameworks. And so I guess in languages where maybe...
I know I've worked with Express.js, for example, and the layer of features provided by Express.js is useful, but they are not the same kind of features you have with Laravel, for example. So if you replace Express.js with API Gateway, and its routing and its middlewares with authorizers, you get similar value, and you don't lose a lot by doing that. But if you replace Laravel with API Gateway, you lose a lot.
So I think with PHP, there's more incentive to stay with one Lambda function, use a framework, and it works really well. There is obviously one exception to that, which is when you want to handle, you know, jobs, like queues with SQS. You can have multiple queues. You can be using SNS or EventBridge for communication between microservices. That works really well
in PHP. In these cases, you have multiple functions. And usually the pattern I see is one function for the API or the website, and then one function for handling queues, one for EventBridge events, and so on.
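The split described here, one function for HTTP and separate functions for asynchronous work, might be sketched in serverless.yml roughly like this. The handler names and the queue ARN are placeholders, and exact runtime names should be checked against the Bref docs:

```yaml
functions:
  # The whole Laravel/Symfony app behind a single HTTP entry point:
  # the framework does the routing, not API Gateway
  web:
    handler: public/index.php
    runtime: php-83-fpm
    events:
      - httpApi: '*'

  # A separate function consuming background jobs from SQS, using the
  # non-FPM runtime intended for event-driven invocations
  worker:
    handler: worker.php
    runtime: php-83
    events:
      - sqs:
          arn: arn:aws:sqs:eu-west-1:123456789012:jobs   # placeholder ARN
```

The FPM runtime translates HTTP events into normal PHP requests, while the plain runtime hands the raw event (an SQS record batch here) to your handler code.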
Okay, let's dive deeper on that, because I really like the fact that you mentioned that if you are using Express.js and you are building an application with the right level of modularity, so you're not just putting all of your
business logic into request handler directly in your ExpressJS or Fastify or whatever framework. I've worked for companies in the past where they couldn't move out of that open source web framework.
because there's no separation in the code. There's no modularity. All of your business logic is right there in the request handling bit. Instead, if you have your own, I guess, abstractions that handle, say, a create-user command or whatever, you've got your own abstractions for those different endpoints that ultimately call into your business logic. If you're doing that, then
your code is very much similar to what you have in Lambda, with individual functions for individual endpoints. And API Gateway kind of replaces a lot of the configuration you have in your Express app, and the cross-cutting concerns, like authentication. The middleware gets replaced by things that API Gateway does for you. But I didn't quite understand what you meant by... I understand what you said, okay, that you don't lose too much when you do that with, say, Node.
But it's not the same as Laravel or Symfony. That bit I didn't quite understand. What other things do you get with those frameworks that you lose when you do the single responsibility functions with Lambda? Yeah, that's a good question. So I won't be exhaustive, and obviously this is off the top of my head. But if I just think about authentication,
I've been a long-time Symfony user. I've used Laravel more and more lately, and I'm just blown away by the amount of features you have out of the box with Laravel. And so if we just think about authentication, and if we use Laravel for authentication and dealing with users, out of the box you get, obviously, the login and logout forms, and
authentication with passwords, but you get the password rotation as well. Passwords are safely encrypted and hashed without rotating secrets. You have multi-factor authentication out of the box. You have team management. You have forgotten passwords with all the emails that are being sent for you. You can have some rate limits on the authentication pages.
There are just so many things you get for this single bit, which is authentication. And you have obviously the permission system, where you can set permissions on different routes and have middlewares to authenticate, like say these routes are private, these are public pages. And you have roles, so you can say for certain users, you can access this path and not this path. So when you look at just this part, you have so much with Laravel. And if you drop
the authentication away from, you know, the whole request flow of your PHP app, then you lose all of that.
And if you look at all the other middlewares or, you know, HTTP logic that you can get with Laravel, yeah, that's something I don't want to just drop. And I'm not saying this is a decision that is very simple for everyone, or that it's always the same, but I see lots of smaller or medium companies where using the framework, and maybe not being decoupled from the framework,
makes sense because you gain so much by using the frameworks to its full potential that it's worth it. In some other cases, obviously you may want to decouple from the framework. And to me, it's the same logic as decoupling from AWS. Some companies might be like, oh, do we need to have an abstraction layer between our code and
the AWS SDK and the EventBridge event message format, or even the SQS behaviors, and anything that AWS provides? Do we need to abstract away from that? I'd say it depends, because if you just start using the AWS service, then you can get started and it works. Yeah, depending on the company and your goals, you may want to decouple, or you may want to use the thing immediately and get its full benefits.
Right. Okay. So I guess this kind of reminds me a little bit of the kind of things you get with, say, using Amplify, because Amplify gives you additional directives and things like that on your GraphQL schema so that you can model some of the fine-grained access control bits.
which you may otherwise have to implement between Cognito and your own resolver code. So by modeling your domain, Amplify kind of gives you more tools that can do more for you. I guess it sounds like a similar thing here as far as authentication is concerned, that just the middleware itself provides more out of the box, including the login and logout screens, which is something that you have to
build yourself if you want something that's more, I guess, brand compliant, where you've got your own logos and colors and things like that, which is not as easy to do with the Cognito hosted domains. Okay. So we mentioned, I guess, Laravel a few times. If folks are using Laravel, there's also Laravel Vapor as well. So what's the kind of decision there? How do they choose between, say, using Bref versus Vapor?
Yeah, that's a good question. Bref is a combination of things. Vapor is a combination of things. So to clarify, you have the PHP runtime itself. In both cases, it's almost the same thing. It works very similarly between Vapor and Bref.
I don't think there are differences that people may notice. The main difference will be with the experience of deploying, I would say. Vapor is not based on CloudFormation, and it's not based on serverless.yml or something like that. You have a bit of a YAML config file, but most of the deployment happens with their... They have a SaaS. It's a SaaS deploying into your AWS account. And so you have a UI, and you go through Vapor to deploy into your AWS account. And
Yeah, that's about it. With Bref, you have the serverless.yml file and you deploy with CloudFormation serverless framework into your account. Something I didn't mention as well is that Bref actually works with CDK. We do publish some CDK constructs. It works with SAM or even Terraform because it's, again, you have the runtime and then you have
whatever tooling you want to have on top for the deployment. So if you are not interested in the serverless framework, you can use those things. So yeah, the main difference would be that Bref deploys from your machine to AWS, while Vapor deploys from the SaaS into AWS. And because they do that, they have the ability to,
for example, provide some metrics, a dashboard where you can view some information about the applications that you deploy, which is something, honestly, I really like. The experience with Vapor is something I aspire to have at some point with Bref. So what I did, a couple of years ago, I released the Bref dashboard, which is an alternative to the AWS console where you can view your applications, logs, metrics, etc.,
But again, since I don't have a SaaS, you have to run the dashboard locally on your machine, and it uses your local credentials and connects to AWS and shows whatever you have access to. Does that make sense? So the model is a bit different here. And I actually... Yeah. Go ahead, go ahead.
And I guess because you're using server framework and therefore CloudFormation, you also get additional things that CloudFormation gives you, things like being able to set outputs and exports so that you can have multiple applications that potentially reference each other because you're able to create those, say, CloudFormation output or create things like SSM parameters so that you can share your API URL with other services that may want to call your API
or reference other services, pass information around like shared VPC IDs, security groups, and things like that. Whereas I guess Vapor is more built around Laravel itself, so less about the CloudFormation. So if you want to bring in other things into your application configuration, do they let you, say, configure a DynamoDB table as part of the application, or other things like that?
No, but you own the AWS account. So let's say, and I've seen that several times, people start with Vapor and they have the Laravel application working, you have the database, and it's working fine. But then you want to step out of just the LAMP stack, basically, and use EventBridge or SNS, or even SES and DynamoDB and whatever,
you have to do it manually. You have to figure out AWS on your own. Then you're completely outside of Vapor. And so I would love to have something as good as Vapor in terms of experience, but with some constraints. That's my goal, you know, my North Star is that experience, combined with
finding a good solution so that you are not limited to having one SaaS having admin access to your account, which is a blocker for some companies, and staying in CloudFormation, and providing you the tools to more easily step outside of the base setup. Deploying the Laravel app is one thing, and then you want, as you said, DynamoDB tables or whatever. You can do that without having to start from scratch with AWS.
That's something I'm exploring with something I call Bref Cloud, which is a project I'm experimenting with at the moment, where you have a SaaS that can deploy in your account, but you can also self-host things, which is a way for me to solve the problem of third-party access to the account. And it's all based around CloudFormation. So you can have, just like in serverless.yml or SAM or CDK, you can bring your own config
or constructs, and deploy DynamoDB tables or whatever you want. So you can start easily and then expand without having to restart from scratch. That's kind of my vision. I'm trying to keep these constraints in mind to find a good experience. Because the audience for Bref is not, I think, the same as what you may be experiencing.
Some Bref users and companies, they want to use AWS. And they see Bref and they're like, oh, maybe we'll use Bref, but we want to use AWS. We'll train on AWS and we'll learn AWS, and we'll use as much as possible of it. That's a minority. Most Bref users, they want a scalable, redundant, easy-to-deploy application, without starting a Kubernetes cluster and getting too crazy. So AWS is a plus, but they don't really want to
be trained to become an AWS master. So I want to provide the path to get into AWS as easily as possible. Right, okay. So I guess in that case, what you're saying is that, well, let's continue. As far as Bref versus Laravel Vapor is concerned, Laravel Vapor is about deploying your PHP application onto Lambda, so that you get all the scalability, redundancy, and the DevOps automation and things like that,
but it doesn't quite help you once you want to get out of just your code and start using other AWS services. Whereas with Bref, because it's CloudFormation-based, so that you can create an AWS-based application, a cloud-native application that uses PHP, you can have all the different tools. But what you are then saying is that actually a lot of companies don't want to build a cloud-native application; they just want to
build a PHP application that runs in the cloud. So maybe they're not thinking straight away that they want to use EventBridge or SNS. They want to just start with a PHP application that uses a database, that connects to, say, RDS. Okay, right. Got you. Absolutely. Like a majority will actually have one
AWS CloudFormation stack, one application, or just a few, but we are not talking about microservices working together. You have one big application and you are looking for an alternative to... The path usually is you start with a VPS or a server or whatever. You have one thing and it works
But then you cross a threshold where you need multiple servers for redundancy because the business now requires it, or you are hitting scaling limits. And so you need to go from one to N. And so you have the option of, do I want to maintain multiple servers and do load balancing and do recovery in case one crashes or whatever?
do I go with Kubernetes and have N being infinite, so then I have to manage the cluster, or do I go with Lambda? So Lambda is the alternative. But you still have some companies going all in on AWS and microservices, and they don't follow exactly the same pattern, but I would say this is a minority in the lot. And usually I call Lambda and Bref the gateway drug to AWS, because they get started, then they're like, oh, actually,
We could use SES and we could use an event whenever there's a new file uploaded on S3 to do this thing. And maybe we could push that into that new service so that the marketing team or whatever can do complementary work. And so they get started slowly like that with AWS and then they expand. I found that really interesting to see. It kind of enables new things once they are in AWS. So that's not the first goal usually.
Right. Yeah, that makes sense. Because with a PHP server, if you're just running on a VPS, then you can have cron jobs. And I see people do that, you know, all of the cron jobs and the background tasks run on the same thing as your API server. And obviously, that's not going to really work with Lambda's execution model. So you have to kind of start thinking, okay, if I need a cron job that runs, you know, at the end of every day, I can't just shove it into my Laravel application like I used to. I have to have a separate thing that gets invoked by a schedule at a particular time every single day. And so that makes you start looking into EventBridge, you know, cron expressions and Lambda functions, and then start working from there, maybe exploring S3 for storage, maybe DynamoDB for databases. Okay.
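That scheduled-invocation pattern translates to a few lines of serverless.yml. This is a sketch with a hypothetical handler name; the schedule event provisions an EventBridge rule that invokes the function, replacing the crontab entry you'd have on a VPS:

```yaml
functions:
  nightly-report:
    handler: cron.php      # hypothetical script run on a schedule
    runtime: php-83        # event-driven runtime, no FPM needed
    events:
      # EventBridge schedule instead of a server-side crontab;
      # note the six-field AWS cron syntax with a '?' for day-of-week
      - schedule: cron(0 0 * * ? *)   # every day at midnight UTC
```

The handler receives the scheduled event payload rather than an HTTP request, which is why the plain runtime is used instead of the FPM one.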
I see. So I guess you're talking about a few different kinds of customers, having different adoption models. Do you have some examples, like success stories you can share, of customers who are using Bref today? On the website, it says that you have something like, what is it, 36 billion messages handled with Bref in the last month. So obviously some pretty sizable customers are using it already.
Yeah, 36 billion Lambda invocations. I don't put Lambda invocations, because it's kind of unclear what it is to new users, so I put requests or jobs. But this is Lambda invocations across all Bref users, and this is refreshed live. I mean, there's a cache, obviously, but this is a metric coming from CloudWatch. And this is a metric I collect via an anonymous telemetry thing I implemented inside of Bref.
But yeah, basically Bref is used at scale and across lots of companies. And the question I usually get is: what kind of project, what kind of application, is Bref, or Lambda and PHP in general, good for? And I think the answer isn't really in the project. Just to illustrate that, these are the examples I usually give.
So I have some very small projects, like my blog, some small projects I run on Lambda because it's really cheap. It's really simple: I push it and it's deployed, and it's been like five years. I don't touch it anymore; it just runs. I used to use a VPS, by the way, and keeping up was painful: I had the VPS with an old PHP version, I'd deploy a new website and it wouldn't work, I'd need to upgrade Apache, upgrade the PHP version, make sure MySQL was updated, and then there was security. It's much simpler with Lambda for small websites.
Then you have, I would say, the middle range, where I have some companies doing 10 or 100 million requests a month. And at even bigger scale, there's Sua Musica. This is the biggest Bref website I know of. Sua Musica is a Brazilian website, like a Brazilian iTunes. It's very popular in Brazil, and they run on Bref. They handle more than 1 billion HTTP requests every month with PHP and Bref. So it's a huge scale. These are some examples, you know. And even in terms of what they do with it, it's all over the place.
You have some that run the whole API or application with Bref. Some transitioned only the background jobs, because the application runs well on the server and it's fine; it's just that dealing with spiky workloads for the background jobs wasn't working on the server. So they moved only that part to SQS and Lambda, and it just flies, because you can scale from zero to, you know, almost infinity. So some customers do that. And again, some go all in with microservices. I have one story about Treezor, which is a French company. They're a bank, like a bank-as-a-service for neobanks. I'm trying to think of their customers. I don't think Revolut is one, but Lydia was one, and a few players that are well known in France. Apps I actually used ran on Treezor. So what they had at the time was a huge monolithic legacy PHP application running on servers. And what they did is migrate that application into a single Lambda function. The whole thing, even though it's not a good pattern, but that's what they did. Obviously, they did that with lots of testing, like a load balancer to switch traffic progressively and everything, but
Eventually, they had that old monolithic code base in one Lambda function for the HTTP API, and it worked. It actually improved their response time and reduced the number of incidents. So they saw an improvement just from moving the legacy app into a single Lambda function. And then they did a second migration, where they split the legacy app into different microservices. Thanks to API Gateway, based on the path, a request goes to a different Lambda function. So it was a two-step migration: take the legacy app and put it on Lambda just to get the scaling and the simplicity of the hosting, then split it up into microservices. I'm not sure if they use SNS or EventBridge for communication, but then you can take advantage of everything AWS has to offer. For them, that migration was a great success. But what I'm trying to say is that some just stop at the monolithic API, some go even further, and you get benefits at all of these points, you know.
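The two migration patterns described here, moving only the background jobs to SQS and splitting a monolith by path behind one API Gateway, might look roughly like this in a Bref `serverless.yml`; the function names, handler paths, and PHP version are illustrative:

```yaml
functions:
  # Pattern 1: spiky background jobs moved off the server onto SQS + Lambda
  jobs:
    handler: worker.php
    runtime: php-83
    events:
      - sqs:
          arn: !GetAtt JobsQueue.Arn
          batchSize: 1

  # Pattern 2: a monolith split into services by path, behind one API Gateway
  billing:
    handler: billing/public/index.php
    runtime: php-83-fpm
    events:
      - httpApi:
          method: '*'
          path: /billing/{proxy+}
  accounts:
    handler: accounts/public/index.php
    runtime: php-83-fpm
    events:
      - httpApi:
          method: '*'
          path: /accounts/{proxy+}

resources:
  Resources:
    JobsQueue:
      Type: AWS::SQS::Queue
```

Because the routing happens in API Gateway, each service can later be extracted, scaled, and monitored independently.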
Yeah, I think that's a very popular pattern that we're seeing more and more of. People want to use serverless, or specifically Lambda, because of all the scaling, security, and redundancy benefits you get, but they don't want to rewrite the entire application. So a lift-and-shift approach is really good for that. But I think a lot of people in the serverless ecosystem, myself included, still hope people would think more about fine-grained Lambda functions, so that you get more fine-grained access control and monitoring and things like that. That's maybe a question for you, then. For folks who are running monoliths using, say, Bref, what's the observability story like? Because that means you have one set of metrics for latency and for errors. How do you then debug when, say, one of the routes is having errors and triggering the alert, but all the metrics are coming from one function? What are some of the things that you do? Is there anything in the PHP ecosystem you can leverage to make that debugging easier?
Yeah, that's the part that is not great. It's the one area where PHP hasn't caught up with the rest of the serverless ecosystem, I would say, in the sense that plenty of tools and SaaS products exist to monitor serverless applications, and most of them, I don't know exactly how many, but they don't support PHP. So yeah, it's not the best story, I would say. What I've seen people do is just use the API Gateway metrics and the logs from CloudWatch and try to get on with it. I'm working on an X-Ray integration, because I think this is deeply lacking for PHP. We don't have X-Ray support for PHP, so you can forget about that. That's something I want to fix.
I have a beta package. I use it in my own projects and it's really awesome. I hadn't used X-Ray before, because of that lack of support, so I'm discovering that X-Ray is pretty interesting. But yeah, you have New Relic, Datadog; it's a bit spread out. And you have some profiling tools specifically for PHP: you have Tideways, you have Blackfire. They can provide deep tracing for a single route or a single invocation, you know. Some are more about aggregating by route. But to be honest, it's not as good, I think, as what exists in Node.js, what you can get in Node.js. So yeah, I'm not sure... if you split by route with API Gateway, you could have observability, like the average latency by HTTP route.
Yeah, if you've enabled the detailed metrics, you can, but obviously that comes with additional cost, and, you know, your dashboards will have more things to look at to figure out, okay, if there's higher latency in the aggregate, which endpoint is responsible for that. So with more detailed routes, you get more detailed metrics on the API Gateway side of things, at extra cost, of course. Yeah. Yeah. Most people I see don't go that route. They have a single endpoint, so they have a single latency metric. And then I see a lot of usage of Sentry, which is mostly used for errors, but it also has tracing abilities. So it's not perfect, but it's honestly not bad; you can get behavior similar to X-Ray. And there are alternatives to Sentry that are mostly error-focused, like Bugsnag and things like that.
Okay, gotcha. And what about performance and cold starts? There's the PHP custom runtime, and then there's also the PHP, I guess Laravel, application itself. So do they introduce a significant amount of cold start? I don't know what PHP's cold start characteristics are like compared to something like Node or Python.
Let me make sure I have the right numbers. If you take just the PHP layer, so a hello world: something I didn't mention is that we actually have three runtimes in Bref. One is just for running console commands, so let's ignore that, because it's a really niche use case. So we have two main runtimes. The one I showed in the demo, and the one we've been talking about since the beginning, is the FPM runtime. It runs with PHP-FPM, the thing that you run on any server, so it's very similar to the Lambda Web Adapter pattern. That's the one that is mostly used, and it has a cold start of about 300 milliseconds when used as a custom runtime with a layer. That's the baseline. I think most of that time is starting the PHP process, but there's also the zip file: internally, Lambda downloads it and starts up the runtime, so there's some latency involved. So it's not that bad, but it's not the fastest, obviously.
The second runtime is about just running PHP code without FPM. That one is mostly used for EventBridge, SQS, anything that doesn't involve HTTP, and it has a cold start of about 250 milliseconds. It's slightly faster because it doesn't contain the FPM binary; the package is just a bit smaller.
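As a sketch, the two runtimes contrasted here might be declared like this in `serverless.yml`; the PHP version, handler paths, and event source are illustrative, and the cold start figures in the comments are the ones quoted above, not guarantees:

```yaml
functions:
  web:
    handler: public/index.php   # runs behind PHP-FPM, like a classic web server
    runtime: php-83-fpm         # FPM runtime: ~300 ms empty cold start
    events:
      - httpApi: '*'

  consumer:
    handler: handler.php        # plain PHP callable, no FPM binary in the package
    runtime: php-83             # function runtime: ~250 ms empty cold start
    events:
      - eventBridge:
          pattern:
            source:
              - my.app          # illustrative event source
```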
However, that's an empty application. If you take a default Laravel application, it's huge, especially because it contains the whole AWS SDK for all the APIs. I think it's more than 100 megabytes of code in the vendor directory, the node_modules of PHP. So it's huge, and there's initialization involved if you don't pre-compile or pre-cache some stuff, so the cold start can go as high as one second. Usually, the default Bref experience is that you just deploy and it works, because we compile stuff on the fly. But if you go to production and you want to optimize your cold starts, you can. You can pre-cache the routes, pre-cache the views, I don't know, there are so many things you can pre-cache, and it gets a bit better. I would say the average with a real application would be about 500 milliseconds. So it's something you have to be ready to accept.
But yeah, most of the time you have to weigh the benefits against the downside of the cold start. So I guess in terms of cold start, it's very similar to what you get with Node, depending on the size of the application. It's just a question of how big an impact, say, Laravel or Symfony has on your overall cold start time. But I guess a lot of that nowadays depends on your traffic pattern more than anything else, because of proactive initialization. You can avoid a lot of the cold starts, or at least have them happen before the function serves a user request. So if your traffic pattern is fairly stable, you're probably going to be okay. But if your traffic is fairly spiky and unpredictable, then maybe you will see more cold starts.
So you probably need to spend more effort on making sure that when cold starts do happen, you're still okay as far as your SLA is concerned. And I'm not sure exactly how you do the pre-caching, how does that work? I'm not familiar with PHP. Is it a flag that you can turn on while you're compiling your PHP application? It's actually not PHP-specific, it's framework-specific: Symfony and Laravel both have their cache. What I mean by that is they pre-compile some of the views and the PHP config files into optimized PHP files. So it's all file-based. You can run a command before deploying, and it will store the data in a more optimized fashion, for specific data structures like the config and the views, mostly config stuff that can be pre-parsed and optimized. Since it's all file-based, it can be shipped in the archive that is being deployed.
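For Laravel, the pre-cache commands being described here are typically run before packaging the deployment archive (Symfony has its own equivalents, like `cache:warmup`); which ones apply depends on the app:

```shell
# Pre-compile what the framework would otherwise build on the first request
php artisan config:cache   # merge all config files into one compiled file
php artisan route:cache    # serialize the route table
php artisan view:cache     # pre-compile Blade templates
php artisan event:cache    # cache event-to-listener mappings
```

Since the output is just files on disk, it deploys inside the zip and is ready before the first invocation.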
And when you... Actually, yeah. Okay, go on. Sorry, I just wanted to ask you a question. This is something I have an assumption about: one advantage I see with monolithic applications, where you have one Lambda function for the whole API, and it's not magic, obviously, but the way I picture it, you get fewer cold starts. Because you always have some traffic on the public website or on the API, you are less likely to get cold starts on routes that receive less traffic. So the way I see it is, yeah, each cold start will be worse with a monolithic application, but at the same time you will get fewer cold starts. I don't know if you have specific numbers on this.
Yeah, I'm not sure there are specific numbers comparing the cold starts of an application running as a single monolith versus as individual functions. But in most cases, if you've got some amount of traffic to sustain, and it's fairly stable across the entire API, if the same sort of traffic pattern is mirrored on the different endpoints, then those endpoints will have the same sort of characteristics: you're not really going to see many cold starts, because there's stable traffic. But if, for the whole API, there's a nice bell curve, yet you have so many different routes with such different usage patterns that some endpoints get nothing and then a sudden burst, then you could see a lot more cold starts on some endpoints while others are okay. So it really depends on the traffic pattern. I think for most applications, certainly,
the usage pattern is the same for pretty much everyone. You use Twitter: you come in, you read your feed, you interact. There are a few things that everyone does regularly, so for the things that hardly anyone uses, you might get worse cold starts. Like, no one really goes to the page to read the legal documents, for example, or to update their profile image or something like that. So there may be a few endpoints that very few people use, and you see more cold starts there, but they're a small percentage of the overall request count. So it probably evens out if you're measuring, say, the P99 latency, comparing the two.
But that's assuming your application has a decent amount of traffic in general. If you're a reasonably popular application, the bell curve will be enough to make sure that every popular endpoint has a bell curve as well. But if you're starting out, a new application with only, I don't know, hundreds of requests per hour, your bell curve is going to be enough to sustain one or two instances of a Lambdalith, because there's always going to be one request every couple of minutes. But it's not going to be enough to sustain individual functions for individual endpoints, because that would mean one endpoint gets one request every 20 minutes, and that would be a cold start every single time. So you probably need a baseline amount of traffic before one function per endpoint sees about as few cold starts as a Lambdalith. So it is true when you have a really low-throughput API. But once you get to a moderate number of requests per day or per hour, the difference probably disappears. Yeah, makes sense. Okay, interesting.
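As a rough back-of-the-envelope illustration of that point, here's a sketch assuming Poisson request arrivals and an illustrative 15-minute idle lifetime before Lambda reclaims a warm instance (the real lifetime varies and isn't documented):

```python
import math

def cold_start_fraction(requests_per_hour: float, warm_minutes: float = 15.0) -> float:
    """Estimated fraction of requests that hit a cold start, assuming
    Poisson arrivals and that an idle instance is reclaimed after
    `warm_minutes`: P(gap since previous request > T) = exp(-rate * T)."""
    rate_per_minute = requests_per_hour / 60.0
    return math.exp(-rate_per_minute * warm_minutes)

total_rph = 100.0                                   # whole API: ~100 requests/hour
monolith = cold_start_fraction(total_rph)           # one function sees all traffic
per_endpoint = cold_start_fraction(total_rph / 20)  # same traffic split over 20 endpoints

print(f"monolith:     {monolith:.1%} of requests cold")
print(f"per endpoint: {per_endpoint:.1%} of requests cold")
```

With these illustrative numbers, the Lambdalith essentially never cold starts, while each of the 20 per-endpoint functions sees a cold start on roughly a quarter of its requests; multiply the total traffic by ten and the per-endpoint fraction collapses too, matching the "baseline of traffic" intuition above.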
So I think that's everything I had in mind. Do you have anything else you want to share? Any future projects or things you want to tell us about, get people to sign up for, like books or courses on Bref, maybe?
Well, this is the moment in the year... I mean, I said I've only had two major versions of Bref, but I'm working on version three. So this is the moment where I brainstorm quite a lot and think about what's missing from Bref. On the runtime itself, I'm looking at upgrading the Amazon Linux version that Lambda uses, which means recompiling PHP and changing so much stuff under the hood that's invisible. It's really thankless work, but it has to be done. That's something I'm working on. I'm also working on supporting Lambda streaming responses. It's something I really want to do, but it means implementing it both technically in the custom runtime and having it work with frameworks, and providing a nice API for PHP users.
Getting it right takes a bit of time, so that's not simple. So yeah, I'm working towards Bref v3, and trying to find, as I mentioned, a way for Bref to transition from being Serverless Framework oriented to something a bit more agnostic. The main reason is that the Serverless Framework is changing. There's v4, which is a bit different in its open source model and its commercial model. So I'm finding a way to keep supporting and maintaining Serverless Framework v3, supporting v4 as well, and expanding a little bit with CDK and SAM and Terraform, providing options to Bref users. And I'm working on this final option, which I call Bref Cloud: a simpler configuration experience where you can very simply have the application, but also the database, the VPC, the caches, and have all of that with a UI to operate on.
Yeah, that's something that's taking up a lot of my time at the moment, and I'm really excited by the progress. It's really fun to work on these things. It's still in beta, so if anyone listening to this is interested, I would love to get some feedback on the product. But yeah, hopefully I will have something to show very soon. That's what I'm working on at the moment.
Okay, sounds good. So yeah, if anyone listening is using PHP and wants to explore PHP with Lambda, then try out Bref and let Matthieu know your feedback, so you can help shape the future version of Bref. So yeah, Matthieu, thank you so much for coming on, and best of luck. Hope to see you at re:Invent this year. Yeah, I hope so, I really hope so. I haven't booked any flights yet, but I really want to go.
Okay, sounds good. Perfect. I'll see you there. Okay, take care, guys. I'll see you guys next time. Thank you to Memento for supporting this episode. To learn more about their real-time data platform and how they can help you accelerate product development, go to gomemento.co slash theburningmonk for more information.
So that's it for another episode of Real World Serverless. To access the show notes, please go to realworldserverless.com. If you want to learn how to build production-ready serverless applications, please check out my upcoming courses at productionreadyserverless.com. And I'll see you guys next time.