Hi, welcome back to another episode of Real World Serverless. Today I'm joined by Brian LeRoux, who is, I guess, one of the very first adopters of serverless. Hey, Brian, welcome to the show. Hey, it's good to be here. You're probably one of the first adopters of serverless; I came a little bit after. Yeah, I remember when you were in Amsterdam recently, we were having a coffee and you were saying how in the early days, about nine years ago, you were chatting with
Was it AJ, who recently left the Lambda team? And he was saying to you that you're like a decade ahead of everybody else, and you're like, this thing is not going to take that long. And then nine years later, here we are. Yeah, yeah. I remember in 2019, I was at ServerlessConf and Simon Wardley said, yeah, we're years away from this being a mainstream thing. And I was like, oh, I hope not. But at that point, we had already been at it for quite a few years.
I got interested in serverless when API Gateway was announced at the San Francisco Summit in 2015. I saw them demo an HTTP call returning a Lambda payload, and it was like a light bulb went off. I was like, oh, I never need to use Beanstalk ever again. It was a revelatory moment for me. And I just assumed everybody else felt the same, that people would want to write code and outsource scaling. And I was super wrong about that. They clearly do want to take care of all the infrastructure concerns themselves. Although I think it's changing now that the world's getting a little more front-end centric, and
the sort of dogma of running your own workloads is going away, unless you're DHH.
Yeah, let's talk about DHH a bit later. But I think what you said there, in terms of, okay, 2019, Simon Wardley says we're still years away, and unfortunately he's not wrong. But Simon Wardley was also probably the very first person who really started doing the whole serverless thing, because his company, I think it was 2014, they were...
they were basically building a whole serverless compute business, and they had, again, a light bulb moment: "Oh, yes, someone must want to use this, because we're going to take care of all the infrastructure and you can just write your own code."
But it didn't take off. Nobody wanted it. I think that was, was it 2014, or even earlier than that? I remember him talking about it. His stuff was earlier than that, yeah. There's that funny saying: we didn't do it because we thought it would be hard, we did it because we thought it would be easy. And I think infrastructure is like that. You know, how hard could it be?
Turns out it's really hard, but it's easy to underestimate. So I think there's a bit of that. There's also the fact that this isn't entirely new. When mobile happened, we had a lot of reasons to adopt that new form factor and that new way of computing. With Lambda, it was an incremental upgrade in many ways. And there's so much inertia for deploying servers, and so much prior art for deploying servers. So
Yeah, it takes a while to make these big elephants dance, these huge companies, and they're still coming to terms with it. I still believe very much, though, that in the future, probably most workloads will be managed by an infrastructure provider, and it'll be an outlier concern or maybe a cost thing at extreme scale. If you're at Google or Facebook size, then maybe it makes sense to start racking servers again, but
for the vast, vast majority of businesses, this really does not make sense to be doing. Yeah, absolutely. And I think, like you said, a lot of the momentum in the last couple of years has come from the front-end side of things, and a lot of that has probably been driven by Vercel and its positioning, and the fact that it targets, very specifically, front-end developers who
don't have that prior art, but more importantly, that prior identity with managing machines, that association with DevOps, or even the fact that maybe your job title is tied to you managing machines. If your business moves away from managing machines, then suddenly your job is potentially at risk.
So I think targeting that market is quite a smart move. But talking about targeting front-end developers, your company is also tapping into that particular space as well. Tell us a bit about that because I don't think many people have actually used it themselves.
Yeah. So my company's name is begin.com. We're a serverless deployment platform based on an open source framework called Architect, which generates SAM-based CloudFormation. And yeah, it's a really easy way to get started. You don't need a credit card. It takes about five minutes to stand up your first workload, and
deploys take seconds after that. We initially thought, like I was saying, that most backend devs would want this instantaneous deployment and the sort of determinism of a CloudFormation-type deploy. But what we found, and Vercel definitely did nail this, is that the front-end developer market is what's growing, the largest cohort of new cloud adopters. And they don't care about servers; they care about getting their job done. And so...
We don't think the future belongs to React. It probably was really good 12 years ago, but these days it's a little long in the tooth, and it's trying to backwards-math its way into the more modern performance techniques, and it's failing. It's slow, and it's okay to say that. You can run dev tools yourself and check any major site, and you'll see it's got pretty poor performance. And the main reason why is that React does a lot of stuff the browser already does.
And so you're loading a ton of code that you don't need to load. So in backend, we like to call this undifferentiated heavy lifting. So nice way of saying doing work that you don't have to do.
Browsers have modules and component systems built in. So we've recently been working on a project called Enhance. You can see it at enhance.dev. And it's a front-end centric take on serverless development using web components as the primary primitive. It's a fun way to work. I think the future is probably going to be
more standards-based code and web components. Instead of writing in proprietary dialects, transpiling into JavaScript, and then executing that JavaScript to get HTML, you can just write HTML, and it works really well. And yeah, so that's kind of been our take. And Enhance is built on our open source project Architect, which is our CloudFormation-generating tool.
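To make the server-rendered web components idea concrete, here is a minimal sketch. This is not the actual Enhance API; the element name, attribute handling, and renderer are all invented for illustration. The idea is that a custom element is just a pure function that expands to plain HTML before it ever reaches the browser.

```javascript
// Sketch only, not the Enhance API: custom elements as pure functions
// that expand to plain HTML on the server.
const elements = {
  // Hypothetical element: renders a titled card
  'my-card': ({ attrs }) => `<div class="card"><h2>${attrs.title}</h2></div>`
}

// Naive server-side expander: replaces known custom tags with their HTML
function render (html) {
  return html.replace(/<([a-z]+-[a-z]+)([^>]*)><\/\1>/g, (match, tag, rawAttrs) => {
    const element = elements[tag]
    if (!element) return match // leave unknown tags for the browser
    const attrs = {}
    for (const [, k, v] of rawAttrs.matchAll(/([a-z-]+)="([^"]*)"/g)) attrs[k] = v
    return element({ attrs })
  })
}
```

Because the output is plain HTML, nothing framework-specific needs to ship to the client; the browser's own custom element machinery can progressively enhance it from there.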
I had a look at Architect a while back, and it's very interesting. It's very opinionated, much more so than some of the other frameworks in the space. You've got all of these shorthands for hooking up DynamoDB tables and things like that. And if you just want to build something, and whatever you're doing aligns with the opinions the framework has,
you can get things done a lot quicker compared to even things like the Serverless Framework and SAM. Also, looking at Sam Williams, who's been on this podcast before, he just did a post today about how he's moving from the Serverless Framework to CDK, and the first project he tried to port, what was about 10 lines of code in the Serverless Framework turned into like 200 lines of code in CDK. Because it's less opinionated, there's a lot more sort of
things you have to just kind of keep writing yourself. And even if you're creating reusable constructs, the first time you still have to do that yourself. There are a lot of decisions you have to make, whereas those 10 lines of Serverless Framework code may actually just be three lines in Architect, if you're doing something that's very much what the framework gives you out of the box, something you can just configure very quickly. I think that's the power of... Yeah, go on.
Oh, I was going to say, yeah, it's funny because I would never have made the sort of opinionated claim. I think if we were to list out the opinions, we'd probably find they're just kind of things we all agree on. One of our opinions, for example, is that you should have lots of small functions because they'll cold start faster, they'll have better single responsibilities, they'll be easier to maintain and refactor and replace over time, easier to lock down to least privilege, etc.
So it is opinionated, but they're pretty good practices, mostly derived from Amazon's Well-Architected Framework. And the other angle for Architect, what sets it apart, is it's very much "I'm building a web app." That's kind of the...
If you're building a log processing data pipeline thing, then there's better tools for that than architect. But if you're building a website that talks to DynamoDB, architect is going to make that real, real fast and easy. And that's kind of our,
our primary use case and the place we care about the most. And the hack of Architect is funny: it's very fast, and it's quite small, and the way we achieved that was by subsetting which services we use. I think in total we use like 12 or 14 services, not very many. And they're all serverless services; they all have generous free tiers. And
Because we subset, we've been able to make it really fast and really tight and really lightweight. Over the years, we've been able to mock out everything locally, so it runs locally. This is sometimes controversial, so don't worry if you don't like local development, you don't have to use it. It's just there if you want it. If you want it, it's pretty nice. It's a much quicker way to work, you'll have faster iterations.
But we totally recommend deploying to a staging stack and a production stack, and if you want to do your testing there, you can. But yeah, it's basically a transpiler at the end of the day. If you go to arc.codes/playground, you'll see two panes: the left pane is the Architect syntax, and the right pane is the generated CloudFormation. Architect tends to be something like 80 times less code than the generated CloudFormation. You can always dip into that CloudFormation too. So just like with CDK or anything else, you can modify the generated CloudFormation any way you want. Anything Amazon can do with CloudFormation, you can do with Architect. But for the majority of use cases around building a web app, you should never have to see any CloudFormation, and you probably don't want to. It does get pretty verbose.
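For a sense of what that left-hand pane holds, a minimal Architect manifest looks something along these lines (the app, route, and table names here are made up):

```arc
@app
notes-app

@http
get /notes
post /notes

@tables
notes
  noteID *String
```

Each `@http` route becomes its own Lambda function plus the API Gateway wiring, and `@tables` becomes a DynamoDB table, all expressed as generated CloudFormation you can inspect or override.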
Yeah, especially with things like API Gateway, the different layers of things you have to configure. Yeah, but there's a lot of power in there, and we don't want to completely hide that in a black box. If you do need to dip in, and in a more sophisticated application you probably will, you're going to want access to that stuff. But yeah, it's funny, because I think SAM...
And to an extent the Serverless Framework, too, want to be an abstraction that you would write by hand, but I would contend that's impossible. I don't think there's a human being alive who could build an application end-to-end by writing CloudFormation out by hand; you'd have to copy and paste and test and throw darts all day. Whereas with Architect, you can absolutely write an arc file in a few seconds and generate the CloudFormation, and it will be well-formed and ready to deploy quickly.
I think Ben Kehoe is a big fan of just using CloudFormation out of the box. I've never actually worked on something that he's worked on, so I don't know how big his CloudFormation templates are. But Ben is a pretty clever man, so maybe he's able to make it work. Yeah. And I mean, I'm sure Amazon's working on tools for this. This is the attraction of CDK: you get autocomplete, and you get
all the power of an imperative language at your fingertips. So it's a little bit easier to reason about, whereas CloudFormation can be pretty unforgiving. It's very structured, very declarative, and very verbose, because you can do a lot of stuff. So it's just different approaches to the same thing. CDK gives you all the power of an imperative language, but also all the foot guns as well. There are probably too many foot guns for me to truly love CDK. Yeah.
It's not for me either. So I think, for me personally, and there are lots of paths up the mountain, so when people are watching this, I want you to understand that there's no wrong way. Build for the cloud; it's going to be fun. The whole purpose of infrastructure as code, to me, is to have deterministic, repeatable deployments. I want to be able to make sure the thing I build
today is the same thing I build three days from now, with a different computer and maybe a different team or whatever. And we achieve that by checking in an artifact with our code. It's very much like a package.json file, where we've got the version numbers and we know that that version of that code is going to work with this code. And so...
Imagine if in Node.js they decided to have package.js files and it was imperative code to do dependency resolution. It would never work. It would fall apart. The whole reason the package.json works is because it's declarative and it's very explicit. And if you just let anybody write anything, then it's going to fall apart at scale. And it's not going to be deterministic. It's going to be
you know, it's going to be pretty hard to know if it's going to work one day to the next. And so to me, imperative code, even if it generates cloud formation, especially if it's transpiled imperative code, that's just like not going to be as stable or as deterministic. And that's kind of defeating the whole purpose of infrastructure as code of being reliable and repeatable. So...
It's not for me. I totally acknowledge that it's a great way to get started and to move fast. And some companies are finding ways to make it work for them. And that's cool. I'm not saying that they're wrong or that they're doing something bad, but I think the trade-offs are pretty, pretty ugly between declarative and imperative code.
You also mentioned earlier the trade-offs between having single-purpose functions versus Lambdaliths. I was talking to Heitor not long ago. He obviously had a big hand in the Serverless Lens of the Well-Architected Framework. And he actually told me that he's kind of switched sides; he's now more on the Lambdalith side.
And I have seen a lot of people promoting Lambdaliths as well. I'm still on the single-purpose-function side of things, but I think a lot of the arguments we've pushed in the past, things like better granularity when it comes to security: if it's all internal processes, quite a few people have probably rightly said it's not as important as security for the external-facing side of things, like your Cognito and your API endpoints.
So even if you are being a little bit loose in terms of applying least privilege, so you've got a Lambdalith that's got more permissions than any single endpoint needs, but the function itself is still following least privilege, it's probably okay. I mean, unless you're throwing stars on all the permissions.
And I think some people have also pointed out that for a lot of APIs, you may have five endpoints, but they share the same dependencies, so in terms of actually getting better cold start time... You and I have probably seen examples. I've seen an example at a client where you've got an API with one React server-side rendering endpoint and a bunch of REST endpoints, and the REST endpoints are super slow just because they have to load React
as well. So for that example, yeah, a single-purpose function would not have that same performance issue. But at the same time, a lot of APIs do just talk to DynamoDB, so all of your Lambda functions in the same API have the same dependencies. So whether it's a Lambdalith... I think that's the key: if they share a lot of code, maybe they can be combined. And the other thing is that we've found in the last couple of years that things have gotten
pretty fast. We've got proactive initialization now, and we've got way quicker runtimes; Amazon has done a lot of work to make AWS Lambda fast. Back in 2015, if your function was bigger than five megs, it would take longer than a second to cold start, and that was pretty gross. Nowadays, AJ is measuring that threshold at something like 50 megabytes.
So, I mean, you can fit a lot of app in there. But you know, I wouldn't put my React app in my SQS queue handler, for example; that just doesn't make any sense, that code's not shared there. But all my GET handlers use my React code, so yeah, that makes sense; they should maybe be bundled together in some form. So I'm definitely relaxing on it. We're doing fatter functions these days.
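The shared-code heuristic often shows up as one function fronting several related event sources. Here is a sketch; the SQS and API Gateway HTTP API (payload v2) event shapes follow Lambda's documented formats, but the routes and helper functions are invented for illustration.

```javascript
// Sketch of a "fatter" function serving related event sources.
// Hypothetical in-memory store standing in for DynamoDB calls.
const notes = []
function saveNote (msg) { notes.push(msg) }
function listNotes () { return notes }

async function handler (event) {
  // SQS batches arrive as { Records: [{ eventSource: 'aws:sqs', body }, ...] }
  if (Array.isArray(event.Records) && event.Records[0]?.eventSource === 'aws:sqs') {
    for (const record of event.Records) saveNote(JSON.parse(record.body))
    return
  }
  // HTTP API v2 events carry a routeKey like 'GET /notes'
  switch (event.routeKey) {
    case 'GET /notes':
      return { statusCode: 200, body: JSON.stringify(listNotes()) }
    default:
      return { statusCode: 404 }
  }
}
```

The point of the heuristic: this only makes sense when the handlers genuinely share code. A heavy SSR dependency in the GET path is a reason to split that path back out.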
Yeah, for sure. So am I. I'm also relaxing my stance on that. But I still think the arguments we've traditionally made are, well, at least less relevant now. From the operational side of things, though, I do still find single-purpose functions give you a much easier way to be notified when there's a problem,
because you've got one alert per function, as opposed to one alert for the whole thing. And debugging. And then debugging by figuring out which code path is the problem: is it everything? Is it one particular path? So having that... Refactoring is the other one. Exactly, yeah, refactoring. And also optimizing your CPU usage. I was helping a client not long ago, trying to optimize their function, but then very quickly realized, okay,
"Wait, we can't really do this because this is a Lambda lift and we got four different endpoints, slightly different performance profile. We can't just try to optimize and find the right memory setting for all of them." So there's a couple of things where from the operational side of things, having multiple functions actually makes it a lot easier for the alerting, for the monitoring, for just checking your logs and figuring out what's wrong.
But the things we thought were important may be less important now, in terms of the performance and the security. Yeah, I think that's super true. And that's kind of the awesome thing about the cloud: it just keeps getting better, and we get these free upgrades over time. So you've got to measure and challenge your assumptions constantly, I think. And this isn't...
I think some software communities get really caught up in their identities. And it's okay to have an opinion and change it; in fact, that might be a mark of intelligence. If you're able to change your opinion based on more information, that's a good thing. So yeah, I think there are a lot of situations where a fatter function is pretty okay these days. But
Personally, I don't think it's a good idea to start there. I think you want to try and build discrete, separate, single responsibility kind of silos and work backwards from there. But yeah, the shared code is probably the big smell for me. If everything's sharing the same library, then possibly that is just one function and it's got different event sources. So that's a good way to look at it for sure.
Yeah, but I guess a single Lambdalith with an existing web framework makes the testing a lot easier. At least it's easier to apply existing testing methodologies to serverless. And I think that's the thing a lot of people struggle with.
When they first come to serverless, it's just: how do I test my functions? Yeah, testing in the cloud. I think that's where, well, you also talked about local testing with local simulations. And again, that was something I was quite... It wasn't so much about local simulation or local testing per se, because I love it. I want that fast feedback loop. What I had a problem with was the tooling that was available. LocalStack was probably the big player there, and for a long time... LocalStack's pretty good. Yeah.
Yeah, yeah. But four or five years ago, this was still very... it was hard. And we do a fair bit of mocking, and our tests will run in memory locally, and that understandably makes people nervous. You know, people are like, oh, I don't know, that's not testing the real thing. Amazon's...
if they have a reputation for anything, it's a reputation for stability, maybe to a fault. They don't change their APIs, really, ever. They just keep adding more and more of them.
So you can mock them pretty safely and get away with it. But once in a while you do get burned by that, and it's a thing to watch out for. I think the lesson for me is: use local testing for your fast feedback loops, but don't use it as an excuse not to test in staging, or production for that matter. You've got to do both.
And you can get going really quickly by working locally against mocks or simulators like Architect Sandbox. But at a certain point, you're going to want to verify that workload against the real deal. And you're probably going to want to verify that workload when it gets into production as well. Because there's just no way to know until you're up there what the real limits are and what the real challenges will be around those.
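One common way to get that fast local loop while keeping the door open to real verification is to inject the data-store client, so tests can swap in an in-memory fake. This is a sketch; the client shape here is invented, not a real SDK interface.

```javascript
// Handler factory: takes its DynamoDB-ish client as a parameter
function makeGetNote (db) {
  return async function getNote (id) {
    const item = await db.get({ id })
    return item
      ? { statusCode: 200, body: JSON.stringify(item) }
      : { statusCode: 404 }
  }
}

// In-memory fake for fast local iteration; against a staging stack
// you'd pass a real client wired to deployed resources instead
const fakeDb = {
  rows: new Map([['42', { id: '42', text: 'hello' }]]),
  async get ({ id }) { return this.rows.get(id) }
}
```

The same handler runs unchanged in both modes; only the injected client differs, which is what makes the "mock locally, verify remotely" split cheap to maintain.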
Yeah, I was going to say that a couple of years ago, when I did try to use LocalStack, it was pretty rough. It was either just before 1.0 or just after 1.0, so there were a lot of stability issues, a lot of behaviors different from the real thing. It fails in slightly different ways; you get false negatives. So it just, no, it was...
It was just a really hard thing to recommend to other people. So instead, I built this whole practice around what I call remocal testing: I run my code locally, talking to deployed resources, and I use temporary or ephemeral environments, so that creating a new environment is quick and easy and tearing it down is quick as well, to kind of make up for the lack of a stable or good local simulation. But I had Waldemar Hummer, who's the CTO of LocalStack, on here last week. And yeah, LocalStack 2.0 was super impressive, and they've done some really good things in version 3 now as well. It's an incredibly ambitious project. Like, Amazon should be building this.
The fact that it's a couple of dudes in Austria is unbelievable. But it is good. And they're relying on the same principle we are: that Amazon's going to be stable. But they're doing everything, whereas we emulate API Gateway, Lambda, Dynamo, S3.
That's about it. It's a pretty lightweight surface area for us, but for what they're doing, it's incredible. I mean, I don't know how many services there are now, but there's hundreds. Yeah.
They've got most of it. You're focusing on just building web APIs; they're focusing more broadly on serverless workloads. Yeah. So the remit is slightly different. And they're actually branching out now to other providers: they're doing local simulation for Snowflake now as well.
And they started to do some fault injection in version 3, so you can simulate DynamoDB throttling, which is something I end up having to mock a lot because it's hard to trigger the real thing. So they're branching out and covering more and more use cases, not just more and more services. But yeah, I'm super impressed by what they're doing.
And they did some really interesting things as well. Before, they were using, I think, DynamoDB Local for the simulation. But then the DynamoDB Local library went up to about 140 megs or something silly like that. It's big. Yeah, it's a bit too big. So they basically rewrote the whole thing, which shrunk it down to, I think, maybe tens of kilobytes or something like that.
And when he told me that, it actually reminded me of something you've done with aws-lite. You were telling me how, what was it, the AWS v4 signing module or something like that was tens of thousands of lines of code, and then you rewrote it and it was only like, I don't know, a couple hundred lines of code or something like that.
Yeah, we have a few examples of this. The biggest delta... So, some context for everybody: as part of the Architect project, we're working on an AWS SDK replacement project.
And now everyone listening thinks I'm insane, and they're probably right. So why would we do this? Well, AWS SDK v2 was not super fast, and it was mostly for Node, but it actually builds for everything: you can use it in Deno, you can use it in browsers, you can use it with CommonJS, you can use it with ES modules. So it covers a lot of space.
In the last couple of years, I think it was in 2021, Amazon announced v3, which is a rewrite in TypeScript. And they started encouraging people to move over. And last year, they basically said they're deprecating v2 and they're going to force an upgrade to v3.
V3 is a pretty big performance regression, unfortunately. We find it can be upwards of five times slower than aws-lite, our client. And I'm not making up numbers here, by the way: if you go to aws-lite.org, you can see our performance benchmarking code on the homepage. We're working with Amazon on this stuff, so we keep ourselves honest. The data is published, I think, three times a week,
based on the latest SDKs versus our stuff. And largely this has been written by my co-founder, Ryan Block. Not even largely; I think like 99.99% of it has been written by Ryan, and the rest of us have been testing and doing plugins.
Yeah, we took a different approach. So Amazon is using a thing called Smithy, and they're generating their SDKs. And not only are they machine-generating them, they're machine-generating them for lots of different targets, and machine-generating a lot of test code and a lot of documentation and types.
And all of that gets shipped to NPM. And so when you run NPM install, you get a pretty large amount of code on your disk that has to initialize in order to make an API call. And we couldn't afford a cold start that went over a second. And a lot of the V3 stuff was going to force that, especially if you use DynamoDB. To us, a user-facing Lambda function should be sub-second.
It would ideally be in the 200 millisecond range or less. And that's what we achieved with aws-lite: we got a lot better performance. So how did we do that? We hand-wrote these plugins. I think the most dramatic one is CloudFront. Under the hood, CloudFront is actually an XML API. This is kind of the fun thing about Amazon: each team...
People probably know this, but you hear about two-pizza teams, and that's actually real. They really are just small teams, and these teams don't talk to each other, and they make their own technical decisions. So some APIs are JSON, some are XML, some are both, believe it or not. And you can kind of carbon-date these APIs; you can almost tell what they use under the hood. But CloudFront is a lot of XML, a lot of nested XML.
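To see why a hand-written parser can be so small, here is a toy recursive XML-to-object parser. It is a sketch only, nothing like production code (aws-lite's or anyone else's): it ignores attributes, entities, repeated sibling tags, and error handling.

```javascript
// Toy recursive-descent XML parser: '<a><b>1</b></a>' -> { a: { b: '1' } }
function parseXml (xml) {
  let i = 0
  function skipWs () { while (i < xml.length && /\s/.test(xml[i])) i++ }
  function element () {
    const close = xml.indexOf('>', i)       // end of the opening tag
    const tag = xml.slice(i + 1, close)
    i = close + 1
    const end = '</' + tag + '>'
    skipWs()
    if (xml[i] === '<' && xml[i + 1] !== '/') {
      // nested elements: recurse, collecting children under their tag names
      const children = {}
      while (!(xml[i] === '<' && xml[i + 1] === '/')) {
        const [childTag, value] = element()
        children[childTag] = value
        skipWs()
      }
      i += end.length
      return [tag, children]
    }
    // leaf node: plain text content
    const stop = xml.indexOf(end, i)
    const text = xml.slice(i, stop).trim()
    i = stop + end.length
    return [tag, text]
  }
  skipWs()
  const [tag, value] = element()
  return { [tag]: value }
}
```

Even with attribute and entity handling added, this style stays in the low hundreds of lines, because it only has to handle the shapes one API family actually produces.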
And I believe Ryan wrote a recursive XML parser for our handling in aws-lite that is about 150 lines of code, where the Smithy-generated version the AWS SDK uses is 20,000 lines of code. And no matter how many bundler tricks you pull off, it's just going to be slower, because it's more code to parse and eval and sit on disk. aws-lite gets around a lot of that by being very deliberate. Similar to Architect, the reason it's so fast is that we get to pick and choose: we're not supporting all of Amazon, just a subset that's serverless-specific. And we're not supporting every possible... We don't think you should make AWS API calls from a browser.
We think that's probably better done on the back end; you probably don't want to ship your credentials to the client. And because of that, we're not building for the browser, we're building for Node.js. And because of that, we're saving a ton of room, because we don't need to worry about polyfilling Node.js stuff inside the browser. So it's kilobytes versus megabytes on disk, and it loads a lot faster.
But that doesn't make it perfect. It's got trade-offs. If you want to use it to talk to EC2, you're going to have to make raw API calls because we haven't built any plugins for EC2 and we probably won't. Yeah.
Yeah, I guess for a company the size of AWS, it's kind of surprising that they don't have dedicated teams for each SDK and language. It's crazy that they are using something like Smithy to auto-generate SDK clients when they're being used by tens of thousands of developers and everybody is paying for that performance hit.
Yeah, well, I think now that we've got some benchmarks that everybody can see and work from, they've got some pretty clear targets to, hopefully, punch us out of existence and show us how it's done.
The reality is they do have a much larger mandate than we do, too. They're worried about all kinds of runtimes, all kinds of services that we just don't care about. So it's a trickier proposition from their end. I agree, they should have dedicated people for each of these. And it looks like they might even have their own JavaScript runtime soon, too, with LLRT. So...
maybe that's part of this story, I don't know. As a JavaScript developer, I'm excited, because these are just all toys for me to play with anyways. So I'm super cool with more than one thing existing. And now that my poor co-founder Ryan has suffered through creating a lot of this aws-lite stuff, it's stable, it's good. We're using it internally in Architect, and it actually sped Architect up a lot. We're happy, we're not going to break it, and it'll never get slower. So, you know,
It'll be good for a very long time to come. And yeah, if something better comes along, we're not precious about things. We'll definitely use it.
Yeah, I had Richard Davison, the creator of LLRT, on here a couple of weeks ago. And yeah, he was talking about a lot of the things you were talking about: okay, with Lambda, we've got this very specialized, very constrained execution environment, and people are doing certain things in it. So let's not build a general-purpose JavaScript runtime that's capable of doing all these other things; let's build something that is
very much purpose-built for this constrained environment, where people are typically doing I/O-heavy workloads, calling APIs, not doing compute-intensive work. So instead of having a JIT, they just don't have a JIT.
And that allows them to cold start a lot faster, and what they ship is much lighter. So it's exactly what you were saying: let's not build for everybody, because then nobody is happy. Let's build for a small subset of users who are more relevant to the use case here, and who are going to
get a much better experience, like they do using aws-lite. And hopefully AWS will start taking some cues from that, and do something similar to what they're doing now with LLRT: have a more language-focused sort of lens. And maybe, you know, they keep talking about being customer-obsessed. This is the kind of thing they should be obsessed about for customers, right? Yeah.
It's interesting, because I bet you 99% of Architect apps are making a call to DynamoDB and returning some HTML. Like, I bet 99% of them are doing that. And it's increasingly feeling like maybe even running anything in the middle there is not a great use of time, when I look at direct, functionless integrations, like the things we're seeing with Step Functions.
there's something to this world where we have logic-less apps. Like, I make a call to Dynamo and maybe I have some helper that turns that into HTML, but it'd just be a tiny little bit of code: it gets some state, it transforms it, and it passes it straight back through. So it almost feels like we're just about there, but the programming state of the art right now is still very much: I'm writing JavaScript, and I'm going to go talk to a database, and then I'm going to loop
over the rows, and I'm going to turn those into HTML somehow, and I'm going to return that, and maybe I'll return a caching header too. And I feel like there's a lot of duplicative work there that might just go away completely. But this could be a while off for people. If people think serverless is crazy, they're going to think functionless is even crazier. So, yeah.
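To make the duplication concrete, here's a sketch of that status-quo handler in plain Node. The `fetchRows` argument is a hypothetical stand-in for a DynamoDB query; the point is how little of it is actually app-specific:

```javascript
// A sketch of the "status quo" handler described above: query a table,
// loop over the rows, template them into HTML, return it with a cache header.

// Pure transform: rows in, HTML string out. This is the only app-specific part.
function rowsToHtml (rows) {
  const items = rows.map(r => `<li>${r.title}</li>`).join('')
  return `<ul>${items}</ul>`
}

// Lambda-style handler shape (API Gateway proxy response).
// `fetchRows` is a made-up stand-in for the DynamoDB call.
async function handler (event, fetchRows = async () => []) {
  const rows = await fetchRows()
  return {
    statusCode: 200,
    headers: {
      'content-type': 'text/html; charset=utf8',
      'cache-control': 'max-age=60'
    },
    body: rowsToHtml(rows)
  }
}
```

Everything except `rowsToHtml` is plumbing, which is exactly the part a direct, functionless integration could absorb.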
Yeah, I mean, I love using direct integrations. When I write AppSync APIs, I rarely use Lambda functions. Most of the time it's AppSync to DynamoDB directly. It's faster, cheaper. Yeah.
But I'm not a fan of the term functionless because I feel it puts the focus on the wrong thing. It puts the focus on removing lambda functions as opposed to, well, have a function if you need to. It's more about what's best for your use case as opposed to trying to make it more ideological. So I'm not a big fan of the term, but the approach, absolutely. I think the approach is definitely the right one to take.
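For anyone who hasn't seen one, a direct AppSync-to-DynamoDB integration looks roughly like this. It's a sketch in AppSync's JavaScript resolver runtime (APPSYNC_JS, which runs inside AppSync itself, not in Lambda or Node); the `getPost` field and `id` key are made-up names:

```javascript
// AppSync JavaScript resolver wiring a GraphQL field straight to a
// DynamoDB data source. No Lambda anywhere in the path.
import { util } from '@aws-appsync/utils'

export function request (ctx) {
  // Translate the GraphQL arguments into a DynamoDB GetItem request
  return {
    operation: 'GetItem',
    key: util.dynamodb.toMapValues({ id: ctx.args.id })
  }
}

export function response (ctx) {
  // Hand the item straight back to the GraphQL layer
  return ctx.result
}
```

The `request` function maps arguments onto a DynamoDB operation and `response` maps the result back, which is where the faster-and-cheaper comes from.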
But if serverless is still new, this is way out there. Like we're way in the future now, but it does feel like there's a there waiting to happen at some point. Yeah.
So if, like you said, most of your customers are doing the same thing, iterating over some data they get from DynamoDB, what about some kind of framework that sits on top of Begin or something like that, that basically just does that? You provide a small transformation function that transforms individual rows from DynamoDB, and that's the only thing you ship?
Yeah, maybe. The fun thing is we've been exploring the web component space a bunch lately, and one of the devs on our team, Ryan Bethel, did an experiment where he took our web component renderer and he put it in Wasm. And he's using QuickJS, same as LLRT.
And it works. And he's managed to port this now to every backend runtime. So we've got it running in Ruby and WordPress and Python. So you can write a web component, like class MyElement extends HTMLElement,
and you can run it in WordPress and it will return the HTML for it, server-side rendered. And if you want, you can then run customElements.define on the client and hydrate and have your client JavaScript. So it's like isomorphic, but backend agnostic.
And I don't know what any of this means; it's the first time I've seen it. This is something not even React can do. Like, you know, you usually run a build step before you put your React code somewhere to talk to a Ruby or a Python. But now, with Wasm, we have the ability for Wasm to execute JavaScript directly. And that's new and unexplored and interesting. So the first thing we did was we were handing it JavaScript to generate HTML.
But very quickly we realized, oh, we can't just hand it JavaScript and render HTML. We've got to hand it JavaScript and state to render HTML, like the rows from Dynamo that we need to loop over, or whatever. So there's something to that, where it's like handing a Wasm runtime a renderer of some kind and some state, and then getting back an HTML string.
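That contract is small enough to sketch in a few lines of plain JavaScript: a renderer plus some state goes in, an HTML string comes out. The `todoList` renderer and its state shape are made up for illustration; a Wasm-hosted QuickJS would just be one way of running this kind of pure function inside another language's runtime:

```javascript
// The core contract: give a render function some state, get back a
// server-rendered HTML string. Pure code like this is exactly what a
// Wasm-hosted JS engine could run from Ruby, PHP, Python, etc.

// A "renderer": state in, markup out (hypothetical example component)
function todoList ({ todos }) {
  return `<todo-list><ul>${todos.map(t => `<li>${t}</li>`).join('')}</ul></todo-list>`
}

// The host (WordPress, Rails, whatever) only needs this entry point:
// serialize some state, call the renderer, get a string back.
function ssr (render, state) {
  return render(state)
}
```

So `ssr(todoList, { todos: ['milk', 'eggs'] })` returns a plain HTML string the host language can drop into its own template.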
I'm pretty excited about this. This means you could have a design system that works on your blog that's WordPress, but also you use in your product that's running Java or .NET or whatever. So JavaScript really is starting to run everywhere and become the glue layer. And maybe that's the direct integration. I don't know. I don't know. And I look at EventBridge and AppSync and
and the stuff going on with Step Functions. And there's a whole bunch of use cases there for just going direct. There's no JavaScript involved; we're just writing infra code, effectively, to create the glue. So yeah, this is an exciting time. So, to take it back a notch: web components. A lot of people who listen to this are probably not front-end developers. What are web components, and how do they differ from, say, single-page applications?
Yeah, sure. So web components are a way to create custom elements. So instead of the built-in form element, you could have my-form, and you can extend it with whatever events and HTML you want. And so web components are just a way to extend the built-in browser elements.
And that's exciting, because typically this has been done with framework code in the past, like React or Angular or Vue or Svelte. But the problem with React or Angular or Vue or Svelte is that you're writing code for their abstraction, and then you're transpiling it into JavaScript, and then you're executing that JavaScript to get HTML. Whereas with web components, you extend HTMLElement.
And then it runs in the browser and you have a new element. That's it. There's no middle step, there's no transpiling; it's just built in. It's a platform-native way to extend HTML. So in that case, how do you tell the browser what to do with my-element?
Yeah, there's a call you can make, it's the registry: customElements.define. And you pass it a class and the name of the element that you want to run. Now, the funny thing is, it's a progressive enhancement step, and it's optional. You don't need to run that if you're not listening to anything. So a good example would be, I might have a header on my website, and it's got a bunch of links.
That header probably doesn't need client-side JavaScript. And so you could make that into a custom element, like, you know, my-header, and my-header could have a bunch of anchor tags. But it doesn't need client JavaScript, so you don't need to run anything else. Just pop that in; it's good to go. And the reason you want that is it's a little more semantic, it's easier to scope your styles, and you can opt into the Shadow DOM and completely encapsulate it if you want, or not. And,
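As a concrete sketch of that header example (the `my-header` name is hypothetical, and the base-class guard is only there so the same file can load outside a browser):

```javascript
// A minimal custom element like the <my-header> described above.
// In a browser, HTMLElement is the real thing; the fallback class
// lets this file also load in Node for SSR experiments.
const Base = globalThis.HTMLElement ?? class {}

class MyHeader extends Base {
  connectedCallback () {
    // Only runs in a browser, and only if the element was registered below
    this.querySelector('a')?.addEventListener('click', () => {
      console.log('nav link clicked')
    })
  }
}

// Progressive enhancement: registering is optional. If the header is just
// links, skip this line and the server-rendered <my-header> markup still
// renders fine as an unknown-but-valid element.
if (globalThis.customElements) {
  customElements.define('my-header', MyHeader)
}
```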
Yeah, it's native to the browser. In a way, it kind of sounds like CloudFormation providers. Yeah, I can see a fractally world where all this is very similar stuff. And one of the weirder ideas that we've had lately was like, well, what if we had some custom elements that didn't have UI? Like, what if you had a custom element called WebSocket?
And one of the attributes was the endpoint. And so it took care of all the WebSocket stuff, but all you have to know as a user is how to write the WebSocket tag. Maybe I'm dating myself here, but Flash, well, Flex, used to have a lot of these kinds of components, where you would have data components that you wrote in a language called MXML. It's a really similar concept, actually.
Yeah, time is a flat circle. We just keep coming right back to the same ideas. It's a shame, though. Flash was actually a pretty good technology. There were some problems with it, but you were able to do so much more with Flash, and then suddenly you weren't, because Apple killed it. Yeah, that was a dramatic moment. It was probably the biggest industry rug pull to date, and it screwed a ton of people's careers over, and
It made a lot of consultants money, though, for sure, because everybody had to rewrite their Flash app into an iOS and Android app. So that became a whole thing. It's a cautionary tale for the web. You got to watch out for proprietary abstractions. I sometimes think React is going to learn that lesson the hard way.
I've got another question, I guess, around providing some kind of higher-level abstraction. So you talked about how potentially you can do this for a lot of the, I guess, web components kind of work. But we were talking earlier about how we can maybe introduce something like that to Begin. Have you looked at what Jeremy Daly is doing with Ampt?
Because it feels like what they're doing is kind of trying to provide that higher level of abstraction in terms of, you know, I saw some examples of writing AI chatbots kind of thing that provide some really nice abstraction so that you can just do your thing very quickly, connect a few things, just say what you want. And it creates all of the resources behind the scenes.
Yeah, I think, if we were to draw a circle around the afflictions that serverless people share, it's that we all live in the future. And Jeremy Daly might even be way outside that circle; like, he's living way, way in the future. So he and Emrah are working on Ampt, and the idea there is infra from code. At least I think they're still saying that; I don't know if they're still saying that. But the idea is: okay, we have infra as code. That's good. It's very explicit. You know, like if I write,
you know, my API Gateway stuff into CloudFormation, I'm going to get an API Gateway. Their assertion is that my code already has that information. So if I have an API in my code, figure that out for me and create the API Gateway. And they have taken this pretty far. The ideas they are exploring, I think, are pretty brilliant.
You just write code, and you give them the code, and they'll figure out whether it should be running in Lambda or Fargate, whether it should be an API Gateway or a queue. They'll set up a DynamoDB table for you out of the gate, so you just have to write code that reads and writes from the table. It's really clever. And for getting from zero to 60, this is probably going to be your fastest path into the cloud. Where it gets murky,
and I'm certain they're not entirely sure either, is when you start drawing outside the lines of their abstractions. You're going to need to drop into either CloudFormation directly or figure that out yourself, which is going to be a pretty cold shower. I also don't know how you would infer some things that are better left explicit. So, as an example, if I needed more than one database table and I had one called users, and then I have one called, like, I don't know,
users-addresses, but I typoed it and I accidentally put users-address: what happens? Do I get two tables? Do you drop a table? Does it fail and warn me? Those are pretty important questions to answer. And the other one would be, like, how does it know what memory to use, and what disk and CPU and all that? You can pick pretty smart defaults, but as apps scale,
they almost inevitably blow out their limits. And so you almost inevitably need to make a support request, like, I need more whatever, I need more DNS records. So how it deals with elasticity and understands explicit quotas and limits is another challenge. But I don't think these are impossible challenges, and I really admire that they've spun this out of the Serverless Framework and they're trying to answer these questions. I mean, this is bleeding-edge stuff here. And yeah, it
really is a complete refutation of the CloudFormation approach, which is very explicit.
If you have any infra concerns with CloudFormation, you're going to get that in that document. But boy, that document is going to get big. And it's not going to be a human-readable artifact; it's going to be a multi-person job to understand and maintain that artifact. It's a part of your code base. And what they're doing is really making all that disappear, so you're just focused on your code, which I like a lot, and which I feel is very much in the spirit of serverless.
Yeah, but I don't know, it feels maybe too drastically different from what people are used to. It may be quite hard to sell, especially given that, like you said, there are a lot of uncertainties about what happens in certain situations. If I had a typo, or, say, accidentally deleted a line of code, does this drop my database? And
things like that. Which I think is going to make it quite hard to convince serious enterprise players, at least, to try it out.
And also, I guess I'm slightly dubious of the really use-case-centric approach, because, like I said, right now everyone's building chatbots; tomorrow, nobody's going to be building chatbots. So suddenly all of these abstractions that make building that one thing really easy just become, I guess, irrelevant really quickly. It almost feels too seasonal, too fashionable. Yeah.
So this is the challenge with the PaaS thing. I mean, we've been struggling with this with Begin for years, where sometimes we're like, oh, we need to be more front-end. Other times we're like, maybe that's not that important. Or: which front end? When I first started Begin, I was told in no uncertain terms that Gatsby was the thing.
And people might not remember Gatsby now, because it's been gone for a couple of years. So, like, I don't know where the abstractions truly are. And when the rubber hits the road and you build out your application, if your company is lucky enough to be successful, it's probably going to be a mess no matter what. You're going to be kind of serverless, kind of not, and you're going to have a service over here and one over in this other cloud, and it's going to be glued together with duct tape and wishes. And,
yeah, that's kind of the fun of it, though. I admire that they're taking a real, true swing at this philosophy, because I think they're pretty unique in this way. Almost everybody else is either going to pretend that AI can generate the code or be mega explicit. And our approach is explicit but terse. So maybe we're like a Python of
infra-as-code: write as little as possible, but be explicit and be declarative. Whereas the CDK thing kind of happened, and that's turning out to be extremely verbose and brittle. So this is somewhere in the middle of that. It's not verbose, but it's hard to know if it's going to scale out, and in which direction. But I bet they have cool answers for this. Like, I know he wouldn't drop my database table.
But what does it do? Does it fail? Like, you know, give me a meaningful error? Like, hey, I thought you meant user addresses. So I did that. There's a million ways to solve that and another million ways that could go wrong. So it'll be interesting to see how that all plays out.
Yeah, there's also Winglang as well. Yeah. Well, there's Darklang, there's Winglang. I think Darklang just pivoted. I saw an announcement recently. I forget what it said now, but I think they did. Yeah, that's it. I didn't even know; I guessed. I mean, of course they did. Probably. Yeah.
So, speaking about Begin: for you, what's next? Because I think when we spoke before, you said you're quite happy with AWS Lite; you're not going to make too many changes. So are you going to be focusing on Enhance going forward? Yeah, we're really... Well, AWS Lite is still getting lots of updates.
We just redid the retry logic a little while ago and added jitter, I think it was. It's pretty boring, but it's stuff that you need. And then, oh, more credential providers are coming. We didn't do... I can't remember the name of the EC2 one. We really just cared about Lambda, and so what we shipped with was for that, but we're adding more credential providers to AWS Lite.
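For anyone curious what "added jitter" means in practice: retry logic for AWS calls typically uses exponential backoff with full jitter, where each retry waits a random time between zero and an exponentially growing cap. This is a sketch of the general pattern, not aws-lite's actual implementation:

```javascript
// Exponential backoff with "full jitter": the delay ceiling doubles each
// attempt (capped), and the actual wait is uniform between 0 and that
// ceiling, so retrying clients don't stampede the service in lockstep.
function backoffWithJitter (attempt, baseMs = 100, capMs = 10000) {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt)
  return Math.random() * ceiling
}

// Retry wrapper: run fn, sleep with jitter between failed attempts,
// rethrow the last error if every attempt fails.
async function withRetries (fn, maxAttempts = 3) {
  let lastErr
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn()
    }
    catch (err) {
      lastErr = err
      await new Promise(r => setTimeout(r, backoffWithJitter(attempt)))
    }
  }
  throw lastErr
}
```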
I feel like there are other updates, but I can't remember. Enhance is definitely our bigger focus. We know that where all the growth is going to come from in the cloud is front-end web developers moving their workloads into places like Amazon. And we feel that web components are the right kind of place for them to land. And,
yeah, we're pretty interested in this Wasm thing. It's really only been the last few weeks that we've been working on that. And I feel like front end kind of got out ahead of itself and became very Node-focused. And meanwhile, there are still millions of developers using Python and PHP and Ruby, and they kind of got left behind. Or worse, you've got to
pre-render or pre-build your application and then talk to your Ruby or whatever over an API, which is a nice way of saying you're going to have skeleton screens and spinners all over your UI, which is a bad user experience. And so being able to server-render JavaScript web components inside those environments is a pretty compelling advance. We just got our WordPress stuff working,
which is crazy, by the way. That's a whole world. I had no idea WordPress had grown as much as it did. But it has a thing called the Block Editor and its own way of putting components together into a page. So we fully support that now with Enhance. And I think Rails is our next one, Rails or Django. And we've started looking at Spring Framework, which I can't believe is still a thing,
and kind of dates me, but Spring has a templating solution called Thymeleaf, and we're looking to plug Enhance into that. So you would ostensibly be able to write a design system with web components and then reuse it across all of these different backend runtimes. And that's pretty compelling. I think 99% of AWS Lambda users are running either Node or Python.
I think just about all the survey results I've seen are very, very heavily skewed towards Python and JavaScript. Yeah. Yeah. So we definitely have JavaScript covered. We're happy with that; that's fine and works well. So yeah, Python will probably be up there too. Although we're looking at these other server environments for running Enhance, because there are still a lot of workloads that aren't serverless.
And actually, as Lambda improves, I'm sure we're going to be able to run WordPress in a Lambda function at some point anyways. So not that I think that's a good idea, but hey, why not?
Yeah, my blog actually runs on WordPress, but it's using this thing called Shifter. So they give you a WordPress instance, like a Docker instance, while you're writing. And when you're done, you basically have a deploy step that compiles your WordPress site into static. A static site? Yeah. That's cool.
And so you get all the writing tools that come with WordPress, but you get the performance of a static site. So yeah, it kind of gives you, well, some of the best of both worlds, minus some of the plugins that rely on the runtime API calls and things like that.
Yeah, there's this primitive in web components called Shadow DOM, and it sounds a lot cooler than it is. So the idea of Shadow DOM is it gives you like a document within the document and it's completely isolated. So styles can't leak out and code can't leak out. And I don't personally have a lot of use cases for that when I'm building an application.
But as soon as I saw these WordPress installs with all these plugins running, I was like, oh, that's what the Shadow DOM is for. Because it's the Wild West: you can just install anything and it could do anything. And it seems to be the case that people do that. Yeah, bad idea. Anyway, the Shadow DOM would be a good solution for that, because you'd be able to isolate those pieces.
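A minimal sketch of that isolation, with a hypothetical `isolated-widget` element (the browser-only APIs are guarded so the file also parses server-side):

```javascript
// Sketch of the isolation the Shadow DOM gives a plugin-style widget:
// styles and DOM inside the shadow root can't bleed into the host page,
// and page CSS can't reach in.
const Base = globalThis.HTMLElement ?? class {}

class IsolatedWidget extends Base {
  constructor () {
    super()
    if (this.attachShadow) {
      // 'open' mode still lets the page inspect el.shadowRoot;
      // 'closed' would hide even that
      const root = this.attachShadow({ mode: 'open' })
      root.innerHTML = `
        <style>p { color: red } /* scoped: never leaks into the page */</style>
        <p>Plugin content lives here</p>`
    }
  }
}

if (globalThis.customElements) {
  customElements.define('isolated-widget', IsolatedWidget)
}
```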
Okay, so I think that's everything I've got. Thank you so much, Brian, for taking the time to talk to us today. Is there anything that you want to kind of leave us with before we go? Yeah, check out aws-lite.org and join our Discord, and if you find any bugs, please let me know, or bug me on Twitter or Mastodon. I'm more on Mastodon these days, and I'll share the link for that. It's Brian LeRoux at indieweb.social.
Interesting. Mastodon, last time I tried it, just feels really quiet. There doesn't seem to be much activity there. How are you finding it? You've got to follow lots of people. I love it. I mean, there are no ads, so it is different. You only see what you follow, but you can follow hashtags. And there's actually an okay little AWS community growing there. Okay.
They just hit 15 million users, so it's still very small compared to Twitter, or LinkedIn, which weirdly is pretty popular these days. But yeah, it's cool. I like it. It's less noisy, and I think that's part of the attraction. And you only see what you want to see. There's no algorithmic feed or anything, so you have to follow lots of hashtags and stuff to really
get the wheels turning. But once you get it going, it is quite lovely, because it's a lot less markety. It's more people sharing neat ideas, which actually reminded me a lot of early Twitter. Right. Yeah. I guess there are also fewer crazy people shouting "no serverless" at you.
Yeah, there's not a lot of that. Actually, it does happen. Yeah. There are still some haters, for sure. Everywhere. Yes. Yeah. Okay. Yeah, again, thanks so much, Brian. I guess I might be seeing you on Mastodon soon. Yeah, cool. Thanks, Yan. Take care, guys. Okay, bye-bye.
So that's it for another episode of Real World Serverless. To access the show notes, please go to realworldserverless.com. If you want to learn how to build production-ready serverless applications, please check out my upcoming courses at productionreadyserverless.com. And I'll see you guys next time.