#104: Baseline, is this new serverless development framework better than Amplify?

2024/7/16

Real World Serverless with theburningmonk

People
Thomas Nixon
Topics
Thomas Nixon: I'm the technical co-founder of Baseline, with broad software development experience, having worked as everything from a contractor to a consultant. Baseline is a recently open-sourced framework that aims to help and grow the serverless community and push serverless technology forward. With Baseline, we hope to solve many of the challenges encountered when adopting serverless, such as environment setup, tool selection, code deployment, and cost control. Over the past six years we have accumulated a lot of experience and baked it into Baseline, making it an out-of-the-box solution. We want developers to be able to get started easily and quickly build reliable serverless applications. We use a combination of Lambda and DynamoDB to build monolith-like JSON APIs, with an entity-based composite architecture to reduce cold starts, shrink package sizes, and improve reliability. We also provide out-of-the-box authentication and middleware, along with CloudFront and S3 integration to simplify static site deployment. We use PNPM and ESBuild to improve development speed, and the Serverless Framework with the serverless-offline plugin for local development. We also provide a simple way to manage environment variables, keeping developers out of the AWS console as much as possible. We hope Baseline can be a low-risk way for businesses to modernize onto the cloud. Jan: In my conversation with Thomas, I learned that the Baseline framework aims to simplify the development and deployment of serverless applications. It offers out-of-the-box features such as authentication and database integration, and lets developers build functions with familiar Express.js-style code. Baseline also addresses common problems in serverless development, such as cold starts and environment configuration. Compared with Amplify, Baseline's biggest advantage is that it is fully open source: developers have complete control over the code and the infrastructure. That makes Baseline more flexible and customizable, and easier to debug and maintain. Baseline does have some limitations, such as no built-in testing yet and the limits of local emulation. Even so, it is a promising serverless development framework with the potential to lower the barrier to entry for serverless and help developers build applications more efficiently.

Chapters
Thomas Nixon, CTO of Baseline, discusses the challenges of serverless adoption and introduces Baseline.js, a new framework designed to simplify serverless development. He highlights common patterns in successful and unsuccessful serverless projects, emphasizing the importance of reliable deployments and a clear understanding of the serverless ecosystem.
  • Baseline.js is an open-source framework designed to simplify serverless development.
  • Common challenges in serverless adoption include connecting to databases, scaling, environment variables, and local development.
  • Baseline.js aims to provide a baseline project structure and tooling to address these challenges.
  • The framework incorporates learnings from six years of serverless development experience.
  • Key features include support for JSON APIs, simplified authentication, and optimized deployments.

Transcript

Support for this episode comes from HookDeck. Level up your event-driven architecture with this fully serverless event gateway. To learn more, go to hookdeck.com slash theburningmonk. Hi, welcome back to another episode of Real World Serverless. Today, I'm joined by Thomas Nixon, who is the technical co-founder at Baseline, out of Australia. Hey, man, good to see you. Hi, Jan. Thanks for having me on.

I was talking to your co-founder Ken a little while back about some of the stuff that you guys have been working on with serverless. And I know from talking to others previously that Australia has got this hotbed of serverless-focused startups, and there's a lot of activity happening over there. I think some of the first ServerlessDays conferences, sorry, ServerlessConf, was happening over there as well.

And Ken mentioned that before you guys started the whole Baseline project, you were working together at Devica and you were doing a lot of work with various clients, helping them build serverless solutions in Australia. And you came across many interesting challenges. So I guess before we get into it, do you want to just quickly introduce yourself and talk about what you do?

Yeah, for sure. Thanks. So I'm Thomas, the technical co-founder of Baseline. I have a background in software development. I've done everything from contracting and product work to consulting. And I've seen it all. I've seen projects that succeed, projects that fail, projects that run a bit over.

I've been the junior developer, I've been the senior developer, I've been the project lead, and I've had a lot of experience in all of that realm. And Baseline is kind of like a culmination of all that experience and what we have seen and developed. So Baseline is a recently open-sourced framework, and we're hoping to help and add to the serverless community and really drive forward what we can do with serverless.

Yeah, so I guess talking about some of the things that you mentioned there, have you identified some common patterns for companies that succeed, and common traits of companies that don't succeed, or fail, when it comes to adopting serverless?

Yeah. So even for us, the adoption of serverless was a whole journey in itself. Personally as well, I came from a company, before I worked with Ken, where there was a lot of older architecture, working with

containers or VMs and really large-scale deployments that were kind of awkward to work with. And then coming and working at an agency, and building a lot of serverless-first applications, there were a lot of challenges and changes in the way you think. I like to think of it as there were a lot of fears or...

unknowns that you had. There's a very emotional difference in approaching serverless, because you suddenly went from knowing the whole environment around you, being able to tweak it, change it, pull the different levers, to this world of serverless where you don't quite know how to make it run faster, or scale it more, or even how different things connect. So huge parts of adoption were:

How do I get it working? It sounds simple enough, but a hello world application isn't enough to go, hey, I can push this to production now. There's a lot of stuff that goes in between, like connecting to the database, making sure it scales, environment variables, running it locally, all that kind of stuff. And we found...

It was really tricky to get things set up, and we found a lot of projects would often have different structures, different tooling, different versions of things, and it made it very difficult to push stuff out reliably. And if you can't push stuff out reliably, you can't trust your pipeline, you can't trust the application. You don't want to be waking up at 3am and dealing with an outage or anything like that. So we really wanted to lean into serverless, but

we found that just doing serverless alone wasn't enough to get the benefits of serverless. We made a lot of mistakes and had a lot of learnings along the way, where just using the things that sound like they're the right things to use doesn't always lead to the result that you're doing it right. We stumbled on a number of things, from auth to database scaling to

packaging Lambdas. You find out about that 30 megabyte limit, or is it 50 megabytes now, and you hit it, and it's not a good time. Even just how developers run their code locally, like do you run it locally or in the cloud? All these questions keep coming up, and there are endless questions, and they're very hard to answer if you don't have the experience, and

the environment's changing really quickly. So keeping up with which tools to use, whether it's CDK or SAM or the Serverless Framework, can be really challenging. So we really wanted to create a baseline of what a project was, and that's kind of where Baseline came in. We wanted everything connected, ready to go, out of the box, and to take all of the learnings that we've figured out over the past six years of building serverless applications

and bake them into something we can reuse on every single project. And so we made mistakes, like the price of using Secrets Manager over SSM,

and things like that, where it was like, hey, we're spending $70 a month on secrets that we don't even use sometimes. Or even just how to deploy the code in a nice structure. So there's a lot of technical hurdles there. Once you get it all together, it works fantastically, and we had great success, but there were a lot of hurdles in that adoption of serverless, because it's such a wide ecosystem.
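To make the Secrets Manager versus SSM cost point concrete, here's a minimal sketch of the kind of switch being described: reading configuration from SSM Parameter Store (standard parameters carry no per-parameter monthly charge) instead of Secrets Manager (which bills per secret per month). The parameter naming scheme is hypothetical, not Baseline's actual one.

```typescript
// Hypothetical sketch: reading app config from SSM Parameter Store instead
// of Secrets Manager. Standard SSM parameters have no per-parameter monthly
// fee, whereas Secrets Manager charges per secret per month.
import { SSMClient, GetParameterCommand } from "@aws-sdk/client-ssm";

const ssm = new SSMClient({});

export async function getConfigValue(name: string): Promise<string> {
  const result = await ssm.send(
    new GetParameterCommand({
      Name: `/my-app/staging/${name}`, // illustrative naming convention
      WithDecryption: true, // also handles SecureString parameters
    })
  );
  return result.Parameter?.Value ?? "";
}
```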

I don't think anyone has complete answers to anything at the moment, either. There's no go-to for everything. You have lots of different solutions, but you have to figure out how they all tie together. And trial and error is the biggest, most time-consuming part of development. And we were doing that with a lot of different things. So, coming into serverless,

it was fantastic to start to forget about things like having to maintain a server, or even whether the database scales with the compute, using Dynamo and Lambda, or even SQL injection at that point. It became...

really easy to develop things, but you had to deal with new problems. And dealing with those new problems, they're not always solved in a way that makes sense yet, or you've used the wrong tool for the wrong job. So we learned a lot over this process, and I think things have got better over the past six years as well. Like the whole serverless ecosystem has pushed forward so much, and the tooling has got so much better.

But it still has a lot of those core questions around how do I get started, how do I do this at scale, and how do I do this reliably and trust what I can build. Yes, even as you mentioned there, Lambda's upload limit for its artifacts depends on how you do it: if you're calling the API, it's 50 meg; if you're uploading via S3, it's 250; if you're using container images, it's 10 gig. And Lambda itself has, over the years, gotten more and more

config settings, things that you can use to address specific problems. But again, sometimes they're not advertised as such. Things like Lambda Layers, for example: some of the advocacy teams in AWS often push you to use Lambda Layers to share packages, for sharing common

code between projects, between functions, as opposed to using a package manager like NPM. And I found that Lambda Layers is oftentimes one of those footguns that gets passed around. People get told to use it and then they realize, oh, it's actually really bad for sharing code.

And yeah, so I totally understand, especially when, like I said, serverless is such a big ecosystem. It's not just about Lambda. There are also all these services you often use together. You have to learn how each one of them works individually, but also how they interface with Lambda as well, based on the

behavior you get from the event source mapping, the concurrency controls, the error handling behavior. So there are lots of things you have to learn. But I do think that once you learn them, you can be really, really productive and do things really quickly. But getting to the point where you are proficient and making the right choices more often than not is not easy. And that's something that I've been

focusing on for a long time to sort of address, but more from the education angle. You know, I've got workshops, I've got courses, I teach people how to do all these things. And I'm quite vocal on social media. But it's not easy for someone who's just starting out and trying to figure out all of these things. So I quite like the fact that, you know, what you guys are doing, but also a few other things, are,

I guess, similar attempts to just package things up in a way that's more, I guess, more user-friendly, so that it makes smart choices. Things like what Jeremy Daly is doing with Ampt, and what I'm seeing with Winglang as well, providing a higher-level vocabulary so that people can get those best practices out of the box, as opposed to having to learn them and implement them every single time they want to do something.

And so I guess in that case, back to my previous question, have you seen any sort of common patterns that help you succeed with serverless adoption? - Yeah, for sure. I like your point as well about how we're trying to make it easier to get started. Developers are really smart people. They can solve problems, they can figure things out really quickly.

I find when they say you have to go do a six-month AWS course, you don't learn as quickly as if you have something you can poke and prod. You can learn really quickly if you have an example of how to do something, and then you can start modifying it to how you need it. And so it cuts through all those defaults and things like that. But we also need great content as well.

So yeah, the common patterns we see is using Lambda and DynamoDB together. That's been a really successful thing for us. Also, we tried GraphQL. We really tried GraphQL. Fantastic technology, especially when used for the right thing. We ended up getting rid of it.

We found that a JSON API was far more effective. Our focus really is on startups, getting up and running and developing features quickly, rather than, say, large enterprise systems or gluing together heaps of APIs. So we've gone for a pattern of JSON APIs, but then we've gone and built it in a way that it still kind of feels like you're working with a monolith.

Because we really wanted to tap into the existing hiring market. So anyone who knows Express or React or even the serverless AWS stuff, we wanted them to be able to leverage that knowledge and be able to work successfully with Baseline without having to, say, learn a whole new thing.

So if you work on it, it feels very much like just working on an Express server. We've taken care of the abstractions. We've made it really simple to add to it. And you're just building routes and hooking into things that exist already. The real magic sort of happens when you package it. So we split it up into the different Lambdas based on the entities, essentially. And we found that's been a really effective pattern, because you get

fewer cold starts, the code's more localized, and the package sizes are smaller and stay smaller. And then anything that has a dependency, like integrations with, say, Auth0 or Stripe, remains in only those endpoints that rely on it. So we've really been using a composite sort of architecture on the backend there.
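A minimal sketch of what this entity-based split might look like in a Serverless Framework `serverless.ts`, assuming hypothetical handler paths and entity names; Baseline's actual layout may differ. Each entity gets its own Lambda serving its group of routes, so heavy dependencies stay isolated to the functions that need them.

```typescript
// Hypothetical serverless.ts fragment illustrating the entity-based split:
// one Lambda per entity, each serving a group of related routes.
import type { AWS } from "@serverless/typescript";

const functions: AWS["functions"] = {
  user: {
    handler: "src/api/user.handler",
    events: [{ http: { method: "any", path: "user/{proxy+}" } }],
  },
  task: {
    handler: "src/api/task.handler",
    events: [{ http: { method: "any", path: "task/{proxy+}" } }],
  },
  // A billing entity that depends on, say, a payment SDK stays isolated here,
  // so its dependency weight never affects the other functions' cold starts.
  billing: {
    handler: "src/api/billing.handler",
    events: [{ http: { method: "any", path: "billing/{proxy+}" } }],
  },
};

export default functions;
```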

And we found that very useful for building things very quickly. If you have specific endpoints you need to add, the developers can quickly add them and keep it simple. And we've wrapped things like auth, so it's already out of the box. So Cognito is set up, you have the current user in the request payload, you can reference it easily, and you can check if they're an admin with middleware. And we've made that stuff really easy to add to as well.

That's been a great pattern for us, and we do that on every project. And then using CloudFront and S3 for static sites, for SPAs, for React, has been really effective. Getting that config right, though, took a long time. I don't know if this is something everyone else has struggled with, but it took us years to get the CloudFormation setup right.

It was just very difficult to be cost-effective, scalable, cached, and then work really well with a React and S3 setup. I noticed there were a lot of conflicting ways to do this sort of thing. So using that has been great for us. You can have 100,000 visits a month and it'll cost you, what, 58 cents a month, because of the Route 53 default cost.

So we found that very effective. And then we flipped between different technologies to try them out and see what was effective. So originally we started off using Yarn for package management. We stripped it all the way back to NPM after Yarn went in a bit of a different direction with Yarn 2 and 3. And now we've moved to PNPM, which has been really effective.

The speed is just fantastic, as well as using ESBuild. ESBuild has made hot reloads faster, and building the code, shipping it, and running it just really effective. And since we are using the Serverless Framework and the serverless-offline plugin, running it locally has been a lifesaver.
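For illustration, here's roughly how bundling handlers with esbuild's JS API looks; in practice this is usually wired up through a plugin such as serverless-esbuild, and the options shown are assumptions rather than Baseline's actual configuration.

```typescript
// Hypothetical sketch of bundling Lambda handlers with esbuild. Bundling
// inlines only the code each handler actually imports, which keeps packages
// small and rebuilds fast. Run as an ES module (for top-level await).
import { build } from "esbuild";

await build({
  entryPoints: ["src/api/user.ts", "src/api/task.ts"],
  bundle: true,
  platform: "node",
  target: "node20",
  format: "cjs",
  outdir: "dist",
  sourcemap: true,
  external: ["@aws-sdk/*"], // the SDK is already available in the Lambda runtime
});
```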

We found it was pretty hard to get started with running things locally, just in like a whole application sense. I know you can invoke things individually, but most of the time you want to test a whole application and you don't always want to deploy it into a cloud environment, especially if you're the lead on the project and you're doing code review, you're switching between different PRs with different infrastructure, pushing that into your own personal environment.

AWS account really doesn't work at that level because there might be too many changes to update that infrastructure. So running it locally, really quick feedback loops, be able to change it quickly and have a fairly consistent environment to the cloud has been a game changer for us.

Originally we didn't have that as much, and we were invoking things individually, and a lot of the time you'd see stuff that you could have caught earlier if you had just run it in the full application. So having that work out of the box, and having a structure for that, has been really important for us.

Okay. So I guess in that case, do you have some, because it's kind of hard to visualize just by listening to it. So do you have some example that you can show off with Baseline, maybe bootstrapping a new project and adding some of these new endpoints to it? Yeah, for sure. All right. Let's do a live demo and see how well this goes.

And for folks who are listening on Spotify or one of the podcast platforms, please do check out the link in the description below to the full YouTube video so you can see the demo, which will be a lot easier on video compared to audio. All right.

Okay, so we've recently restructured Baseline with the open sourcing to make it really easy to get started. So we wanted it to feel like Create React App, except instead of just the front end, we wanted it to be the whole application. So all we have to do to get started is run this command, we'll call it demo, and this will basically get you the latest version of Baseline.

And with Baseline, we give you all of the code. So there's nothing hiding. There's no black boxes. It's all deployed into your own AWS account. So you can make the most of your AWS account and have full control over changing anything as well. So while we have an opinion about how stuff is set up, you can still go ahead and change any part of it. So we can now just open up VS Code. And here's our...

new Baseline project. So we have a few things we have to do once we get Baseline. We'll just open up the readme and jump down to the setup. So we've gone through a couple of steps. So using PNPM, we just install the packages, and after that, just run the setup.

So this setup basically lets you add an app name and give it a region. So we'll just use AP Southeast.

We're often developing a lot of applications, and we never want to use the AWS default profile. So Baseline has profiles built in. We've added an abstraction everywhere, so it's always using the app name for the profile, and you can switch between projects really easily. And because the Serverless Framework doesn't support SSO,

we made a little command to let us put the credentials in and abstract that. You could still add the profile with the AWS configure command, but we found this was a lot easier.
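A hedged sketch of what such a profile abstraction could look like: pasted SSO credentials are stored under a profile named after the app via the AWS CLI, and later commands run against that profile. The app name, environment variable names, and exact flags below are illustrative, not Baseline's actual implementation.

```typescript
// Hypothetical sketch of a per-app profile abstraction: every command runs
// against an AWS profile named after the app, so projects never collide on
// the default profile.
import { spawnSync } from "node:child_process";

const appName = "demo"; // assumed to come from the project's setup step

// Store pasted SSO credentials (key, secret, session token) under the profile.
function setProfileValue(key: string, value: string): void {
  spawnSync("aws", ["configure", "set", key, value, "--profile", appName], {
    stdio: "inherit",
  });
}

setProfileValue("aws_access_key_id", process.env.DEMO_ACCESS_KEY_ID ?? "");
setProfileValue("aws_secret_access_key", process.env.DEMO_SECRET_ACCESS_KEY ?? "");
setProfileValue("aws_session_token", process.env.DEMO_SESSION_TOKEN ?? "");

// Later commands then target that profile, for example:
spawnSync("serverless", ["deploy", "--stage", "staging", "--aws-profile", appName], {
  stdio: "inherit",
});
```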

And since we've got this all set up now... So, sorry, does that mean that you have to set up the profile first? Or does this step then create a profile for you that you can use with the AWS CLI and everything else? That's a good question. So we're assuming we've already set that up. Unfortunately, setting up the IAM stuff is still not as seamless as I would like it to be. It's...

You can still just use an IAM role and use your admin credentials and that kind of thing. We use SSO internally, so since it has the third key, the session token, we paste that in, essentially. Right. It's not as good as I would like it to be, but I'm hoping we'll get some suggestions over time on how we could do this better, because we'd love to see it be more robust and support a more seamless getting-started process.

Okay, so in this case, you said that you always default to the app name. So that means that when I set it up with my AWS CLI, when I'm using single sign-on, I need to make sure that the profile name I'm using is the same as the app name I'm going to be using in Baseline. Yeah, yeah. And in Baseline, everything you do already has that profile flag supported. Okay, got it.

And then, so now that we've... I've already done this, so it's already hiding there in the background, so I don't have to show my keys to anyone. We can just do the deploy now. And this kicks off a deploy for an API, an admin portal, and a website as well. So...

So this says deploy staging. Does that mean that it's deploying to a Serverless stage called staging? Yep. So the stacks are prefixed with that staging. So out of the box, we have local as an environment, and then staging and production as environments as well. Because revisiting that later in the application development process gets a bit sticky,

and it can be tricky to figure out where to put the names in later. So having it out of the box made things really easy, because you automatically start thinking about which stage it is.

Okay. And I guess maybe it's also worth mentioning, to anyone who hasn't seen this, that this is all wrapped around the Serverless Framework. So ultimately you have the Serverless Framework under the hood, but you're creating another layer on top of it, so that it creates the basic bootstrapping, almost like a Serverless Framework template. But I guess more than that, you've also got additional things on top of that, like supporting single sign-on and all that.

Yeah. So one way to put it would be that we focused on elegance over abstraction, glued together the best technologies that we can find to solve these things at the moment, and then made it really easy to change any part of it. With Baseline,

we give you all the code, and it's less than 6,000 lines of code, I think. So it's actually really easy to adapt any part of it to something you're already familiar with, or if you just want to start changing things, it becomes a really easy job, because it's really low-level and close to the technologies that you're using, rather than having,

say, a Baseline abstraction layer. It's actually just the code sitting there. So it's very much like a template, but we have a lot of things that we're adding on top to let you do a bit more than just a template. So this is deployed. I did cheat a little bit, because I had the CloudFront stuff already deployed, so it wouldn't take eight minutes to do.

And part of this setup process was that we wanted to keep the user out of the AWS console as much as possible. So we've tried to make it pretty simple to get started. And in doing that, if you're setting up an app for the first time, you generally need a user. So we've made it easy for developers to add users out of the box. So we'll just add myself. And this will add me to Cognito and DynamoDB.
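For context, a minimal sketch of what "add a user to Cognito and DynamoDB" could look like under the hood, using the AWS SDK v3; the pool ID variable, table name, and item shape are assumptions, not Baseline's actual implementation.

```typescript
// Hypothetical sketch: create the user in Cognito, then record them in
// DynamoDB, keyed by the Cognito "sub".
import {
  CognitoIdentityProviderClient,
  AdminCreateUserCommand,
} from "@aws-sdk/client-cognito-identity-provider";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const cognito = new CognitoIdentityProviderClient({});
const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export async function addAdminUser(email: string): Promise<void> {
  const created = await cognito.send(
    new AdminCreateUserCommand({
      UserPoolId: process.env.USER_POOL_ID, // assumed env var
      Username: email,
      UserAttributes: [{ Name: "email", Value: email }],
    })
  );

  // The Cognito sub is then used as the user's id elsewhere.
  const sub = created.User?.Attributes?.find((a) => a.Name === "sub")?.Value;

  await docClient.send(
    new PutCommand({
      TableName: process.env.ADMIN_TABLE ?? "admin", // assumed table name
      Item: { userSub: sub, email, createdAt: new Date().toISOString() },
    })
  );
}
```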

I see. And I guess that by default you're shipping Cognito, you're shipping API Gateway, you're shipping a DynamoDB table. And I guess, are you using the

post-confirmation trigger with a Lambda function to then save the user also in Dynamo? That's a very good question. Actually, something that we spent a lot of time agonizing over. We originally had the triggers, and we really worked hard on trying to make that work. We often found we had timing issues with the previous way we had it set up. So we actually cut back

to a completely different way of doing it, where we use the user sub, and we store that for admins. But for users, we just use the user sub and store against that. They're two separate things. I guess the user sub is just what you use for the user ID.

But I guess it sounds like you have the user both in Cognito, but also in DynamoDB. The first thing you said about not using triggers. So at what point do you, during the user signup process, does the user get added to DynamoDB if you're not using the triggers? Well, we use the ID as essentially, you could consider it separate, its own object. So we use the user sub to reference it everywhere else.

We have created triggers previously if an application specifically needs it out of the box to attach things to that user or if we want to keep track of specific information for that user. However, it's been a bit of an issue with making things work effectively because...

Obviously, the user sub is specific to Cognito, so it's not ideal to rely on that all the time. And then storing duplicate information risks it being out of sync. We kept trying to keep it in sync, but we had issues with deployments, where the triggers would sometimes detach

from Cognito and you'd actually have to go into the UI and change it. I'm not sure if you've ever faced this, but it came up. That particular problem

Sorry? I've had problems with triggers more from the circular reference point of view, so if I really need to reference the Cognito user pool from the trigger, then I've got a circular reference. But I've not had them suddenly disappear from Cognito. That one I've not seen. Yeah, it was a very strange problem where the trigger would just look like it was connected in the UI, but it just wouldn't run the Lambda.

So this has actually been really successful, the way we've been doing it lately. It's a lot simpler, but it isn't ideal for every single application. Some applications, you want to store a direct user record, keep it in sync and all that kind of thing, and use triggers. Whereas here, you just start creating objects where you have the user sub to reference what objects to get, or that relationship.

That works really well, because you might just store permissions against the user sub, and then you can go get the permissions or the tasks associated, and that kind of thing. It's been effective, and it's a bit simpler than, say, having a duplicated user object. I can dig a little bit deeper into that as we go, if you like.
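A small sketch of the pattern being described, assuming a hypothetical GSI on the owning user's sub: objects are simply keyed by the Cognito sub, so nothing about the user needs to be duplicated or kept in sync.

```typescript
// Hypothetical sketch: look up everything belonging to a user by querying
// objects keyed by the Cognito sub. Table, index, and key names are
// illustrative.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, QueryCommand } from "@aws-sdk/lib-dynamodb";

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export async function getTasksForUser(userSub: string) {
  const result = await docClient.send(
    new QueryCommand({
      TableName: "task",
      IndexName: "userSub-index", // assumed GSI on the owning user's sub
      KeyConditionExpression: "userSub = :sub",
      ExpressionAttributeValues: { ":sub": userSub },
    })
  );
  return result.Items ?? [];
}
```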

- Yeah, sure. Maybe, 'cause, well, I guess the bit I'm still not getting is, so are you saying that you don't have duplicate records in DynamoDB for the user? - Yeah, so we have it for the admin user. - Okay, but not for the users that sign up on the web application itself. - Yeah. So you can add that, but we don't do it by default, because we want Baseline to be flexible enough to develop into any kind of application. We didn't want to dictate everything.

So that's one of those parts where we don't know what you're going to build. It might be a CMS, it might be a marketplace or a SaaS. And that's where the user stuff starts to get interesting.

Event-driven architectures are a powerful paradigm for building large-scale systems, but they're also notoriously difficult to test, observe, and monitor in production.

With so many independent event publishers and subscribers, there are a lot of additional complexities around error handling, alerting and recovery, and making sure that you have full visibility into the entire lifecycle of an event, so that you're able to troubleshoot any problems that you encounter.

I have built many event-driven architectures on AWS, and these are just some of the recurring challenges that I face. And that's where HookDeck comes in. It's a fully serverless event gateway. There is no infrastructure for you to manage, and you can get started in just a few minutes with their command-line interface.

Compared to Amazon EventBridge, it does everything EventBridge does. It can ingest events from multiple sources, filter them, route them to different targets, and transform the event along the way, but it also offers a better developer experience, including a local development experience, and having more detailed metrics and logs to help you debug issues with delivering events, and just being able to query what events you have easily, which makes testing much simpler.

You can start the free trial of HookDeck today and support this podcast at the same time by going to hookdeck.com slash theburningmonk. Okay, okay, right. And I also guess the other question is, why use the sub as a user ID and not have a user ID separately? Because the sub doesn't work so well when you have,

I guess, single sign-on with a social sign-in and things like that, because every single platform you use will have a different Cognito user; they will have a different sub. But if they're all referencing the same user, you kind of want to have your own user ID. So I guess the question there is, why use the Cognito sub as a user ID, as opposed to having a separate user ID? Yeah.

That's where it does start getting interesting, because when you have multiple sign-ins, that's when you really want to start consolidating that kind of thing. Out of the box, we wanted to make it simple enough that there weren't heaps of abstractions for this kind of thing, because we don't want to add code that's not going to be used, essentially. So...

Not every application is going to use single sign-on. We wanted to have a very natural Cognito implementation. And the great thing about Baseline is that we can add a base block later, or a base stack, where we add this functionality as well. So just because it's not in the core of what Baseline is when you get it, it doesn't mean we can't add it into our ecosystem for people to start using things like that. So if that became a really popular way of doing things, we can inject that code later.

Okay, okay, sounds good. All right, please continue. Yeah, so after we've set all that up, we wanted a way to get the URLs for the project without having to dig into CloudFormation stacks in the console. So we just made it really easy to find them. We're using the default CloudFront URLs that you get out of the box, so that we don't have to specifically

have domain configuration out of the box, because it gets in the way. You know, not everyone has a domain ready for their app. So that's the web portal. And then we can just jump into the admin portal, and since we've already created that user, who is already signed in, we can just log in.

And so we can see that I'm an admin; we can invite more users, delete, change the email, all that kind of stuff. So we've essentially got a fully working serverless-first application running in your account, out of the box.

So that's all great. So out of the box, I see there's like a Baseline core architecture. Can we take a look at that, just so we can see what's been created so far? You talked about the Cognito user pool, the APIs. Also, there's a web app S3 bucket and an admin app S3 bucket. Okay. Yeah. Okay.

So it's a pretty standard implementation. There's not a lot going on there: this is one Lambda, realistically, two CloudFront distributions, and you're just using API Gateway and DynamoDB. We wanted to keep it pretty simple as well, because we really do want you to be able to grow into anything.

It gets more interesting when you have something like a CMS. You can see sort of how things interact a lot more in a diagram like that. But yeah, this is the out-of-the-box, very basic setup. Okay, gotcha. So now that we do have it set up, we can actually start running the project locally as well. So we've managed to put the environment variable management into Baseline as well. So we can actually generate all the project

environment variables locally first, and this just gets the .env set up and that kind of thing for the project, and then we can start the API. So where are the contents of the .env file coming from? It's a combination of stack outputs,

and then anything specifically configured for that project as well. So there are ways you can modify and add to the generation script. Okay. Which makes it really easy to manage, say, the backend and the frontend environment variables, and grow those out over time. We found that things that should be simple, like how does the frontend know what the URL is for the API backend, weren't

really easy to answer without gluing some stuff together. I know some features have come out recently that make that easier, but having it work locally, or in that build process where you have to package it all in, we really wanted

things like that in there, as well as just a consistent environment for all the developers working on the project. So if you join the project, you don't want to be copying a .env file from someone else, or trying to figure out who has the latest one, or what environment variables are missing, or what should be really secret. We want it to be contained within the project and in your AWS account. And that's why we sort of leaned into this management of environment variables.
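To illustrate the idea, here's a minimal sketch of generating a .env file from CloudFormation stack outputs with the AWS SDK v3; the stack name and output keys are hypothetical, and Baseline's actual generation script may work differently.

```typescript
// Hypothetical sketch: derive a .env file from a deployed stack's outputs,
// so every developer gets the same environment without passing files around.
import {
  CloudFormationClient,
  DescribeStacksCommand,
} from "@aws-sdk/client-cloudformation";
import { writeFileSync } from "node:fs";

const cfn = new CloudFormationClient({});

async function generateEnv(stackName: string): Promise<void> {
  const { Stacks } = await cfn.send(
    new DescribeStacksCommand({ StackName: stackName })
  );
  const outputs = Stacks?.[0]?.Outputs ?? [];

  const lines = outputs
    .map((o) => `${o.OutputKey}=${o.OutputValue}`) // e.g. ApiUrl=https://...
    .join("\n");

  writeFileSync(".env", lines + "\n");
}

await generateEnv("demo-api-staging"); // assumed stack name
```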

Okay, good. And you mentioned that you are able to bring in just Express.js or whatever, and then you cut it into, I guess, entity-centric functions that serve a group of endpoints specific to a particular entity. So can we look at some of the Lambda functions that you have out of the box?

Yeah, for sure. I'll just get this up and running, and we'll create a whole new entity, and we can see all the code that gets generated and where it lives. Okay. Yeah. This also has a lot of similarities to what the Amplify team has tried to do with Amplify Studio, in terms of also building out an admin console out of the box for you, making it easy for you to do that, etc.

But of course, one of the problems with Amplify is that it's a bit of a black box that is really hard to get out of. So you having access to all the code and being able to modify it, that is one of the nice things that I wanted from Amplify. I haven't seen it yet, but I keep getting told that Amplify Gen 2 is going to make that a lot better.

Yeah. Yeah. We tried Amplify a few times, hoping that it would be what Baseline is. It was never enough, or the code that it generated was never how you wanted the code to be. Right. We found it was quite frustrating to work with. Otherwise we would probably be using it, and Baseline wouldn't exist, to be honest. So yeah.

But having this run locally now, we can actually sign in. We have a user mocked in the API. We have seed data. So when you run it locally, you can have a consistent data set for the developers that work on the project, stored in the repo itself. And it gets populated every time you restart the API. So if we just delete the record, then when you start the API again, that record will be there again,

and you can essentially replicate the experience of what it's like deployed. So if we wanted to, you know, quickly ship a change, we'll just, say, change this, update it to "demo app". We can see React hot reloading in action, and we can actually just simply go pnpm run deploy.

Now, obviously you'd normally use CI/CD for this kind of thing, but you can do it from local as well. And using this command, we'll just group everything up

and try and deploy anything that has changed. It also includes the CloudFront invalidation, so you don't have to worry about going into the console and invalidating. And if you've added new code or new infrastructure, it all just goes out through this unified deployment command, which is really helpful. So that's deployed now. So if we go back here, and back in,

we'll just refresh, and we can see that change shipped out straight away. So it really increases the pace you can iterate at. Now, let's make it a bit more interesting: we can make a whole new entity. So we'll do that now. Let's jump back here, and that's through, if we go pnpm add object.

So we have these things called Baseline commands, and you can extend them. The add object command is basically a code template that injects code into Baseline. So we've made it create a whole new entity here. So we'll just make a task, and we'll give it a new field called title. What's a task? As in, think of a Jira task, something like an actual object

for the project. Or it could be anything, like a book object or a user-type object.

So we'll use... - Oh, that's just a name, not a type. Sorry, I thought that was a specific type. - That's all good. And we'll make it not required, and we'll add a description field as well. And we'll add that. And so this will generate the code. So what we've got now is the types, the service code, the seed data, the task function (the YAML definition for the Serverless Framework), as well as the DynamoDB definition, and then the API code,

and then we've got the shared types, so we can share them between the front end and back end. And then, what's the difference between the service and the API? The service is implementing our service object, which lets us do the common operations, or the business logic, for that service object. And then the API is the actual interface

for being able to use and do things to that object. So that's where you do things like checking if it's a valid user before you do it, or if the request is okay, and things like that. So it's very Express-looking, very familiar. We've got middleware in there already built in, for things like isAdmin, which checks for you whether the user is in DynamoDB and lines up with the user sub,

and then a response mapper, and there's that use of the service there to create the task. So pretty simple, straightforward code. So you said that it's very Express-looking. Is it actually using the Express API, or is it... Okay. So we've used...

We've got a wrapper for creating the app, with Cognito and specific settings set up to make sure it's optimized for running in Lambda. It's been really effective and speedy. The cold starts are amazing, and the response times are very impressive, even with the DynamoDB hit and the authentication to the API as well.
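To picture the shape being described, here's a hedged sketch of an Express-style route with an isAdmin middleware and a service call, exported as a Lambda handler via serverless-http, which is one common way to run Express in Lambda; Baseline's actual wrapper may differ, and the names and stubs here are illustrative.

```typescript
// Hypothetical sketch approximating the generated API code described:
// an Express app, an isAdmin middleware, a service call, and a Lambda export.
import express, { NextFunction, Request, Response } from "express";
import serverless from "serverless-http";
import { randomUUID } from "node:crypto";

interface Task {
  taskId: string;
  title: string;
  description?: string;
}

// Stub standing in for the generated service layer, which owns the business logic.
const taskService = {
  async create(input: { title: string; description?: string }): Promise<Task> {
    return { taskId: randomUUID(), ...input };
  },
};

// Stub standing in for the admin lookup against DynamoDB.
async function findAdminBySub(sub: string): Promise<{ userSub: string } | undefined> {
  return { userSub: sub };
}

// Assumed middleware: check that the caller's Cognito sub maps to an admin record.
async function isAdmin(req: Request, res: Response, next: NextFunction) {
  const userSub = req.header("x-user-sub"); // stand-in for the auth layer's extraction
  const admin = userSub ? await findAdminBySub(userSub) : undefined;
  if (!admin) {
    res.status(403).json({ message: "Forbidden" });
    return;
  }
  next();
}

const app = express();
app.use(express.json());

// The route is just the interface; the service call carries the business logic.
app.post("/task", isAdmin, async (req: Request, res: Response) => {
  const task = await taskService.create(req.body);
  res.json(task);
});

// serverless-http adapts the Express app to the Lambda event/response shape.
export const handler = serverless(app);
```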

And we wanted to make that whole process of setting this stuff up a little bit easier. So that's why createApp exists. Okay. And so once the functions have been created and deployed, is it actually running Express, like you do with a Lambdalith?

Yeah, so Express will run. It's mostly for routing, and being able to utilize things like the middleware and the request and response objects in a familiar way that's very accessible to most people who've worked with Express before. Okay. And when you say the cold starts are fantastic, what sort of numbers do you get for cold starts for your functions? So we recently just upgraded to Node 20 and

the AWS SDK v3, and I think it got down to 400 milliseconds for the init. Yeah, round trip was about 700 milliseconds. So about 400 milliseconds for the init duration, and then 700 milliseconds for the end-to-end. Okay. Which is a bit faster.

My personal bar, where I think it's acceptable, is a maximum of one second. Anything below one second, you can deal with pretty easily. Anything longer, you're starting to get into dangerous territory, I think, for a typical user application. Yeah, I think one second is fairly common for people to set SLAs and things like that. And then after that, I think requests, after you've done the cold start, are about

50 milliseconds round trip. So in that case, because you also have the admin dashboard as well, that's just a single-page application hosted out of S3, from the architecture diagram? Okay, so there's no server-side rendering out of the box.

Yep. So it's very much focused on building applications, rather than being a replacement for Next.js or something like that. Right. Gotcha. You could still put Next.js in your Baseline project. Okay.

We often drop in React Native to have a mobile component alongside the admin portal and client portal. So we found that very effective for extending Baseline. We never wanted it to just be, it's just an admin portal, that's all, and it's never going to be anything more. You can completely grow it into whatever the application needs to be, add more packages and that kind of thing.

So now that we've actually got this added, this code here, we can add some seed data just to get started. See if Copilot kicks in. Hasn't yet. Task ID, we'll just keep it something simple. Title, description, bullet. All right, so we've got a few objects there now. We can actually just start the API back up again, and we'll have that new entity in the API, and it will just show up.
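A minimal sketch of what loading repo-stored seed data into DynamoDB could look like, assuming illustrative table and item shapes; Baseline's actual seed mechanism isn't shown in the episode.

```typescript
// Hypothetical sketch: repo-stored seed data written on each local API start,
// so every developer sees the same consistent data set.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, BatchWriteCommand } from "@aws-sdk/lib-dynamodb";

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

const taskSeedData = [
  { taskId: "1", title: "First task", description: "Example seed record" },
  { taskId: "2", title: "Second task", description: "Recreated on every restart" },
];

export async function seedTasks(tableName: string): Promise<void> {
  await docClient.send(
    new BatchWriteCommand({
      RequestItems: {
        [tableName]: taskSeedData.map((item) => ({ PutRequest: { Item: item } })),
      },
    })
  );
}
```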

And we can see that it's running, and we can see that the data is coming through. So how do we deploy this now? Well, it's the same method again. We can actually just run that deploy staging command, and that will ship out all that new backend code, which makes it a really easy process to get started with.

And then you can extend any of this code, because since we inject the code, you can still modify any of it. It isn't just an abstraction where we add a new task and give it all this logic; we generate the code, and you can modify it. You can even change the template itself, since it's in Baseline. So you can actually start modifying it for your own project, and if you have specific needs

in every single API endpoint, you can add that, and you can utilize those templates, or make new templates for different parts of the application where you have to continually generate new code. Okay. And do you have some, I guess, maybe the next thing could be to see how the Baseline blocks work, so you can introduce some additional, I guess, add-ons? It sounds like it's something that you can just add on to your existing project.

Yeah, I don't have a demo for that today, but it is something we do often. We have the concept of base blocks and base stacks. So a base stack would be something like a React Native base stack: you drop it in, and then it's hooked into the API automatically, works with Cognito, has all the build pipelines, and hooks into all the different scripts that exist here.

And then base blocks sit on top of that essentially. So you might have a base block for the API where you add, for example, a task and it can inject that code as well.

So we're looking to make it so that the community can build out extensions to Baseline, and so that you can modify those extensions, rather than focusing on packaging things into libraries that may or may not be maintained. Because it can be tricky to maintain things when you don't know how people are going to use them, which is one of the reasons why we made Baseline

slimmed down, essentially. We've made you a really nice box. We don't care what you do inside that box; you can make a mess, or you can make it really pretty, really structured. But we want to give you a nice box to start with, and we don't know what you're going to change about it. So once you have the code, you can do whatever you like to it, to your heart's content. Okay, got it.

And all of this is based around Express, I guess. And so how easily can someone, say, write tests? Because you showed how you can do the exploration locally, running the simulation locally, but do you have some examples of how you write tests for this? Very good question. We purposely left tests out. I'm very aware it's an opinionated place.

I have a pragmatic view that you write tests for business logic these days, rather than for 100% coverage, because there's so much that's caught in the tooling these days: linting, Prettier, building. Even just your IDE has all the tooling to give you squiggly red lines on the stuff that doesn't work, automatically. And unit tests don't save you from some things that

are going to come up anyway. Unit tests are very important for business logic, or things that have to work 100% that way regardless. So we've left it open. We'll provide guides, or potentially base blocks in the future that just drop testing in, but we haven't made it part of this, simply because we know that there's so much you can do with it, and

I'm really not a fan of everything that's been created. Testing has come a long way in TypeScript, but there's still some really strange stuff you have to do, where it can be difficult to add it to a project, especially when you're working with the infrastructure-as-code stuff. We haven't figured out what the right glue is yet to make it a really great experience.

Okay, so you talked there about wanting to test the business use case, as opposed to just individual code blocks. You can use a unit test to test those individual service or API modules that you've got there. But oftentimes, you know, your business use case is not just CRUD. You're going to be calling a few different things, pulling data from different places, and that's how you actually serve a user request.

So I guess in that case, as part of your recommendation, would you say that's more something that you want to do at the end-to-end test level, where you call the API as the front-end client would, and then you make sure that the whole API, including the Lambda functions and everything else, is working correctly? Yeah, essentially, yeah. The end-to-end stuff with this setup is pretty powerful then, because everything's

abstracted away from you. It's hard to test it in a meaningful way without building heaps of new testing abstractions, which can introduce new, strange problems. So I think end-to-end tests are fantastic for that kind of thing, when you're deploying to an AWS account. So even if you're just running them against a testing account, a testing or staging environment, that's quite a good way to do it as well.
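As an illustration of that recommendation, here's a small end-to-end test sketch that calls a deployed API the way a front-end client would, using Node's built-in test runner and fetch; the URL, route, and auth header are assumptions.

```typescript
// Hypothetical end-to-end test: exercise the deployed staging API exactly
// as a front-end client would. Requires Node 18+ (built-in fetch, node:test).
import { test } from "node:test";
import assert from "node:assert/strict";

const apiUrl =
  process.env.API_URL ??
  "https://example.execute-api.ap-southeast-2.amazonaws.com/staging"; // illustrative

test("creating a task returns the persisted record", async () => {
  const response = await fetch(`${apiUrl}/task`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: process.env.ID_TOKEN ?? "", // a Cognito token for a test user
    },
    body: JSON.stringify({ title: "e2e test task" }),
  });

  assert.equal(response.status, 200);
  const task = await response.json();
  assert.equal(task.title, "e2e test task");
});
```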

But, you know, the right tool for the right job is what I'm proposing realistically.

And also, I guess, this is the focus. I think you mentioned this earlier, that you guys deviated from GraphQL, and this is entirely focused on REST. And are there some limitations to the local simulation that you can run? I imagine, is this using serverless-offline's local API Gateway to simulate the local API invocation? Okay. Yeah, yeah. So we found it was the most

reliable way. We know it's not a perfect one-to-one experience, but we have tried to drop in different tooling, such as SAM or anything that uses Docker, and we've had some delays. We're still testing out different tooling out there, and we're willing to change it if something becomes more popular or more useful for that experience of running the whole app. We've just found that,

as soon as you drop Docker in, the response time for a hello world was between one and three seconds for us. Which, if you've got 100 endpoints in your application and you're loading it up like a normal application, can be a real struggle. So emulating it was the best experience we found so far, and supporting the Serverless Framework, having multiple files for the infrastructure, just made a lot of sense.

Okay, and I guess, what's the future you guys are planning for Baseline? Are you thinking of mostly, I guess, just keeping adding base blocks so that you can add customization to the project? And I guess, what's the business model behind this? Yeah, so we have a roadmap.

Since we're open-sourcing, we're looking for external feedback, and seeing how people are using Baseline is what's really going to be interesting about this. And we're going to develop it. We're still looking at what technologies are coming out from AWS

or partners, around different tooling. We know Serverless Framework V4 is coming out, and there's licensing changing around that. So we're seeing where that goes, whether we keep using the Serverless Framework or switch to something like SAM, if it starts to support enough things. We...

I'm really interested in building out the base block ecosystem and the base stack ecosystem, because if the community can extend Baseline, it'll save people from, say, forking Baseline and then modifying it, and us ending up with a whole ecosystem of baselines instead. If we can make it

a really robust place where people can add code and functionality, and we can sort of be that glue that pulls a whole application together, that would be fantastic. I think one of the struggles with, say, going to Serverless Land right now is: how do I make this work in my application? How do I start using this today in my application, without having to figure out how that glue comes together?

We're going to be building some templates ourselves that we might potentially sell, as ways to accelerate using Baseline. So, you know, like a full-blown marketplace that you can start off with instead, or a full-blown mobile application, and things like that. We're also going to be doing consulting and services around Baseline, and

looking at building that marketplace out for base blocks as well. So realistically, we're testing to build that product-market fit with developers, and we're really trying to lower that barrier to entry for startups to be able to build apps. But we're also looking at

cloud modernization. So maybe an organization hasn't used Baseline before, and they want to take that first step into the cloud. We're hoping Baseline can be a really low-risk way of doing that: being able to start with a Baseline project, and then start building and make progress and see results. That's really where we're heading at this point. And then open-sourcing whatever things are quite useful. So we've already

created a DynamoDB library as well, which you can drop into other projects. We built it because we needed a bit more performance out of DynamoDB, and we had a way that we were using it. We did have it in Baseline, but we wanted people to be able to get updates, so we've moved it out into its own repository. So anything else that makes sense

to split out, we'll probably be doing that as well. And then longer term, we'd love to potentially support additional clouds, but that's going to be a fair way off, I think. We really want to just make that getting-started experience with serverless really good and knock down those hurdles, potentially making the IAM stuff a bit easier to get started with, and that kind of thing.

Okay. Okay. Sounds ambitious. And I guess, best of luck. And yes, Thomas, thank you so much for taking the time to talk to us today and for showing us a demo of Baseline. It looks very interesting. And I guess if people want to get started, where do they go? Where can they read about Baseline and some of the, I guess, getting-started guides and whatnot? Yeah.

Yeah. So baselinejs.com will put you in a good spot. You'll also find us on GitHub at baselinejs, and we'll have instructions there on how to get started. So yeah, we're looking forward to the community and their response to what we're doing. Yeah. Or find me on LinkedIn and ask me questions.

Okay, sure. I'll put those links in the description below, because you mentioned that you guys are going to be doing some consulting work around this as well. So if someone wants to get started with Baseline and they're running into trouble, they can find your information in the description below as well. Awesome. Okay. Once again, thank you so much for taking the time to talk to us today. And yeah, best of luck with the future of Baseline. Thanks, Jan. Appreciate it. Take care, guys. See you next time. Okay. Bye-bye.

Thank you to HookDeck for supporting this episode. You can find out more about how HookDeck improves the developer experience for building event-driven architectures at hookdeck.com slash theburningmonk.

So that's it for another episode of Real World Serverless. To access the show notes, please go to realworldserverless.com. If you want to learn how to build production-ready serverless applications, please check out my upcoming courses at productionreadyserverless.com. And I'll see you guys next time.