
#105: The inception story of Cognito & secret to succeeding at AWS | ft. David Behroozi

2024/7/26

Real World Serverless with theburningmonk

People
David Behroozi
Topics
David Behroozi: I worked at Amazon for over 15 years, saw Amazon Cognito grow from nothing firsthand, and also worked on the Amplify Hosting project. After leaving Amazon, I founded my own company to build Speedrun, a project that uses Markdown to build tools that accelerate manual tasks. Over 15 years at Amazon, I came to appreciate how important the ability to handle problems, learning from experience, and knowing some insider tricks are to succeeding at AWS. Handling problems means calmly analyzing the issue, gathering the necessary data, and finding a solution. Learning from experience means learning from seasoned colleagues and watching how they handle complex problems. Insider tricks include understanding AWS's hiring process and some little-known rules. I also shared some lessons from building Cognito, including how to gather data to make the case for a new feature, and how to design a system that can support all kinds of authentication methods. On the use of ID tokens versus access tokens, I said both have their place and I hold no strong preference. Finally, I introduced Speedrun, which turns the code blocks in Markdown documents into executable tools, improving development efficiency and reducing errors. Speedrun integrates with the AWS console and supports a range of AWS services, such as Lambda, CloudWatch, and Step Functions. Yan: As the host, I had an in-depth discussion with David Behroozi about the origin story of Cognito, the secrets to succeeding at AWS, and the Speedrun project. I learned that Cognito was born to solve the problem of mobile app developers embedding AWS credentials in their apps, and to simplify mobile access to AWS services. The Cognito Identity and Cognito Sync services were built to provide AWS services designed specifically for mobile apps and to solve data synchronization. Cognito's main use cases include: accessing AWS services directly from a mobile app; securing a backend; and integrating with IoT devices. David and I discussed the appropriate scenarios for ID tokens versus access tokens. I also asked about the Speedrun project and dug into its features and use cases.


Chapters
David Behroozi, a 15-year Amazon veteran, shares the origin story of Amazon Cognito. Initially tasked with creating mobile-friendly AWS abstractions, the team realized the security risks of embedding credentials directly into apps. This led to the development of Cognito Identity and Cognito Sync, launched in July 2014, offering temporary credentials and data syncing capabilities.
  • Cognito was created to address the security risk of embedding AWS credentials in mobile apps.
  • Cognito Identity provides temporary credentials for direct AWS access.
  • Cognito Sync allows for data synchronization across devices.
  • The initial launch of Cognito included Cognito Identity and Cognito Sync.
  • Cognito User Pools, launched in 2016, added user management features.

Transcript


This episode is supported by Momento, a serverless cache that you can trust and that only charges you for what you use. To learn more, visit gomomento.co/theburningmonk

Hi, welcome back to another episode of Real World Serverless, a podcast where I speak with real-world practitioners and get their stories from the trenches. Today, I've got a very interesting guest on the show. I've got David Behroozi, who used to work at AWS on the Cognito team, and now he's been working on his own projects for a little while. So, David, welcome to the show. Hi, Yan. Excited to be here.

Yeah, we've spoken quite a few times on social media and shared a lot of ideas about various different things. So I'm really excited to get you on here to talk about some of the things you've been experimenting with. But also, maybe you want to just start with a quick introduction: what you've been doing, your background and all of that.

Of course. So I guess the most interesting thing is I was at Amazon for over 15 years and on call the whole time. So I was a developer at Amazon, in the trenches, writing code. I started in Amazon retail. And then in about 2013, I moved to AWS.

where I wrote some of the first lines of code of Amazon Cognito. We built that whole service: Cognito Identity, Cognito Sync, and later Cognito User Pools. And then around 2020 or so, I moved to Amplify Hosting. That's a full CI/CD product to host modern web apps on a CDN like CloudFront.

And I left in 2021 to kind of explore some of my wilder ideas that I've had in serverless. I have a company where I'm building a product called Speedrun. Exactly what it does changes from day to day, but how I describe it today is it's a clever way of using Markdown to build tools to accelerate your manual tasks. I hope we can talk about that a little bit today.
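
Speedrun's actual implementation isn't shown here, but the core idea, turning the code blocks in a Markdown document into something executable, can be illustrated with a rough sketch (all names below are made up for illustration, not Speedrun's API):

```python
import re
import subprocess

# Matches fenced code blocks; the pattern uses `{3} so this example can
# itself live inside a fenced block.
FENCE = re.compile(r"`{3}(\w*)\n(.*?)`{3}", re.DOTALL)

def extract_code_blocks(markdown: str) -> list[tuple[str, str]]:
    """Return (language, body) pairs for every fenced code block."""
    return [(m.group(1) or "text", m.group(2)) for m in FENCE.finditer(markdown)]

def run_block(language: str, body: str) -> str:
    """Execute a shell code block and return its stdout (illustration only)."""
    if language not in ("bash", "sh", "shell"):
        raise ValueError(f"refusing to run {language!r} blocks")
    result = subprocess.run(["sh", "-c", body], capture_output=True, text=True, check=True)
    return result.stdout

# A runbook-style document, with the fence built from "`" * 3.
doc = "# Restart the worker\n\n" + "`" * 3 + "bash\necho restarting worker-1\n" + "`" * 3 + "\n"
blocks = extract_code_blocks(doc)
```

A real tool would add parameter prompts, output capture, and guardrails around what it is willing to execute.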

I recently became an AWS Community Builder in the serverless track. I've been doing some explorations in Lambda, edge functions and observability, and blogging about it to increase my luck surface. I'm excited to talk to you today about Cognito and its lore and inception story, and whatever questions you have for me.

Yeah, sounds good. And I didn't realize that you also worked on Amplify Hosting as well, which is another service I use a lot; all of my side projects' landing pages are hosted on Amplify Hosting. Sadly, I think a lot of people don't realize it even exists. They think Amplify and they straight away jump to the Amplify CLI. So every time I bring up

Amplify Hosting, people just don't realize that it's a thing. Recently, there was the CloudFront Hosting Toolkit that got launched. And I was asking the question, okay, now this is a whole new set of toolkits to allow people to customize the CloudFront distribution, which you can't customize with Amplify Hosting. Wouldn't it be better to improve Amplify to give you some more controls around the CloudFront distribution itself?

And people just straight away jump onto, yeah, but with Amplify you can't customize anything, it's all CLIs. But then it's like, okay, that's not the part of Amplify I'm talking about here. Yeah, before we talk about that, maybe let's just talk about the inception story behind Cognito, because you were there from the very start. Tell us behind the scenes how the service came about, and what were some of the things that you saw?

Yeah, so I had just joined the AWS mobile team and the mobile team's charter was to build kind of abstractions on AWS web services catering to mobile app developers. So for example, we had something called the S3 Transfer Manager, which when you're in a spotty connectivity environment like mobile,

your connection is constantly dropping. And so being able to upload and resume your uploads to S3 was important. So we built certain SDKs to help with that, both

Android and iOS SDK. And we were working on things like making it so that you could do geo queries. I think back in that time, it was really hot to report your current location and see where your friends were. And so we built a library that allowed you to see who was nearby using geo queries on DynamoDB and things like that. And we were just trying to make it easier for mobile app developers to build on top of AWS.

And kind of one of the things that kept happening was people would embed their AWS credentials in their apps. And inevitably, at some point, somebody would find this out and decompile their app and start wreaking havoc in their AWS account. So the best practice was to build something called a token vending machine where your app would phone home to this

endpoint, and it would say, here's who I am, do some sort of handshake, and it would get temporary scoped-down credentials from the cloud, kind of what the Security Token Service (STS) does. But that was what we were promoting: instead of embedding your credentials in the app, do it this way instead.
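
The token vending machine pattern can be sketched as follows. This is a simplified illustration: a real vending endpoint would verify the caller's identity and then call AWS STS (for example, AssumeRole) to obtain genuinely scoped credentials, rather than minting values itself as this toy does:

```python
import secrets
import time

# Simplified token vending machine: the app "phones home", proves its
# identity, and receives temporary, scoped-down credentials. In a real
# system this endpoint would call AWS STS instead of minting values locally.
TTL_SECONDS = 3600

def vend_credentials(user_id: str, allowed_prefix: str) -> dict:
    """Return temporary credentials scoped to one user's own data."""
    return {
        "access_key_id": "ASIA" + secrets.token_hex(8).upper(),
        "secret_access_key": secrets.token_urlsafe(30),
        "session_token": secrets.token_urlsafe(30),
        # Scope: this user may only touch keys under their own prefix.
        "allowed_prefix": allowed_prefix.format(user_id=user_id),
        "expires_at": time.time() + TTL_SECONDS,
    }

creds = vend_credentials("user-42", "uploads/{user_id}/")
```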

So I think re:Invent 2013, what we were trying to launch was an SDK that enabled you to sync small amounts of user data to a DynamoDB database. At the time, people were just starting to

use multiple mobile devices. So they might have an iPhone and an iPad and a computer, and they wanted to sync all of their data between these various platforms so they could kind of pick up where they left off. And so before I joined, the team had built this SDK that allowed you to

There was a token vending machine where you got credentials and then you were from your app, you were basically syncing your saved games or your preferences against DynamoDB. And then there was this library that helped you sync it across all of your mobile devices. So if you saw it on your phone, then you could immediately see the change on your iPad, something like that.

I think we pitched this to leadership and they're like, "Hard no. This is not launching. This is way too scrappy and kind of not what we want to deliver. It's time we start building AWS services specifically for mobile." And so with that charter, we kind of

took the design back, and we spent a couple of months putting together a web service to do this work. So that's Cognito Identity, which gives you an identity and temporary, time-bound, scoped credentials to directly access

AWS services. And Cognito Sync, which was: once you had an identity and wanted to sync small amounts of data against that identity across your web and mobile devices, you could use Sync to do that. So we spent a couple of months in design, and I can remember, I think it was April, and we'd been given a launch date. We were told we needed to launch for the Chicago Summit.

And we'd just got a little bit of prototyping done at that point, and I told my manager, look, it's go time, we need to deliver this thing and we're running out of runway. So it was kind of a quick three-month sprint to the finish where we built these web services from scratch,

went through the entire checklist of building an AWS web service, and launched them at the Chicago Summit on July 10th of 2014. And the rest is history.

This launch schedule thing for AWS services, I imagine it must be quite challenging at times. I know every time re:Invent comes along, working with a lot of service teams, it's just, "Okay, we have all these ideas that we want to ship, but then there's always this looming deadline, this re:Invent, that a whole bunch of things are going to get squeezed and pushed out around." And I guess now there's also some stuff that gets launched at some of the summits as well.

One of the things I guess I do want to ask you about is... Okay, maybe let's get to that a bit later. Since you were designing Cognito, what have been some of the most common use cases?

I mean, personally, I see mostly people use Cognito with something like API Gateway or AppSync, and that integration is probably the biggest reason that I use Cognito all the time. What about something that, from your perspective, working in a team, what were some of the most common use cases that people had for Cognito? Yeah, so there's a few use cases. Let me also touch on Cognito user pools. So...

Cognito User Pools was launched in 2016, and that was because with Cognito Identity, you could either be an anonymous guest or you could federate in from another place. And we did have a feature called Developer Authenticated Identities, where you could authenticate against your own username and password database. But what people really wanted was for us to

provide a turnkey solution for usernames and passwords and MFA and stuff like that. So that's the User Pools component of Cognito. But back to your question of what the common use cases are, there's a few of them. There's enabling direct access from your mobile app to AWS; that's Cognito Identity. So if you need to

pump clickstream information into Kinesis or you need to give access to your end user to a portion of an S3 bucket. So just like a private store for their pictures or a portion of a DynamoDB table. So

just keys that lead with their identity ID, so they can store key-value pairs. So you don't need a backend at all: direct access with the full power of AWS from your mobile app. That's one use case.
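
That per-user scoping, keys that lead with the caller's identity ID, is typically expressed with IAM policy variables. A policy along these lines (the table and bucket names are placeholders) confines each authenticated identity to its own items and objects:

```python
import json

# IAM policy granting each identity access only to its own slice of a
# DynamoDB table and an S3 bucket. IAM resolves the policy variable
# ${cognito-identity.amazonaws.com:sub} to the caller's Cognito identity
# ID at request time. "UserData" and "user-pictures" are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:*:*:table/UserData",
            "Condition": {
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
                }
            },
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::user-pictures/${cognito-identity.amazonaws.com:sub}/*",
        },
    ],
}

print(json.dumps(policy, indent=2))
```

IAM resolves the policy variable per request, so one policy covers every user.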

Another use case is where you need to secure your backend. So you have a backend written with API Gateway and Lambda, or AppSync, or something like that, where you need to authenticate your users and secure access to your backend. You don't want to leave it out in the open. You need to authenticate your users and use tokens to grant access.
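
Securing a backend this way means validating the token that Cognito issued at login on every request. Below is a minimal sketch of the claim checks involved; note that a real authorizer must first verify the token's signature against the user pool's published JWKS, which this sketch deliberately skips:

```python
import base64
import json
import time

def decode_claims(jwt: str) -> dict:
    """Decode the payload segment of a JWT (no signature check here)."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def authorize(jwt: str, expected_audience: str) -> dict:
    """Basic claim checks a backend performs on a token.
    A real authorizer must FIRST verify the signature against the user
    pool's published JWKS; that step is deliberately skipped here."""
    claims = decode_claims(jwt)
    if claims.get("exp", 0) < time.time():
        raise PermissionError("token expired")
    if claims.get("aud") != expected_audience:
        raise PermissionError("token issued for a different client")
    return claims

def fake_jwt(claims: dict) -> str:
    """Build an unsigned token in the JWT wire format, for demo only."""
    seg = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
    return f"header.{seg}.signature"

token = fake_jwt({"sub": "user-42", "aud": "my-app-client", "exp": time.time() + 3600})
```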

There's a few use cases with IoT. So there's these kind of low-power devices that don't have a lot of capabilities, and they need to communicate with IoT endpoints, the AWS IoT service. We can vend credentials for that and allow them to do that when they come online.

There's a couple of scenarios. There's business-to-consumer, that's a mobile app where you have end users. There's business-to-business, where you're building services for other businesses and you need to authenticate when they're... So there's lots of kind of use cases. I think there's an integration with ALB.

where you put ALB in front of your back end and then ALB is doing the authentication against Cognito user pools. So there's kind of lots of integrations with the different AWS services and people keep finding new ways to use it. But the core competency is it's the front door of your app.

You do a login with Cognito and then you get some tokens to secure your backend, however you build that.

Congratulations to Momento on the recent funding round and their serverless Topics becoming generally available. Momento is a real-time data platform created by the engineers behind DynamoDB, where they've taken their years of experience operating one of the most popular and scalable services on AWS and created a set of powerful building blocks for cloud-native applications, like cache, storage, and pub/sub,

helping developers serve a huge global audience safely and reliably. Momento can help you accelerate product development and growth, and helps you reduce risk and downtime by giving you these fully managed services that are redundant by design. And the best thing is, you only pay for what you use. Visit gomomento.co/theburningmonk for more information.

Yeah, the use case that you mentioned there was to allow the frontend to access, say, a DynamoDB table or an S3 object directly without having an API. That's something that I actually saw Gojko Adzic use for his mind-mapping app, MindMup. And he was talking to me about how, okay, if you're not using API Gateway or Lambda to do anything beyond authentication, which

DynamoDB already does through IAM, then yeah, you could get away with not having that API layer altogether and just have the frontend talk to the backend directly.

But outside of a very few specific cases, especially for an expert like himself, this feels like a very high-risk and potentially high-reward approach to building applications. And obviously, if you get it wrong, it can allow people to access data that they shouldn't be allowed to access.

But that's actually a really interesting use case, which I hadn't thought about until Gojko told me about it a few years ago. But the IoT one, I have seen that quite a few times. It feels like kind of the main way that you want your IoT devices to authenticate. I think with the IoT services, one of the things you can do is to

issue certificates that get embedded in the IoT device, and they can use those to authenticate against the Cognito identity pool. So of all these features that you worked on, was there anything that stands out as your favorite? I mean, the one that I always kept pulling out of my back pocket was the custom authentication flow for

Cognito user pools. It seemed like no matter what the use case, that seemed to be some sort of solution for anybody. And I know that you've done a bunch of adventures in passwordless authentication and things like that. But the idea behind it was that every authentication is basically a flow of

here's who I am, and then a bunch of challenges. So: here's who I say I am, here's what I have; can I get in? And they say, no, prove this, or here's your next challenge. So maybe a username and password, or enter the one-time code we sent to you. I think when we were in private beta,

we had a separate API for each of these different components of an authentication flow. So we had a separate API if it was an MFA, we had a separate API for username and password. And we saw that this was just going to turn into this massive surface area of APIs, which would be much harder to manage.

I think we took a step back and distilled this authentication flow into InitiateAuth and RespondToAuthChallenge, so that we could support any auth flow. Maybe somebody sends you a code on a postcard or something like that; this would support that. We wouldn't need to add a new API to support it.
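
That two-API shape, one call to start the flow and one call to answer whatever challenge comes back, can be modeled with a toy state machine (illustrative only, not Cognito's implementation; the real APIs are InitiateAuth and RespondToAuthChallenge):

```python
import secrets

# Toy model of the generic auth-flow shape: one API starts the flow, one
# API answers whatever challenge comes back, repeated until tokens are
# issued. New challenge types fit in without adding new APIs.
SESSIONS: dict[str, list[str]] = {}

def initiate_auth(username: str) -> dict:
    session = secrets.token_hex(8)
    # This user's configured flow: a password check, then a one-time code.
    SESSIONS[session] = ["PASSWORD", "SMS_CODE"]
    return {"session": session, "challenge": SESSIONS[session][0]}

def respond_to_auth_challenge(session: str, challenge: str, answer: str) -> dict:
    pending = SESSIONS[session]
    if not pending or pending[0] != challenge or not answer:
        raise PermissionError("challenge failed")
    # A real service verifies `answer` against the expected secret here.
    pending.pop(0)
    if pending:  # more challenges left in this flow
        return {"session": session, "challenge": pending[0]}
    return {"tokens": {"id_token": "demo-id-token", "access_token": "demo-access-token"}}

step = initiate_auth("yan")  # returns the PASSWORD challenge
step = respond_to_auth_challenge(step["session"], "PASSWORD", "hunter2")
step = respond_to_auth_challenge(step["session"], "SMS_CODE", "123456")
```

Because the challenge list is data, adding a new challenge type (a code on a postcard, say) changes configuration, not the API surface.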

That realization that we had early, and the way that we built in flexibility, is something that paid off a lot down the road with these ever-evolving authentication flows.

And with Cognito Sync, that's something I've really not seen anyone use much, not for a long time. Did you see anyone actually use it for anything beyond a toy project? So we definitely had users in the gaming community for save games. We also had some users who were just trying to sync user preferences.

It was kind of deprecated in 2018 with the introduction of AppSync. So AppSync with GraphQL and things like that provided much richer use cases that really didn't exist at the time in 2014 when...

Cognito Sync was built. So we really started pushing people towards its successor, which is AppSync, which allows you to do much richer and all the things that you want to do for syncing. It's much more of a Firebase type competitor than Cognito Sync was. Right. Yeah. And AppSync is one of my favorite services as well.

And well, going back to Cognito, it feels like it's been dormant for quite a number of years. But recently, they've actually announced quite a few updates and addressed a few of the problems that people have been complaining about for years. One of them was the fact that you couldn't customize the access token, so everyone had to use the ID token. Maybe, I guess, firstly,

Do you have any thoughts about that? Because I know a lot of people have read this blog post by Auth0 about why you shouldn't use the ID token. But when I spoke with a few of the guys from the AWS security team,

At least the feedback I got was that ID tokens are not evil, there are legitimate use cases for them, but for some reason they got demonized by that Auth0 blog post. Do you have any thoughts yourself about when to use an ID token, or whether you should use ID tokens to talk to APIs at all versus using access tokens?

Yeah, so I think part of understanding this is to say that I'm not an auth guy. So I grew up as a backend developer and I learned about auth through reading the specs and things like that. So I'm not sure that I have really strong opinions one way or another, but

But my role was to make something that scaled and was secure and things like that. So I think if you want strong opinions about when to use an ID token versus an access token, you want to talk to somebody who's very excited about auth and security. But I have some opinions on it.

With an ID token, it kind of decentralizes a lot of the calls that you need to make to do authorization. So when you embed claims in the token itself about what group somebody is in or have profile information, that means that you don't need in your data plane to call out to some service to get that information.
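
To make that concrete: once a token's signature has been verified, claims embedded in it, such as the cognito:groups claim that Cognito user pools put in their tokens, let the data plane authorize locally with no extra service call. A small sketch with illustrative claim values:

```python
# With claims embedded in the token, the data plane can make authorization
# decisions locally instead of calling a user/profile service per request.
def can_delete_posts(claims: dict) -> bool:
    # "cognito:groups" is the claim Cognito user pools use for group
    # membership in the tokens they issue.
    return "admins" in claims.get("cognito:groups", [])

# Claims as they might appear after a token has been verified and decoded
# (values here are illustrative).
id_token_claims = {
    "sub": "user-42",
    "email": "dev@example.com",
    "cognito:groups": ["admins", "beta-testers"],
}
```

The trade-off is staleness: a change in group membership only shows up once a fresh token is issued.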

which does make it lighter weight and more scalable. Whereas access tokens, you do need to call out. I guess I don't really have, I mean, I could go both ways. I know that API Gateway supports both ID tokens and access tokens for securing your backend. And I think,

I feel like I'm not the person to corner here on when to use one over the other, or the benefits. I just know that, from my perspective, I usually use ID tokens. They're good enough for me, I like the way that they scale, and I think that they're secure for my use cases.

Yeah, I always feel comfortable with using ID tokens as well. But occasionally I get this pushback because of that Auth0 blog post. But that's fine. I mean, they can contain PII, so that matters if you're trying to hide some PII, but it should only be the end user who has them. And usually with an access token, you could use the access token to access the same profile.

I don't really have strong opinions. I could probably be convinced either way. Yeah, same here. I'm not an auth guy either. And for that API thing, like I said, if you've got an access token, you can call all these APIs. There's probably a lot more damage you can do with an access token by calling some APIs to do very different things than just, oh, your name is Yan, we're going to use that information. Yeah.

And okay, so in that case, you were at AWS for, like you said, 15 years. That's quite a long time. Yeah, I have a show and tell here. Here's my purple badge. This means 15 years. Yeah, for folks who are not aware of that, when you join AWS, you get a blue badge first, and then after five years, and then ten years, you get different color badges.

Yeah, I can joke about that a little bit. So you start with a blue badge. I have some kind of fun names for the colors. So the blue badge is the color of the water that you're drowning in. And then the yellow badge at five years means that you've pissed yourself at least once. And then the red badge is red because it's covered in blood. Some of it's yours. Most of it's others.

And then the purple badge is, if you can't afford a wine cellar, you've done it wrong. So those are my tongue-in-cheek ways of describing the badge colors. Right.

And I can't comment on any of the upper levels of badges because I haven't achieved them. Right. But I guess the point behind that is it takes a lot of courage to survive in an environment like AWS.

And especially when you're building all these large-scale systems, it's very easy to upset a lot of people if you just do one thing wrong. And I have heard stories about people who are otherwise brilliant engineers not being able to thrive within AWS. And AWS is very big on this whole leadership principles thing. So what would you say

are the secret or the things that you must do to succeed in a large company like AWS? Yeah, so I think there's kind of several tricks to succeed at AWS. And I think if I was giving advice to somebody who was joining Amazon,

I think the most weight is put on your ability to just handle stuff. So you're given a project or you're given some mess to clean up and your ability to just holistically look at it in a calm way, understand exactly what's going on, understand what data you have, what data you need to get, whether you need to add a bunch of metrics,

understanding in depth exactly what went wrong or what needs to happen, then pragmatically thinking about it for a bit, connecting all the dots, and just making it go away. If you're given a project, it's yours, you own it, and you do the absolute best job that you can, dotting all your

i's and crossing your t's and doing stuff like that. And one of the best ways of learning how to handle things is to watch the elders. I mean, they've seen everything, and if you can pair yourself with somebody who's been there for a while, they know exactly.

It's different for each person, what exact leadership principles that you apply to each problem that you encounter. And just watching somebody who's been in the trenches for a while, it's kind of like if you watch somebody who's used Vim for a while or something like that, it's just magical almost how quickly they're able to kind of parkour through all of the... Just jumping around, it looks...

like it's confusing and chaotic, and somehow they're able to do all this. But to fully appreciate it, you just need to see it done a couple of times and realize that some of these impossible things are actually possible. I know that I learned a lot from the people who'd been there for a really long time, and I think that when you're coming in, just

watch how some of these things happen, because there are so many cheat codes that you can use if you know them. I mean, I can give one example here of a cheat code that I learned kind of late. There's a really intense interview process, and the bar raiser has the final say. So if the bar raiser doesn't like the candidate,

and you can't convince the bar raiser that this is a good candidate, that that candidate doesn't get hired. I could remember one time I tried this, that the bar raiser was hung up about something about this candidate. Everybody on the team thought this was a wonderful candidate. They'd matched well with the team, they had great credentials, but the bar raiser was hung up.

I just kind of let the bar raiser talk. And at the very end of the debrief, I said, I think it would be a mistake to not hire this candidate. And the bar raiser kind of said, okay, does anybody else feel this way? And everybody else jumped in and said, yeah, this would be a total mistake not to hire this candidate. And then the bar raiser slowly...

came over to our side. And these are really little things, but until you have the experience, they're not things that you would even dare to try. So, things like... I mean, you'll never go wrong with the data. So if you're

in a sticky situation where maybe something went wrong, there was an operational event, you bring all the data that you can. Nobody's going to jump on you if you have the data to back up what happened, if you understand what happened and you know

everything that went wrong and all the areas that you need to fix. So you kind of just need to come prepared into these situations. If you ever come unprepared or you overstep what you know, so you go into the realm of speculation during a discussion about an operational event, you'll immediately get pounced on. So if you don't know,

say you don't know and you'll find out. That's how you lower the temperature of the room. But if you say, well, maybe it was this, then all the smart people in the room will immediately tell you why it wasn't that. Definitely, another tactic that works extremely well there is to focus on three things.

So you always need razor focus, whether you're trying to solve some operational problem or something like that. You need three things that you're focusing on, and you need to chip away at them a little bit every day and continue to show progress. And as long as you're

focused on something, and you can always show what you're focused on and how you've made progress, you'll always do much better than if you can't show that. So those are a few things. I think another smart choice is: if you get any Amazon stock, hold on to it.

And in terms of getting data, for example, you mentioned earlier about pitching a new service. So something like a new service or a new feature for Cognito, for example, how would you collect the data? Because it's a feature that you don't have yet. You can't really prove that people are definitely going to use it. So how would you approach something like that?

Yeah. So for a feature, a new feature, it's kind of a combination of talking to the customers, seeing where their problem areas are. I mean, the customer will tell you what they want, but it's up to you to decide what they actually need. So they may describe what they want, which is different than what you should actually be building. So there's a lot of

the PM reaching out to the customers and doing interviews. And it's great as a developer to just talk to the customer as well and figure out exactly what their pain points are. Oftentimes you already know what their pain points are, but sometimes you can get signals that way. And I think

The other thing is just to instrument everything you possibly can, kind of this notion of having really wide log events so that when you need to answer a question that is ad hoc, that there's some brand new thing that you need to answer, you could go to your logs and write a query that gives you

at least the first piece of the data that you need to answer your question. So we definitely work backwards from the customers, and we try to use our data to back up some of these assumptions that we've made. And then there's kind of an iterative

cycle: usually there's a private beta where we've invited a couple of customers to try it out and give feedback, so that we make sure there are no glaring issues. Then we go on to public beta and then general availability. There's definitely a lot of thought in the design phase, where we

talk to principals about the design, see whether they've seen similar things, what our pitfalls are, and whether this sounds right. So there's a lot of validation, and we try to back it up as much as we can with actual data versus anecdotal customer comments or things like that. The checklist of things that you have to do to launch a feature continues to get longer. But...

It's still possible. I mean, there's no way that anybody could launch an AWS service in three months like we did back in 2014 with all of the necessary reviews and sign-offs and things like that. And you need to launch your CloudFormation support. You just can't launch without CloudFormation. So all these little things, CloudTrail, CloudFormation, that all takes time. They're requirements now. You can't do them after the fact.
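
The "really wide log events" idea mentioned a little earlier can be sketched as emitting one structured JSON line per unit of work, so that ad hoc questions can later be answered with a single log query. Field names here are illustrative:

```python
import json
import time

def emit_wide_event(**fields) -> str:
    """Emit one wide, structured log line per unit of work. Because every
    field lands in the same JSON object, ad hoc questions can later be
    answered with a single log query instead of new instrumentation."""
    event = {"timestamp": time.time(), **fields}
    line = json.dumps(event)
    print(line)  # in practice this goes to CloudWatch Logs or similar
    return line

# Illustrative field names, not taken from any real service's logs.
line = emit_wide_event(
    service="identity-service",
    operation="GetCredentials",
    caller_id="us-east-1:abc-123",
    latency_ms=42,
    cache_hit=False,
    outcome="success",
)
```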

I guess maybe one exception was that S3 thing, that S3 charges you for invalid or unauthorized requests. That got turned around pretty quickly, but I guess that was maybe a special case because it was a big public hoo-ha about that. So Jeff Barr and a lot of people just jumped on it straight away.

Yeah, so I mean, security is job zero at Amazon, so I would consider that falling in the high-stakes security realm, and everybody rallies around just getting that done. So that's something where you've got the backing of a very

smart and scrappy and clever company behind you. There's lots of examples of stuff like that happening, where if it needs to be done fast, it is done fast. People will find a way.

And one more thing about your time at AWS. This is something that you mentioned before we started recording, this Welcome Back Buddy experience, which I've never heard about. What is it? So this is kind of a perfect-world type description of things. So this is when I'm thinking about developer experience,

This is kind of how I think of things in the back of my mind, which is the welcome back buddy experience. So typically when you're interacting with computers, they have this bit of amnesia. So they don't really know who you are or what you're trying to do, and they don't really help you. But when you're going over to a friend's house, like they don't ask you what's your username, what's your password.

Things like that. They say, welcome back, buddy. Here's your favorite drink. And you just have a good time. And I think that when you're building a developer experience, it's your job to kind of

collect all this context and use it so that you can give this welcoming experience. So that, for example, if you're talking to a voice assistant or something, and

if you say turn off the lights and it asks you, which room would you like to turn off the lights in? That's something that would never happen if you're talking to another human. Right. If you're in the same room as a human and you say, hey, can you turn off the lights? They just turn off the lights. There's no back and forth. There's no formatting of inputs, like,

oh, you said the room, but I didn't really understand it. Could you rephrase it in this different format or something like that? So kind of that's what I'm chasing, like for the ultimate developer experience. And I think that AI has made some pretty good progress on using all the context available to make this more effortless. And I think I tweeted a while back about that.

the goal of auth is to make authentication effortless for your end users and impossible for your adversaries. And we're currently somewhere in the middle, where it's somewhat frustrating for you to log in. In a human interaction for login, they know who you are and they just let you in. But if

it's not you, then, I mean, they're not as secure as computers are, in that you could probably sweet-talk your way in. But one of the things that I was really frustrated at at Amazon was that I would come back to my desk the next day and something would have happened in the night, so that instead of me being able to pick up where I left off,

maybe somebody deployed something in some corner and something broke, and so I'd have to spend a bunch of time trying to figure out: I want to do X, but I have to do all this warm-up. Okay, what happened? Why can't I do this? Who changed something? And so the idea of the welcome back buddy experience is to make it so that

it's like visiting a friend: you go straight into what matters, and you're not dehumanized by entering all of these things that the computer should already know. So that's the concept. That's one of the ideas I'm playing with behind Speedrun: when you're trying to do something, it actually helps you do it, instead of getting in your way and asking you for a bunch of stuff it should already know.

Okay. I mean, conceptually, I understand that you want to not only personalize the experience, but also be aware of the context, the way ChatGPT is aware of the conversation that you just had, so you don't have to repeat a bunch of things every single time. So do you have something that you can maybe quickly demo and show off, of what you've been working on and how you're translating this idea into practice?

Yeah, so let me share my screen here and I'll show you a few things. Let me get this out of the way. You could see my screen right now? Yep, I can see it. Okay. Yeah, let me just kind of show you. So the idea here is that when you need to do something, you want it to be as effortless as possible. And

Every software development team has some documentation. And usually that documentation is in some state of bit rot, depending on when it was last touched and when it was last used and how pragmatic people are about keeping it up to date.

They might have errors in them; things might have changed. And so the idea of Speedrun is to make your docs actually help you do something, so that they're always up to date instead of just telling you what to do. Because if they just tell you what to do, then people will not always follow the docs. They'll skip steps, they'll make mistakes, things like that. So the idea of Speedrun is to make it so that

you can put enough context into your docs that you can do exactly what they say with a click, and they're actually useful to you, so that people keep them up to date, versus letting them atrophy over time. So what we're looking at here is just a normal GitHub wiki, and Speedrun technology is running on top of it. It knows about

all of my accounts and regions that my service is running in. Here I'm running in two regions: I've got Oregon and Ohio. The toolbar just gives you quick access. So if I want to go into my IAM account in the console, it will get me credentials and federate me into the console in the right place so that I can do stuff here. It looks like

I've really locked down these particular accounts so that you can't mess around in my AWS accounts, because this is a live demo that you could even run. But anyway, let's do a quick overview of the toolbar. You've got your last GitHub issues that you were looking at, so you could quickly go back to those. And...

In settings here, you can configure all your settings for yourself, what your personal account is, things like that. So let's get into what Speedrun really does. Speedrun allows you to just dump stuff into your documentation with a little bit of Markdown, so that

if you just ran a CloudWatch query, you can now make that CloudWatch query available to everybody on your team to run. So let's say you have some operational task where you need to run this query, some ad hoc thing: you need to look up some customer's account, figure something out and follow up on it. The only code that you've put in your documentation, is this too small to see? I can make it a little bit bigger.

is that I want to run a CloudWatch Insights query. I can see it fine. And here's my query that I want to run. So that's all I've put into my documentation. And what that gives you is this button that's going to take the context here, in this service, in this region, and it's going to build the exact command and run it, so that I'm now in US West 2, or Oregon, with the right log group,

with the query that I need to run to get my answer. So I just click run query and I get my answer here. And it can take user inputs. Oftentimes when you're doing ad hoc things, it's not enough to have a canned query; you need some information from the user. So here's a similar example.

To prompt for user inputs, you surround the prompt with a bunch of tildes. Here I want to prompt for the number of rocks collected. And this is a more advanced user input: it's a type select, so it's a dropdown with these options in it. And I just embed that kind of Markdown directly into my query. And so what that turns into, when I click this,

is how many rocks collected do you want to see? So I have this and I get a nice user interface where I can type ahead, find things, I click OK. And so this is kind of a wrapper around the AWS console, which is sometimes a bit obtuse to use. That's now much easier to use. So now you can see it's built the query with 20 in it. So when I click this, I get my results.
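Under the hood, a button like that boils down to substituting the prompted value into the query string, calling the Logs Insights StartQuery API, and polling for results. Here is a rough sketch of that flow with boto3; the log group, the query template, and the `{rocks_collected}` placeholder syntax are illustrative stand-ins, not Speedrun's actual internals (Speedrun's real prompt syntax uses tildes).

```python
import time

def build_query(template: str, inputs: dict) -> str:
    """Substitute prompted user inputs into a query template."""
    for name, value in inputs.items():
        template = template.replace("{" + name + "}", str(value))
    return template

# Illustrative template; the field names are made up for the demo.
TEMPLATE = "fields @timestamp, @message | filter rocks = {rocks_collected} | limit 20"

def run_insights_query(log_group: str, inputs: dict, region: str = "us-west-2"):
    import boto3  # requires AWS credentials to be configured locally
    logs = boto3.client("logs", region_name=region)
    start = logs.start_query(
        logGroupName=log_group,
        startTime=int(time.time()) - 3600,  # look at the last hour
        endTime=int(time.time()),
        queryString=build_query(TEMPLATE, inputs),
    )
    # Logs Insights is asynchronous: poll until the query finishes.
    while True:
        result = logs.get_query_results(queryId=start["queryId"])
        if result["status"] in ("Complete", "Failed", "Cancelled"):
            return result["results"]
        time.sleep(1)
```

The point of the demo is that the documentation author only writes the template; the context (account, region, log group) and the prompting UI come from Speedrun.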

Man, this feels like black magic.

Yeah. And you could use it for whatever you want. So this is serving as a wrapper around the console. There are lots of things you can do. For example, you can invoke a Lambda directly from your documentation now. So if you have some sort of value-added thing that you need to call out to a Lambda for, here's a quick example. I want to invoke a Lambda; here's the name of my function; and then I want to give it this JSON. So I have a name.

I'm going to prompt you for your name and by default the name is going to be Samuel Jackson here. So when I click this, you can see here that it's defaulted the name to Samuel Jackson. I can run JavaScript code so I can do any kind of formatting that I want. Like maybe it's a date or I need to

translate from one format to another, whatever it is. But essentially, now I have this UI, I enter this, I click OK, and it has called my Lambda here. You can see it's copied something to the clipboard. So if I go and paste what it's returned here:

"Hello, burning monk from Lambda and US West 2." And you can see here that I'm not worried about logging into the console or getting credentials or doing anything like that. I just say, "Look, I want to run this in US West 2. Here are my inputs. You do the rest." Right? And it kind of handles the rest for me. This one's a little bit more advanced. This one's invoking a Lambda function URL.

And I could actually show you a little behind the scenes of what's going on. So this is how you run JavaScript. You can reference variables using this dollar-and-curly-brace syntax, which means that I've defined the Lambda function URL somewhere. So let's go down here and I'll show that to you. This is the whole config behind this page.

It's just a bit of JSON. I've defined a bunch of stuff: my role is the demo role; here are all my services. I have this Dekacorn service, and you can define stuff at different levels. So I've got, all right, here's the S3 bucket name, and it's based on the region that I'm currently in. Here's the region. So in US West 2, this

variable will be replaced by US West 2, things like that. But you can see I've defined the function URL here for this account, so that whenever I'm using this, I don't need to keep putting in what the function URL is; it just replaces it. If I go back up here, it replaces this with that, based on the current context.

Okay, so it gets the region from the current context in the session that you're logged into right now. Yeah, if you look at the debug, you could see that it's been able to resolve the bucket name to US West 2. Like if I go ahead and I change this to US East 2, right, that this is now US East 2.

So this is showing you some of the debug information, what it was able to resolve based on your current context. And you can override. You can see here that I'm defining the function name right here. So if I wanted to change the role, I'd just say role equals my special role or whatever, and that would override it for just this particular block.
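The dollar-and-curly-brace substitution can be approximated with a small resolver: look each `${var}` up in per-block overrides first, then the current region's config, then the defaults. A sketch under those assumptions follows; the config values are made up for illustration and only loosely mirror the shape of the demo's page config.

```python
import re

# Made-up config, loosely mirroring the demo's page config.
CONFIG = {
    "default": {"role": "demo-role"},
    "us-west-2": {"bucket": "rocks-us-west-2"},
    "us-east-2": {"bucket": "rocks-us-east-2"},
}

def resolve(text: str, region: str, overrides: dict = None) -> str:
    """Replace ${var} references: overrides win over region config over defaults."""
    scope = {**CONFIG["default"], **CONFIG.get(region, {}), **(overrides or {})}
    scope["region"] = region  # the current context always supplies the region

    def substitute(match):
        name = match.group(1)
        if name not in scope:
            raise KeyError(f"unresolved variable: {name}")
        return str(scope[name])

    return re.sub(r"\$\{(\w+)\}", substitute, text)
```

Switching the current region from US West 2 to US East 2 changes what every `${bucket}`-style reference resolves to, which is the behavior shown in the debug view.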

So you can define whatever it is at any level. You can transclude your config on another page so that you're not repeating it everywhere; there's some hierarchy there. This one, well, let me invoke this. This one is kind of a silly demo, assuming that you have a warm nut dispenser.

So you go to the break room and you get some warm nuts. And here I can specify whether I want the nuts warm or cold. I click OK. It's going to call my Lambda endpoint. And then if I paste it here, and it knows who I am. So it's, hey, purple, that's my GitHub login. Here's some warm nuts from the region. And it's given me a bunch of nuts here.

And if I run it again, it will give me another set. And I'll do that again. But it remembers everything that I put in before. Okay, so here you can see that it's a little bit different when it's put back. And then here's kind of a fun one about invoking a step function. So this one...

There's a soup salesman and you have to be nice to him. There's a sitcom in the United States called Seinfeld, and there's a particular episode with this kind of irascible shop owner whom you have to ask nicely, or he won't give you any soup. So this is: hey, your hair looks nice, can I have a clam chowder? So this will kick off a Step Function with that input. So here...

it does some sentiment analysis on what I've said, using Rekognition, I believe, or Comprehend, excuse me, and decides whether I've asked nicely or not. But if I go back and say, your hair looks like crap, you can see here: no soup. It didn't like that. But again,
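The moving parts of the soup demo are a Step Functions execution whose input is the user's request, and a Comprehend DetectSentiment call somewhere inside it. A hedged sketch of that shape, where the state machine ARN and the exact decision rule are assumptions:

```python
import json

def soup_decision(sentiment: str) -> str:
    """Map a Comprehend sentiment label to the soup salesman's answer."""
    return "Here is your clam chowder." if sentiment == "POSITIVE" else "No soup for you!"

def start_soup_request(state_machine_arn: str, request: str) -> str:
    """Kick off the Step Function with the user's request as input."""
    import boto3  # requires AWS credentials to be configured
    sfn = boto3.client("stepfunctions", region_name="us-west-2")
    execution = sfn.start_execution(
        stateMachineArn=state_machine_arn,
        input=json.dumps({"request": request}),
    )
    return execution["executionArn"]

def detect_sentiment(request: str) -> str:
    """Roughly what a task inside the state machine would do with Comprehend."""
    import boto3
    comprehend = boto3.client("comprehend", region_name="us-west-2")
    return comprehend.detect_sentiment(Text=request, LanguageCode="en")["Sentiment"]
```

From the documentation page's point of view, all of this is behind one block: the reader types a request and gets soup, or doesn't.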

I basically put nothing on my page, but I get this very powerful thing. And it's useful for building command lines. You can embed iframes, so you could embed CodePens or YouTube videos into your content. Here it is filling in a template.

It's running a little bit of JavaScript to get the current date. Really what I'm trying to say is, I built something similar at AWS to do operations, because there was always something going on, and this was the simplest way of throwing something into a wiki and enabling my team to do exactly what I just did, immediately.

So I could build a Speedrun block in a minute or two, and then everybody on my team immediately had this tool that would help them do something. It was kind of a powerful catalyst for getting out of messes and doing operations. So now I'm rebuilding it on top of GitHub and some of the other technologies that are available outside of Amazon, and I'm having a lot of fun doing it.

Any questions? Yeah, because this looks to me like someone took the idea of, say, Retool, but instead of building custom UI components, you basically do it in Markdown instead, especially if all of your integration is mostly with the AWS console. Yeah, so, I mean, you can also invoke command lines. So let me show you how that works here.

With DynamoDB, for example, you could open this in the console; that's probably how you want to interact with it. This is finding the occurrences of a song lyric in a Dynamo table. But you could also run this from the command line. Now, you'd never want to hand this off to your PM and say, run this, because they'll screw something up. But you could give them

Basically, I've taken the command that you'd run with the key and what it needs. So I just put copy here, you click OK. And if they've got permission to the role, so it wraps the command with the command to get credentials. And let me make this a little bit bigger here. So I have this command to call my endpoint to get credentials and set up your command line with those credentials. And then to...

invoke the AWS CLI to get your answer. So this is spitting out two, which is... So basically, if you're really good with one-liners and things like that, you could just throw them into your documentation, and this will make them usable to all. It will wrap them with a UI and make them somewhat palatable for everybody else on your team. So...

And then you also don't need to worry about your, if you're doing a live stream, don't need to worry about your credentials getting leaked or anything like that, because this is all temporary credentials. You never see any credentials anywhere. So it's also fun for that use case too.
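The wrapping just described, fetch short-lived credentials, prefix them as environment variables, and append the actual CLI command, can be sketched like this. STS AssumeRole stands in here for Speedrun's own credentials endpoint, whose API isn't public, and the role ARN and command are hypothetical.

```python
def wrap_with_credentials(command: str, creds: dict) -> str:
    """Prefix a CLI command with temporary credentials as inline env vars."""
    env = (
        f"AWS_ACCESS_KEY_ID={creds['AccessKeyId']} "
        f"AWS_SECRET_ACCESS_KEY={creds['SecretAccessKey']} "
        f"AWS_SESSION_TOKEN={creds['SessionToken']} "
    )
    return env + command

def assume_and_wrap(role_arn: str, command: str) -> str:
    import boto3  # the STS call itself needs base credentials configured
    response = boto3.client("sts").assume_role(
        RoleArn=role_arn, RoleSessionName="speedrun-demo"
    )
    return wrap_with_credentials(command, response["Credentials"])
```

Inline environment variables scope the credentials to that single command, and session credentials expire on their own, which is why nothing sensitive survives a live stream.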

Cool. This looks really, really good. And I'm even thinking about using this for some of my workshops, because oftentimes I find that people struggle to follow instructions. Like you said, essentially a lot of my workshop content is just instruction steps, like a playbook. Sometimes people...

make mistakes in terms of what commands they run, which order they do them in, things like that. And this feels like, okay, this can make life a lot easier. I can have a step that says, once you've made these changes, then deploy, or hit this command to run a query. And there are quite a few use cases I can think of besides the playbook, even just for training.

Yeah, there's even one more fun use case. I don't know if you use Identity Center; this is a use case that I created recently. If you're using Identity Center, let's see, what do I have access to here? Let's just go to CloudWatch Logs or something like that. So it gives you this little Speedrun icon in the console.

And what that does is it builds a deep link to this exact page. So if you're somewhere in the console and you want to share where you are, in a particular account, in a particular role, you just click this, and it will build the exact link that you could share with somebody. And then they don't need anything installed; they could just pick up exactly where you were.

So this works with both my Speedrun credentials broker and with the Identity Center credentials broker. So it will automatically detect how you've logged in and give you that link. That's really cool. So if someone wants to go and try it out today, where do they go?
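For the Identity Center flavor, a deep link like this can be assembled from the access portal URL, the account, the permission-set role, and the URL-encoded console destination. The shape below follows Identity Center's documented console links; the portal subdomain, account, and role here are placeholders, and Speedrun's own broker links presumably use a different format.

```python
from urllib.parse import quote

def identity_center_deep_link(portal_subdomain: str, account_id: str,
                              role_name: str, destination: str) -> str:
    """Build an IAM Identity Center link that federates into a console page."""
    return (
        f"https://{portal_subdomain}.awsapps.com/start/#/console"
        f"?account_id={account_id}"
        f"&role_name={quote(role_name)}"
        f"&destination={quote(destination, safe='')}"  # encode slashes too
    )
```

Anyone with access to that account and role can open the link and land on the exact console page, with nothing installed on their end.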

Is that something that is available right now, that someone can just download and install? Yeah, so it's open source. If you go to speedrun.cc, that will take you to my website, and it'll give you a bit of an overview of what it is and what it does. And you mentioned maybe you could have this for your customers. So

I will say that at this point it's a little bit scrappy. You need to install Tampermonkey and you need to install this script in order for it to work. I'm still trying to figure out where the product-market fit is, and whether I need to recreate this as a standalone script that will run on any website, so anywhere,

like your website, so that nobody needs to install anything, or whether I need to make it a React component or whatever. Because right now it's piggybacking on GitHub for the wiki functionality, or the Markdown functionality, and you need to install this script in order for it to see these Speedrun blocks and wire them all up.

But I'm looking to make it so that you don't need to install anything, that it just works out of the box. So that's something to talk about with me if you're interested in doing that. When I was at AWS, it wasn't an issue to install a Tampermonkey script, but I'm finding a

much bigger headwind outside of AWS. What is this thing? It looks really cool, but I'm not particularly comfortable. So if you want to talk further, I'd be excited to see what your use case is. I definitely did use it when I was doing workshops or giving demos, just so somebody could click, click, click and do something very quickly to get set up.

Yeah, there is a little bit of friction, I think, in getting started, which I'd love to get rid of.

Okay, I guess that's where you're going to have to think about how to productionize and package it, so that the developer experience of using the tool is nicer. But if it's piggybacking off of GitHub Markdown, would it be possible to just take that and make it work for Bitbucket and other platforms,

or other Git providers, if they're all Markdown in some fashion? - Yeah, I mean, the tech just requires Markdown, essentially. It's looking for code blocks formatted in a certain way, and then it's able to wire everything up. So I could make this work on top of any Markdown. I chose GitHub and AWS first because I thought that's where everybody would be, that everybody's pretty much using GitHub.

And it works on things like READMEs and stuff too. Let me just go to the README for my repo here. So this is my website. You can see here at the top that I can switch between my prod and data endpoints, and then it will build the exact commands I need to deploy.

So this is kind of a nice way of building tools right into your documentation, making it actually useful to your end users. Yeah, this is pretty cool.

No more copying from the Markdown and then running it in the console or in the terminal; you just run it straight from there. Yeah, so it's kind of a layer. Like you said, it's not quite Retool; it's this layer between ClickOps and IaC, where it tries to take the best of both worlds so that it's easy and it

provides some guardrails, so that you're not shooting yourself in the foot all the time. So that's where I am, and where I'm trying to be: make it as easy as possible, without having to learn much of a new language, or do deployments and rebuild all your stuff on somebody else's platform that might disappear or something like that. It's just Markdown; that's what I'm trying to do. Right.

Yeah, I didn't realize that that's what you've been working on, but this looks really interesting. I'm definitely going to try it out for myself, because I've got a few things where this might be quite useful, especially for some of my side projects, where I just want to do a deployment, or try something, or use CloudWatch Logs Insights to query some of the log messages and things like that, which,

again, is just a lot of clicks in the console to get to where I need to go. Whereas I can just put them into the Markdown in my repo, and that will take me straight there and run the query. Yeah, I kind of use it to keep breadcrumbs of everything that I've ever done, so that if I need to go back to it, I can easily replicate it. And just the whole fact that it remembers what you put in last time,

so that you're picking up where you left off. Like if you're iterating on something, oh, I messed up that time, but the form is already filled out with what you put in last time, so that you're not starting from scratch every time. I think that really helps a lot.

Yeah, this looks very interesting. I guess when you're getting closer to productionizing this, making it a little bit easier to install, we should probably do another session, maybe a webinar, and do a proper demo of what that looks like once you get to that point. Yeah, well, I mean, I've had 100% uptime for the last year. It's kind of ready for production, but it's more for developers at this point, I would say, than it is

for developers' end users. So definitely, if you're a developer and you're comfortable with what's going on, it's ready for you; it's been ready for a while. But I'm trying to figure out exactly how to package it so that it can also be used by your end users without any kind of install or anything like that. And I use it every day.

But I'm happy to say that I've had 100% uptime. There's no worries using it for your own development type stuff.

I guess I was thinking, it's not so much for you. I think you mentioned this before as well, that you can let your engineering manager or someone else do this, but to run the script, they still have to have Speedrun installed. For a developer, it might be okay to install all these different things, but if you want your manager, who's less technical or hands-on, to use it, it might be easier for them to have a way to install Speedrun without fiddling around with all of these scripts.

Yeah, right now it's a one-time setup, but yeah, it's, I mean, I really want to make it so that there's zero steps to get started, that you just go to the website and it works. Right. Okay. But this looks really exciting anyway. Thanks for showing me this. Yeah.

So, okay, we are a little bit over time already. And thanks so much, David, for coming on the show, showing me Speedrun, and also giving us the backstory for Cognito. Before we go, is there anything else that you want to tell the audience, if they want to learn more about what you've been working on? I'll leave the link to Speedrun down below as well. Anything else that you want to share before we go?

Yeah, so on the same site, I have a blog where I try to put interesting things that I'm finding in serverless. I don't have any paywalls or any expectations of you using my product or anything like that. So I try to find some interesting things a little bit orthogonal to what I'm working on. I talk about cold starts, I talk about

using edge functions; I talk about some interesting things there. So there's some interesting content, if you're interested in that. Also, I just want to thank you, Yan, for giving me a chance to be on your podcast. This is my first podcast, and

I'm sure it's going to be the first of many. By the way, I have read quite a lot of your recent blog posts. I really enjoy them, the stuff you've been talking about regarding CloudFront Functions and also function URLs as well. So yeah, thank you for sharing all of these things. And I'm glad you didn't do that on Medium, because Medium's paywall has been driving me nuts.

Yeah, so some of these are kind of 500-level; they're just kind of stream of consciousness, what's in my mind. Also, I'm happy to help if anybody has questions about some of these wilder things that I've done. I know a lot of people have reached out to me on LinkedIn with follow-up questions and things like that. So

I'm not trying to sell you anything; I'm trying to help. I really love the space that I'm in with AWS, and I hope that we can learn together and I can help you in whatever way I can.

Okay, I'll leave David's contact information down below in the description as well, so that if you guys have any follow-up questions about Speedrun or any of the things that David's been blogging about, you can go and ask him in person. Well, on social media, at least. And yeah, David, thank you so much again. I hope you're going to be at reInvent; I'm looking forward to seeing you in person.

Yep, I still need to get my ticket, but I have my hotel. I'll be at GitHub Universe and I'll be at reInvent this year; I guess that will be my sixth time at reInvent. Right. Okay. In that case, I look forward to meeting you when reInvent comes along. And thanks, everyone, for joining us on this episode. I hope to see you guys next time as well. Take care. All right. Cheers, Yan. Cheers, guys. Bye.

Thank you to Memento for supporting this episode. To learn more about their real-time data platform and how they can help you accelerate product development, go to gomemento.co/theburningmonk for more information.

So that's it for another episode of Real World Serverless. To access the show notes, please go to realworldserverless.com. If you want to learn how to build production-ready serverless applications, please check out my upcoming courses at productionreadyserverless.com. And I'll see you guys next time.