
#436 Slow tests go last

2025/6/16

Python Bytes

People
Brian
Python developer and podcast host, focused on testing and software development education.
Michael
Python developer and podcast host; creator of the Talk Python podcast and Talk Python Training courses.
Topics
Brian: I think it's encouraging news that free threading is no longer experimental in Python 3.14 — it means we're ready to support it. There's still a lot of work in phase two, such as ensuring API/ABI compatibility and performance and memory guardrails, but the community is confident about broad adoption of free threading. Becoming the default will still take at least a few years, though, because of the stable ABI support requirement. Personally, I'm excited about where free threading is headed.

Michael: I'm also really looking forward to free threading — it opens up a lot of interesting opportunities. Right now, without explicit multiprocessing, Python code can only use about 10% of your computing resources, and free threading can run code in parallel without serializing over to multiple processes, which is great. However, people writing Python (or any other language) often don't think enough about thread safety. I think the complexity lies in libraries rather than applications: as a library developer, you can't decide whether your library gets used in a multi-threaded environment. We don't want to lose Python's ease of use.

Deep Dive

Chapters
The Python community celebrates the removal of the "experimental" tag from free-threaded Python in 3.14. However, it's a multi-year journey to become the default, requiring API/ABI stability, documentation enhancements, and community adoption. The discussion covers thread safety considerations and potential challenges for library developers.
  • Free-threaded Python is no longer experimental in Python 3.14.
  • Becoming the default build will take several years.
  • Thread safety and documentation are crucial for broader adoption.

Transcript


Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds. This is episode 436.

recorded June 16th, 2025. I'm Michael Kennedy. And I'm Brian Okken. And this episode is brought to you by PropelAuth. We want to say thank you, thank you to PropelAuth for sponsoring the show. I'm going to tell you more about them later, but TLDR: if you have to do authentication in your

app, that can be a huge hassle. It's not the core business of your app. Give them a look — they'll solve that problem for you and let you get back to building whatever you're supposed to be building. Speaking of supposed to, you're supposed to be following us on some form of social, I would say. Don't you think, Brian? So we've got our social media links at the top of the show notes, so check that out there. We do our live streaming — if you want to be part of the show while we record it, we flip the switch and make it go live around

10 a.m. on Mondays, Pacific time, and all the older versions are there as well. And finally, if you want a really nice, detailed email summary with extra information and background details on what we talk about, become a friend of the show: sign up for our mailing list. We're not here to spam you or resell you, for sure — just to send you notes about things like what we covered in the show, maybe a very rare

announcement of, like, a new course or some event or something like that. But we'd appreciate it if you signed up there.

As well. And Brian, I've always been impressed with the ability to do multi-threaded programming. I've enjoyed it on other languages and platforms where it was sort of full-featured, so that's why I was so excited when free-threaded Python came out. But it only partly came out, didn't it, in 3.13? Like, partially, or with a caveat. Yeah, so, let's see. Here we go.

And what was that? PEP 703. So anyway, I can't remember the PEP for when we had free-threaded as an option that you could turn on. But now there was an announcement. Exciting news. I saw this on the socials, on Mastodon, from Hugo van Kemenade.

Exciting news: PEP 779, which set the criteria for supported status for free-threaded Python, has been accepted, which

means free-threaded Python is now a supported build. What that means is they will drop the experimental label — so for 3.14 beta 3, due on Tuesday, it will no longer be experimental. Really, what that means is we're ready to support it. Actually, I wasn't sure exactly what this meant, so I hopped over to a discussion Hugo linked to.

And I might have the wrong link there. So this discussion, here we go, was

talking about the steering council approving PEP 779 — which was the criteria — with the effect of removing the experimental label. And then there's a lot of detail about what all this means for phase two. There's a lot of stuff that has to happen in phase two, like making sure of the API/ABI compatibility requirements, and some performance and memory guardrails — we've talked about those before. I do like this: there's a section on what documentation needs to be there before we completely jump in with both feet. We need to make sure documentation is clearly written and maintained.

There are some high-level concurrency primitives and some benchmark requirements that are needed. If you pop down to the end, it says: we're confident that the project is on the right path, and we appreciate the continued dedication from everyone working to make free threading ready for broader adoption across the Python community. So there's a lot of work to do, and I wasn't quite sure exactly how much work is left. So, you know, I asked some people.

So we got a response of, like: hey, does this mean it's going to be the default? And I knew the answer — that it's not going to be for a while — but I wanted somebody more core than me to answer that. And Thomas Wouters says, basically, it's going to be a few years before it's really the default. And really, how do we say "default"?

It can't happen before 3.16, at least, because of the stable ABI support requirement, and it may take longer. And really, "default" is a squishy concept. So, good answer from Thomas. Thanks. But this is encouraging. I'm excited to move forward with the free-threading path. Yeah.

I am as well. You've got to go slow with this kind of thing because it's such a change, especially at the lower level — C API and integration, like you're talking about. Yeah.
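If you want to check whether the interpreter you're running is one of those free-threaded builds, a small stdlib probe works. Note this is a sketch: `Py_GIL_DISABLED` is the build-time flag for PEP 703 builds, and `sys._is_gil_enabled()` only exists on 3.13+ (and is underscore-private), so older interpreters fall back to assuming the GIL is on.

```python
import sys
import sysconfig

def free_threading_status():
    """Report whether this CPython was built free-threaded (PEP 703)
    and whether the GIL is actually disabled at runtime."""
    # Py_GIL_DISABLED is set at build time on free-threaded ("t") builds.
    built_free_threaded = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
    # sys._is_gil_enabled() exists on 3.13+; assume the GIL is on otherwise.
    gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()
    return built_free_threaded, not gil_enabled

built, gil_off = free_threading_status()
print(f"free-threaded build: {built}, GIL disabled: {gil_off}")
```

On a standard (non-"t") interpreter this reports `False, False`; on a 3.13t/3.14t build you'd expect `True, True` unless the GIL was re-enabled at runtime.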

But I'm very excited for it. I think it opens up a lot of very interesting opportunities. Like right now, if I write Python code and I don't do something explicit like multiprocessing, I can get 10% of my computing resources, which is pretty darn low. So the ability to just say "run this in parallel" and actually get it to run without the constraints of,

you know, serializing over to multiple processes, is really cool. And that's kind of where some of the documentation needs are. And maybe those docs are already there and I just don't know where they are. But the thoughts of, okay:

let's say I want my project to support free threading. What does that mean? What do I need to look out for? I mean, I obviously need to check all my dependencies to make sure they're tested on it. But what do I need to test? And things like that.

Good things to document. Yeah. I suspect if you're doing pure Python, it's pretty straightforward. There's always the concern that whatever you're doing needs to be thread safe, right? And I think people — put Python aside, in programming in general — don't think enough about

the thread-safety aspects, or even error-handling consistency type of stuff. Like: I took three steps, there was an error, but the fourth step was required to put things back into a consistent state. I caught the error — it's fine. No, it's not fine; it's really broken. So there are a lot of situations like that, I think, that you might need to consider. You know, if you're doing multi-line Python things, you might need a lock statement.

But we'll see what shakes out. We also really like Python to be super easy for people to come on board. I mean, people are building, you know, web scrapers in 10 lines of code or something. And

we don't want to get rid of that easiness. So yeah. Yeah, I totally, 100% agree with that. I do think the complexity lies down in the libraries more than in your application, right? Because in your application, you can decide: well, I'm not doing threading, so problem solved. But as a library developer, you can't necessarily decide — not without just saying it doesn't work — whether your library is being used in a multi-threaded situation. So I think the simple use case of throwing together a script...

So, Kishan out there says: "Do we need to use locks in Python now, with the free-threaded version?" I think yes — maybe — but only if you're writing multi-threaded code. But you might have needed that anyway, before, because of the multi-line consistency, right? Even though every line might run on its own, like this block of five lines, you're not guaranteed that it's going to run as a block. Anyway.
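A minimal sketch of that multi-line consistency point: the `counter += 1` below is a read-modify-write spread over several steps, so concurrent threads can interleave and lose updates; holding a lock around the block makes it atomic. The counter and thread counts are arbitrary illustration values.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n=10_000):
    global counter
    for _ in range(n):
        # Without the lock, this read-modify-write can interleave
        # between threads and silently drop increments.
        with lock:
            counter += 1

threads = [threading.Thread(target=add_many) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 — deterministic because the update is locked
```

On a GIL build the unlocked version often *appears* to work, which is exactly why this class of bug tends to surface only once code runs truly in parallel.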

Way more details than we necessarily need to go into, but it's going to be interesting. And I imagine these conversations are coming back once this is the default thing. And if you want to try it out, uv makes it super simple: create a virtual environment — I think probably 3.13.5t would do it. Although I haven't tried it...

I have not tried it. Well, yeah — but basically, definitely 3.14t, hopefully. Yeah. Clearly we should have tried this before. So we'll get back to you on that one. Yeah, exactly. We'll figure it out. Speaking of figuring it out, what am I going to figure out here? Actually, I have one other sort of async-and-threaded thing to talk about later — I mean, let's talk about that first. Let's change the order here. Look at that. Boom. We can do it live. So I want to talk about PyLeak.

P-Y-L-E-A-K. Like a memory leak, but instead of checking for memory leaks, what it looks for is asyncio task, thread, and event loop leaks. Okay? Right? So if I call a function that's asynchronous, and I think it's synchronous, and I call it without assigning it to a variable or awaiting it or anything like that, that just creates the coroutine, and the unexecuted coroutine just chills there until it gets cleaned up.

And it doesn't actually run that operation, right? Yeah. Not good. So that's what this library is about: detecting those types of things. So let's go look at some examples. So you can do

context managers — you can say async with no task leaks. And then somewhere within the execution of that context block, if you somehow call an async function but you don't await it — for example, here they use asyncio.create_task given a sleep — that will come up as an error. It can either be a warning or an exception. If you want it to break the build in a test, you can say treat as

errors, and it'll say: look, you called this function and you didn't wait for it. And tying this back to what you were just talking about, Brian: you can say with no thread leaks, and if you create a thread and start it but you don't hang on to it as a variable,

then it'll tell you, like: hey, you're no longer in control of this thread; this could be a problem, right? Basically, I imagine the thread kept running on the other side of that context manager — I'm not entirely sure. You can also do that for event loop blocking, right? And the event loop is the asyncio event loop. So this one's actually really interesting: if you're doing blocking work in an asyncio event loop, that work should itself be asyncio-aware, right? If I'm calling

an HTTP endpoint for some kind of external service, I should probably be using HTTPX's async client, not requests.get — which basically says: I'm doing an I/O thing, but it's blocking, because I'm not using the asyncio-native version of it, right? And so the example here is: if you call time.sleep when you're checking for no blocking, that's an error, because you should be using asyncio.sleep, which

you await, and that allows other work to happen at the same time, right? So basically, blocking clogs up the entire asyncio processing across all of the concurrent contexts. They're not threads, but — right — the stuff that can run side by side; it blocks it. And so this will detect that. That's really cool. Yeah. This is a really neat library. There's a bunch more stuff: you can do this as decorators, you can get detailed stack traces — it'll show you details about what has happened, like there's a leaked task called task two on line nine of this example, and it shows you the code of what happened. And so
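The blocking behavior they're describing can be demonstrated with only the standard library: two `asyncio.sleep` waits overlap, while two `time.sleep` calls freeze the loop one after the other. The sleep durations are arbitrary small values for illustration.

```python
import asyncio
import time

async def polite():
    # asyncio.sleep yields control, so other tasks run during the wait.
    await asyncio.sleep(0.05)

async def rude():
    # time.sleep blocks the whole event loop: nothing else runs meanwhile.
    time.sleep(0.05)

async def main():
    start = time.perf_counter()
    await asyncio.gather(polite(), polite())
    concurrent = time.perf_counter() - start   # ~0.05s: the waits overlap

    start = time.perf_counter()
    await asyncio.gather(rude(), rude())
    blocking = time.perf_counter() - start     # ~0.10s: the waits serialize
    return concurrent, blocking

concurrent, blocking = asyncio.run(main())
print(f"concurrent: {concurrent:.3f}s, blocking: {blocking:.3f}s")
```

This is the class of problem the event-loop-blocking check is meant to flag: the `rude` version still "works," it just quietly stalls every other task on the loop.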

and so on. Yeah, lots of different things going on here. I don't want to go into too much detail — I've kind of gone on and on about it — but it's not just a single thing. This is a pretty comprehensive library, and I like it. I like it a lot. I reached out to the Ruff folks and said it would be really great if you could detect when an async function is called but not awaited.

And they said: that really sounds great; we have considered it, or are considering it, but it's really hard to do. So, you know, maybe you could just throw this in as one more thing, set the error version, and then run pytest and see what happens, you know? Yeah. So I'm just curious how you would use this. I would expect,

especially as you're building up an application — maybe all the time, but especially your first time doing an async application, just to make sure that you're doing things right — putting some of these decorators around some of the methods in your code. Would you, once you have things production ready, take this stuff out, or would you leave it in place just to...

I think I might put it in a unit test, but take it out of production. Okay — probably what I would do. You know, it's helpful when you're creating a new project, but when you're creating a new project, you're in the mindset of: I'm going to call a function — oh, it's async. And you're actively working on that code. Unless you're vibe coding; then all bets are off. But if you're legitimately working on it,

then you're in the flow, and your editor ideally gives you a little warning about those kinds of things. However, where this really helps is if you're converting from a synchronous situation to an async one. For example, when I converted the TalkPython code from synchronous Pyramid to async Quart — which is basically async Flask — there were a few places I messed up, because there's so much code.

And there are all these functions being called, and you look at the code and it looks fine. But if you convert a function from sync to async but forget to find every place you're using it and add in an await,

then that's when this happens. So what happens — does it work anyway and it's just slower, or what? So for the thread one, it may work anyway. But the async one — the async task one — maybe not, because if you don't await it, it doesn't actually execute.
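That forgotten-await failure mode is easy to reproduce with just the standard library: calling an async function without `await` builds a coroutine object that never runs, and CPython emits a "coroutine ... was never awaited" RuntimeWarning when it's garbage collected. The logging function here is a toy stand-in.

```python
import asyncio
import warnings

messages = []

async def log_message(msg):
    messages.append(msg)

async def main():
    log_message("first")          # BUG: coroutine created, never awaited -> never runs
    await log_message("second")   # awaited -> actually executes
    await asyncio.sleep(0)

with warnings.catch_warnings():
    # Silence the "coroutine 'log_message' was never awaited" warning
    # so the demo output stays clean.
    warnings.simplefilter("ignore", RuntimeWarning)
    asyncio.run(main())

print(messages)  # ['second'] — the un-awaited call silently did nothing
```

That's exactly the kind of silent no-op a leak detector catches for you during a sync-to-async migration.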

Yeah. Right. You create the coroutine, but it's the awaiting of it that actually makes it go. And so if you said "log this message asynchronously" but you don't await it, it's never going to log it. Right. Maybe if you call create_task, it might start — I don't know; there are some different ways it could be done. But this is great. I feel like this is like scaffolding, or training wheels, to put on stuff just to make sure things are running right.

This is good. Yeah, it's definitely good. Cool. What else is good? Authentication that you don't have to deal with is good. Oh, yes, it is. So,

On that note, we want to thank PropelAuth. So this episode is sponsored by PropelAuth. PropelAuth is the easiest way to turn authentication into your advantage. For B2B SaaS companies, great authentication is non-negotiable, but it can often be a hassle. With PropelAuth, it's more than just functional, it's powerful. PropelAuth comes with tools like managed UIs, enterprise SSO, robust user management features, and actionable insights. As your product grows,

PropelAuth adapts with it, supporting more advanced authentication features. And the best part? PropelAuth has native support for major Python libraries like FastAPI, Flask, and Django, so you can easily integrate it into your product.

Auth is effortless. Your team can focus on scaling, not troubleshooting. That means more releases, happier customers, and more growth for your business. Check them out to get started today. The link is in your podcast player's show notes. It's a clickable chapter URL as you're hearing this segment, and it's at the top of the episode page at pythonbytes.fm. Thank you to PropelAuth for supporting Python Bytes.

Yes, indeed. Thanks, PropelAuth. And I would like to point out, Brian, that all of the chapters are clickable. So I don't know if everyone even knows that most of our episodes have chapters. If they don't, that's usually because I forgot somehow, which is not the case. But every item on there...

for the chapters is also a link to the main resource of whatever it is. For example, the PyLeak one will link to the PyLeak GitHub repo if you click it. And I don't like people to skip on us, but, I mean, I understand — if we're talking about a topic you really don't care about, that's one of the cool things about the chapter markers: you can just skip to the next topic or something. Yeah, or if you've heard it four times — you're like, yeah, this is the seventh time I've heard about this library — you can skip it if you want. I mean, the show is pretty short, but still.

Nonetheless, nonetheless. All right. Let's talk about the thing I was going to talk about first but kicked down the line, which is typed-ffmpeg. Okay. So I don't know if folks know, but FFmpeg is a command-line video processing masterpiece. It is a beast of a tool. And it's actually what I use for the TalkPython training courses to generate like all the

different versions and resolutions and streaming styles and stuff. And you know, if we had, let's say, a five-hour course, I'd probably turn FFmpeg loose on the videos for, I don't know, 15, 20 hours, something like that, and just let it grind on my Apple Silicon. And I've got a whole bunch of automation to make that happen, which is cool. It would probably be easier if this had existed.

So typed-ffmpeg is a Python wrapper that supports working with filters and typing for FFmpeg. Pretty neat. And it's kind of like PyLeak in that it's more comprehensive than you would imagine. This one offers a modern, Pythonic interface to FFmpeg, providing extensive support for complex filters. It's inspired by ffmpeg-python, but enhances that functionality

with autocomplete, comprehensive type information, JSON serialization, and so on. If you look at the repo, they show you: if you type ffmpeg.input. then down comes a huge list of things with stunning documentation and type information. I mean, look at that, Brian. That's pretty nice, right? Yeah, it really is. Yeah, I was really surprised. So it comes with zero dependencies,

comprehensive filter support, robust typing — you know, that's the point of it, basically — graph visualization, which I was hinting at, partial evaluation, media file analysis, and a bunch of other things. So, easy to use: it shows you how, if you wanted to flip a video horizontally and then output it, you can see a little graph of the input, then it applies all the operations — there's an hflip operation in the graph — and then there's an output to the file. And there's more: you even get an interactive playground where you can drag and drop the filter bits together.

What? I know. I'm telling you, it's way more than you would expect. So yeah, really neat to visualize what's happening. And yeah, I don't do this kind of stuff where I'm creating really complex graphs — it's more format conversion, resolution conversion stuff that I use it for. But if you do a lot with FFmpeg and you do stuff with video, check this out. If you pay hundreds or thousands of dollars to cloud providers to re-encode video for you, you definitely want to check this out.
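For reference, the horizontal-flip example boils down to a plain FFmpeg command line using the real `-vf hflip` filter flag. Here's a hypothetical helper that builds that command as a list (the command typed-ffmpeg actually generates may differ in detail):

```python
def hflip_command(src: str, dst: str) -> list[str]:
    # FFmpeg's -vf flag applies a simple filter chain;
    # the "hflip" filter mirrors the video horizontally.
    return ["ffmpeg", "-i", src, "-vf", "hflip", dst]

print(" ".join(hflip_command("in.mp4", "out.mp4")))
# ffmpeg -i in.mp4 -vf hflip out.mp4
```

Wrappers like typed-ffmpeg earn their keep once the filter graph has multiple inputs and chained filters, where hand-assembling these argument lists gets error-prone.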

Oh, okay. It might save a lot of money. I used to use AWS — they've got some video processing API sort of thing. And eventually it was Cecil Phillip, actually, who convinced me I should just do FFmpeg. Yeah, AWS is probably just calling FFmpeg in the background. I'm sure they are, which is crazy, right? Yeah. I mean, it's a little bit faster if they do it, but you know what?

I only do it once per course — it's not very often. Well, I guess if all of your courses were like 15 hours, I'd get why you'd have to pay people to do that, but, you know, whatever. Yeah. Cool. Yep. Yeah, if you use good caching, then life is much easier: you can just rerun it if you add a new video, without re-encoding the whole thing. I love cache. I don't know, I do too. Over to you. What am I going to talk about? I'm going to talk about pytest. Okay.

I kind of like pytest, actually. You don't say. It's a fun article by Tim Kaminen. And it's short, so I almost put it as an extra, but it's just really cool. So his article is Optimizing Test Execution: Running Live Server Tests Last with pytest. Okay, so this is about testing websites, using the live_server fixture. And

so if you're using that — or you're using Playwright or Selenium — this is definitely for you. But also, really —

the techniques in this are really cool even if you just have some other test suite with some slow parts, and it's slow because of some fixture you're using — some external resource or whatever — so any test that uses that fixture is a little slower. You can use the same technique. So I just want to preface it with that.

So why run slow tests last? Why does he want to run them last? Well, for efficiency: you get faster feedback for the fast tests. He says unit tests — I don't know why; it could be any fast test. It allows you to catch and fix issues faster; you're not waiting for them. Also, resource management: resources consumed by slow tests — like database connections, external services, and stuff — are

not tied up through the whole test run. So keeping those isolated to the end totally makes sense.

So how do we do this? Well, he talks about installing pytest-playwright — which is a great plugin to drive web tests using pytest — and pytest-django, since this application is a Django app. And his tests are using live_server. So what does he do? He adds a new marker, an e2e marker, for end-to-end. But he's not actually marking anything with that manually.

He comes in and uses one of pytest's lovely hook functions — this one is pytest_collection_modifyitems. It's sort of an oddball, so it's good to have some easy examples like this. What it does is go through all your tests and look for all of them that use the live_server fixture. And then it does a couple of things. He's adding the marker e2e — adding the end-to-end marker to all of the tests that use live_server.

Really, you could use a live_server marker — you could use any marker name you want. But why do that? I'll get to that later. So he's adding the marker to the slower tests, and then

he's splitting them up and running all the other tests first and the live_server tests second. And that's really the trick with pytest_collection_modifyitems: you can either bail on some tests or you can reorder them, and he's using the reorder. But since we're looping through all of them anyway, he uses that loop to add the marker too. So why do that? Well, again,

he's got a little example with a slow one and a fast one. But you can then use that marker and say: you know what, I'm debugging a unit test; I don't want the live server ones to run. So you can say pytest -m "not e2e", and that will run all of your tests that are not using a live server. And that's a cool use of markers — automatically applying markers. It's a cool thing. And then also,

if you just want to run the live server ones, you can say pytest -m e2e as well. So, a really fun little example of how to automate pushing your slow tests to the end and being able to select them or deselect them. It's cool. I love this idea. I think I might adopt it.
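The hook Brian describes is essentially a stable partition over collected tests by fixture usage. Here's a runnable sketch — the `Item` class is a hypothetical stand-in for pytest's collected items so the logic can run standalone, and the docstring notes where it would live in a real conftest.py:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """Minimal stand-in for a pytest item (just enough for this sketch)."""
    name: str
    fixturenames: list = field(default_factory=list)
    markers: list = field(default_factory=list)

def mark_and_reorder(items, slow_fixture="live_server", marker="e2e"):
    """Tag every test that uses the slow fixture, then move those tests last.

    In a real conftest.py this body would sit inside
    `def pytest_collection_modifyitems(config, items):` and use
    `item.add_marker(pytest.mark.e2e)` instead of appending to a list."""
    fast, slow = [], []
    for item in items:
        if slow_fixture in item.fixturenames:
            item.markers.append(marker)
            slow.append(item)
        else:
            fast.append(item)
    items[:] = fast + slow   # modify in place, as the pytest hook requires

tests = [
    Item("test_api", ["live_server"]),
    Item("test_unit"),
    Item("test_page", ["live_server"]),
]
mark_and_reorder(tests)
print([t.name for t in tests])  # ['test_unit', 'test_api', 'test_page']
```

The in-place `items[:] = ...` assignment matters: pytest hands the hook its own list, so reordering has to mutate that list rather than rebind a new one.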

That's nice. It's also a gentle introduction to hook functions, because hook functions can be a little scary, and something simple like reordering your tests — it doesn't seem like it'd be simple, but it's only, what, 13 lines of code, including a comment and some blank lines. It's not bad. Yeah, that's not too bad at all. Okay. Yeah, I'm definitely going to look at this, because I've got some tests that are blazing fast and some that are pretty slow for the various web apps I've got. So yeah, I'll check it out. I don't use live_server or any of those things, but, you know, it's like:

I want to get the site map and then call a representative subset of things within there, just to make sure it's all hanging together. That's definitely an E2E test. Well, and also, I see a lot of cases where somebody is using a database connection —

even if it's just a mock database or a small one — and they've got a whole bunch of test data they've filled in. Maybe it's not really slow, but it's slower than their other stuff. It's often accessed via a fixture, and you can easily select the tests that use that fixture. It's pretty cool. The other reason I brought this up:

I mean, yes, I write about pytest a lot, but I like other people to write about it too. So please, if you've written some cool pytest documentation, send it to me.

Indeed. Looks good. All right, let's jump over to some extras. All right. We have a new course at Talk Python Training, and this is the very first announcement of it — I haven't even had a chance to send out an email about this. Vincent Warmerdam, who's been on Python Bytes before, created a short

LLM Building Blocks for Python course. And this isn't prompt engineering or anything like that — it's: what are some good libraries you can use to build code that uses LLMs for various things? How do you get structured output? For example, how can you use Pydantic to communicate to the LLM how it should speak to you, in a JSON response instead of a text response? So,

stuff like that. Yeah, super neat. So check the course out — it's just 19 bucks over at Talk Python Training. Just go to talkpython.fm, click on Courses, and it'll be right there at the top of the new courses list. So check that out; that's super exciting. Also over at TalkPython, I've done this thing called deep dives, where it goes into a particular episode and you can look at it — it'll tell you background on the guests,

background on important concepts you might want to learn to get a better understanding of what's going on, diving into extra details on each of the things we've spoken about, and so on. So the news is: I have finished a long journey of getting one of those deep dive analyses for every Talk Python episode for the last 10 years. And the result is 600,000 words of analysis — if you were to go through and read them all, it's 4.5 million characters.

That's a lot of content. But it makes the search over there better, because the search engine now considers the deep dive part of the episode and looks for

content within there, not just within what I put in the show notes and so on. So, really cool. Super proud of that — that was a lot of work, but it is now done. So I wrote a little article about it, and I'll link to it if you're more interested than what I just said. Nice. Also, remember I had a rant — I even named last week's episode after it: stop putting your dot folders in my tilde slash, or whatever.

Well, Eric Mesa said: hey, the place to store dot files is defined by the XDG standard on Linux. Because, remember, I was whinging about apps doing this to my macOS setup. And Windows is even worse, because the dot files and folders are not even hidden, right? But what about Linux? Well, this XDG standard speaks to that. And so I even put together a little cheat sheet on it.

So, where do the config files go? Well, they go in home — you know, tilde, whatever your $HOME is, right? Basically ~/.config. So maybe ~/.config/my-app with some settings or a config file.

There's a cache folder — you put that into ~/.cache. There are still a few dot folders in your home directory, but not one for every single application you happen to have run, or that something has run for you. So this is kind of cool, and people can check it out. There are a lot of details I've put together here, and even a way to use this XDG library, which is in here somewhere, in Python.
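A small sketch of resolving those paths from Python, following the XDG Base Directory spec's defaults (`$XDG_CONFIG_HOME` falling back to `~/.config`, `$XDG_CACHE_HOME` to `~/.cache`); the function names here are my own, not from any particular library:

```python
import os
from pathlib import Path

def xdg_config_home() -> Path:
    # Per the XDG Base Directory spec: $XDG_CONFIG_HOME, default ~/.config
    return Path(os.environ.get("XDG_CONFIG_HOME") or Path.home() / ".config")

def xdg_cache_home() -> Path:
    # $XDG_CACHE_HOME, default ~/.cache
    return Path(os.environ.get("XDG_CACHE_HOME") or Path.home() / ".cache")

def app_config_dir(app: str) -> Path:
    """Where an app named `app` should keep its config per the spec."""
    return xdg_config_home() / app

print(app_config_dir("my-app"))
```

The spec also defines `$XDG_DATA_HOME` (default `~/.local/share`) and `$XDG_STATE_HOME` (default `~/.local/state`), which follow the same env-var-with-fallback pattern.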

So — or actually, it's just a function you can use. But pretty cool. That's pretty cool. Any idea what XDG stands for? Zero. Yeah, I have zero idea. Okay, that's fine; we'll look it up for next time. I did look it up as part of putting that little cheat sheet together, but that was last week and I forgot. Yeah, that's me. Okay, is that your extras? No, I've got a couple more. I'll go quick. Okay.

Every time I think — you know, are you a fan of Scarface? You watched that when you were younger? I'm like one of the only people who never has. Exactly.

That's The Godfather, actually — but it is Al Pacino, so, same actor; that's why I thought of it. "Every time I think I'm out, they pull me right back in." Right. Well, that's me and the Pro version of OpenAI. I thought: okay, I'm just going to go back to being a regular, normal user of this thing. And then, no, they go and release o3 Pro. So I'm like, ah, I'm going to have to pay the ridiculous money to try that out again, because it's really worth it. Although — I'll take one for the team here — I will say o1 Pro was incredible.

Incredible. And it's starting to be phased out — I don't know how much longer it'll last. o3 Pro does not seem nearly as good to me. Not even close. I don't know why; o3 is pretty good. So maybe I'm considering going back to being a regular user again. But every time I'm out, Brian, they pull me right back in.

Another one — this is dynamic; when I wrote it down it was 17, right now it's 20 — but Python Bytes is the 20th most popular tech news podcast in the world, according to Goodpods. Okay. According to Goodpods, which is decent. And the number one developer news show, period. How about that? Specifically developer — not shows that also cover just tech or AI or whatever. Okay.

That's pretty cool. So thanks, Goodpods, for pointing that out. I used to use Chartable, but then Spotify bought them and shut them down. Thanks, Spotify. I think it was Spotify; I'm pretty sure they were definitely bought and shut down. Okay — anyone out there listening: do not take action on this next item until you hear the second, follow-up item in this extra, because it's important.

On June 3rd, Python 3.13.4 was released. Yay, right? This is cool — it covered some CVEs that had to be fixed, so we quickly got fixes for things like tarball security issues, which could be really bad if you process tarballs from external input. So you might think: I want to go install that. No, no, no. Just a few days later — hey, hey —

3.13.5 is out, because they had to quickly release it to fix an issue that was in 3.13.4. Okay. So make sure you get 3.13.5 if you have either 3.13.3 or 3.13.4, because 4 doesn't actually... I don't know what the actual issue was, but this one is like, oh my gosh, we've got to fix it again. That's it. Those are my extras. All right. I only have a couple, but let's pop over. So...
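As a quick aside, checking whether you're on one of the patch levels mentioned here is a one-liner. This is just a hedged sketch; the helper name is mine, not from the episode:

```python
import sys

def needs_upgrade(version_info=sys.version_info):
    """True for the 3.13 patch levels discussed here (3.13.3 and 3.13.4)."""
    return tuple(version_info[:3]) in {(3, 13, 3), (3, 13, 4)}

if needs_upgrade():
    print("Consider upgrading to Python 3.13.5 or later.")
```

Drop that near the top of a script or CI step if you want a loud reminder.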

Along the free-threading topic, this was mentioned by, what, John Hagan? He sent this in. And this is from a python.org discussion thread.

There's a discussion thread called "Is free threading our only option?" This is from Eric Snow, whose opinion I at least want to listen to. So there's an interesting discussion about whether or not free threading is the only way to go. And he does mention he's not recommending we not support free threading, but there are other things to think about.

So I'm just going to drop this link. It's kind of a long discussion, but it's an interesting read. Yeah, it's also noteworthy that Eric Snow did a ton of the work on sub-interpreters. Yeah, and that's part of it, talking around sub-interpreters. And one of the interesting comments here, one that popped out to me, is from Antoine Pitrou,

who is the maintainer of PyArrow. He says, just as a data point, our library supports free-threaded Python, but I've not even looked at sub-interpreters. And I kind of... I know that it's going to be complicated, or at least it might be complicated, to think about free threading, but thinking about sub-interpreters blows my brain up.

So I'm not thinking about them at all. What if you could have each thread have its own sub-interpreter? How about that? Or multiple sub-interpreters. I don't know. Or each sub-interpreter has its own multiple threads. Sure. Yes. Hence the brain blowing up. Yeah. Anyway.

And another free-threading topic. This is from Hugo van Kemenade: free-threaded Python in GitHub Actions. He actually released this back in March, but

this is really about how to make sure. So we're encouraging people now, at the very least with 3.14, to test free threading for their project. So if you are the maintainer of a third-party package, basically, if I can get your stuff via pip

and it's a library that I can use in other code, please test it for free threading, and then tell people whether or not you're supporting free threading. And this discussion is how to do that within GitHub Actions. So it's a really great write-up on how, and it's basically just adding a "t" to the Python version. It's not bad. So this isn't a lot of extra work.
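For anyone who wants to follow along, here's a minimal sketch of the kind of test matrix being described. The workflow details are illustrative, not copied from the post; the "t" suffix (e.g. "3.13t") is what selects the free-threaded interpreter in actions/setup-python:

```yaml
# Sketch: run the test suite on both regular and free-threaded builds.
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.13", "3.13t", "3.14", "3.14t"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          allow-prereleases: true
      - run: |
          python -m pip install -e . pytest
          python -m pytest
```

That's the "just add a t" part: duplicate the versions you already test, with the suffix.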

Indeed. Not too much. All right. Well, you ready for a joke? Yes. Let's close it out with one. So naming things is hard, right? There's the famous joke, which will introduce this joke: there are two hard things in computer science, naming things, cache invalidation, and off-by-one errors. Right? Yeah. So this one comes from Programming Humor, and it's a bit of a meme. It has two things.

Two senior devs just, like, fighting, you know, literally wrestling, fighting away. So here's a code review meeting. "The variable name should be number_to_be_updated," says one of the senior devs while she's throwing down the other senior dev. "The variable name should be updated_number." Meanwhile, the junior dev is sitting there eating popcorn, watching it go down, while working on a new file with variable names such as aa1 and xyz.

Yeah. And I'm over here... Do you guys have naming... do you have naming debates? Sorry, go ahead. You're over there... No, we use linters to do the argument for us. But I'm looking at this going, it's camel case, it needs to be snake case. What's up with this? It's got to be a JavaScript or a C# argument. Yeah.

And I'm one of the worst. I've got to get in there and take them both down. I'm one of the worst whenever I see style guides getting written, and I always cringe when there's a new style guide on the team. But I always make sure that it

at least adds to, and doesn't detract from, actual common practice in the rest of the industry. And the other thing is, for the short variable names, you have to allow things like x, y, z, and i and j for loop variables and stuff. Although I do agree that using both i and j is evil, because in some fonts you can't really tell much of a difference between the two. So, yeah.

Yeah. Yeah, but like "for n in...". Yeah. That's steeped in historical math style from outside of programming, right? Yeah. And I've had people... x and y for algebra, absolutely. And then I've had people gripe about using i as a variable for a loop. And I'm like, that's just so common. It's like "for i in this", especially if it's not a nested loop. Why not? Yeah.

Well, have you done any C++? Come on. That's like one of the first things you do. For int i...

Yeah, exactly. Equals zero, i less than n, i plus plus. You index into the array, because that's how it goes. Yeah. Anyway, well. So you don't invite me to your code review meetings, because I'll be the grump in the background. Well, sometimes you do. Maybe you should. I know you don't like what they wrote, but they have a point. Let the i be. Let it be. Yeah.

All right. Well, thank you. Yeah. Thank you as always. And thanks everyone. Bye y'all. Bye.