Let's get started with pytest-cov, the number one most downloaded pytest plugin. Welcome to Test & Code. This episode is brought to you by Hello, pytest!, the new fastest way to learn pytest, and by the Python Test community. Find out more at courses.pythontest.com. pytest-cov is a pytest plugin that helps produce coverage reports using Coverage.py. So let's take a pause and quickly talk about what Coverage.py is.
Coverage.py is what you get when you say pip install coverage. It is a tool for measuring code coverage of Python programs. It monitors your program, noting which parts of the code have been executed, then analyzes the source to identify code that could have been executed but was not. So why do you need coverage? If I want to make sure my test code exercises all of my source code, I can run coverage while running the tests to see if I missed something.
Also, coverage reports a total coverage percent. But more importantly, it reports per file, and it can show you which lines of code are missed, by line number.
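For reference, a minimal session looks something like this, assuming your tests run with pytest:

    pip install coverage       # install Coverage.py
    coverage run -m pytest     # run the test suite under coverage
    coverage report -m         # terminal report; -m lists the missed line numbers per file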
And it can do branch coverage, so you can make sure that each branch decision possibility is hit. The configuration options are amazing also. So even if you're using a vendored-in framework or library, which you don't care about coverage for, you can exclude those parts and just test your code. There are also a lot of reporting options.
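As a quick sketch, branch measurement and exclusions can both be done from the command line; the vendor/ path here is just a made-up example:

    coverage run --branch -m pytest     # also track which branch directions were taken
    coverage report --omit="vendor/*"   # leave vendored code out of the report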
For CI and quick local checks, I usually run coverage with a text-based report. And even within that, if I'm going to set it up with tox, I often run coverage just on the latest version of Python only, not on all of the versions. Of course, if my source code does some branches based on Python version, then I'll have to run it on multiple versions. And there are ways to combine coverage reports if you do have Python-version switches in your code.
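Combining can look something like this, as a sketch; normally tox would be the thing switching interpreters between the runs:

    coverage run -p -m pytest   # -p (parallel mode) writes a separate .coverage.* data file per run
    coverage run -p -m pytest   # repeat under each Python version you care about
    coverage combine            # merge the data files into one
    coverage report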
If the report shows more than a few lines uncovered, then I generally don't try to figure it out from the command line report, the terminal report. I lean on the HTML report. You can turn on the HTML report by just running coverage html. This works even if the initial run was done with the pytest-cov plugin. The HTML report is so much easier to look at.
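That step is just:

    coverage html               # writes the report into htmlcov/ by default
    # then open htmlcov/index.html in a browser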
And you can see what's missing, and what branches are missing, et cetera, really easily with the HTML report. Do you need 100% coverage? This always comes up when people talk about coverage. For me, yes, I want 100% coverage reports, with an asterisk. What's the asterisk about? It's because I'm going to write high-level tests to do system tests through the API if I can. Then I look at coverage reports.
And then I decide: does the code that's not covered need to be tested? Can it be removed? Should I test that code at a high level, or put subsystem or module or functional-level tests in place? The point is, I need to think about it.
And if the code doesn't need to be covered, then, if it's not obvious why not, I document that. And I'll put something like pragma: no cover inline, or list the files that don't need to be covered in the configuration.
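For example, the inline marker sits right on the code you've decided not to cover, and whole files can be left out in the coverage config; the names here are made up:

    def debug_dump():  # pragma: no cover
        print("only used when poking around by hand")

    # and in .coveragerc (or the [tool.coverage.run] table in pyproject.toml):
    # [run]
    # omit = src/_vendored/*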
The point is that once my decision is made, I encode it, so that future coverage reports just show 100%, or whatever. And I can believe that. I don't have to think, yeah, 95% is fine because there's some stuff that I know isn't tested, but that's okay. Don't do that. It's just confusing to keep straight. So just encode those decisions so that 100% means something:
100% of the code that I want to be covered is covered. Should you run test coverage over your test code also, not just your source code? Yes. Actually, I bring this up on purpose, because pytest is often used to test non-Python things. You can use pytest to test anything that you can access with Python, which is just about anything. In my day job, I use it to test RF communication test systems through their external API.
And a lot of people use pytest to drive Playwright or Selenium and other frameworks for web app and web API testing, even if the thing that they're testing isn't Python. And even in those cases, coverage is helpful to run over your test code. Why would you want to run it just on your test code? Because test code is code, and it's weird.
Why is it weird? Because test code consists of a lot of test functions and test classes, and we don't really call these things ourselves; pytest does. So sometimes we do a copy-paste-modify of a test function to make a new test function, but then we forget to change the name. So one of those test functions is just not going to run. It's going to be ignored,
and we're not going to see it unless we run coverage. Or perhaps we put some logic in some test code, and some of those paths are getting hit by the test suite, but not all of them. We don't want that. If you have switches and logic in your test code, I assume you want all of those paths to be covered. So these things happen. And since running coverage on your test code is such an easy fix, why not just do it?
So how do you run it? To run coverage over your test code directly, you can run it with coverage run --source= and then wherever your code is, like src,tests or source,tests, and it'll grab your source code and your test code and run coverage on that. And then -m pytest. So altogether: coverage run --source=src,tests -m pytest, and then whatever pytest flags you've got.
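Spelled out, assuming directories named src and tests:

    coverage run --source=src,tests -m pytest   # measure both the package and the test code, then hand off to pytest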
This is slightly different behavior than just running pytest, because it's kind of like running python -m pytest, and that puts the current directory in the search path. If putting the current directory in your search path is not a problem for you, or you just don't know what I'm talking about, then don't worry about it. But there's no report yet. It just runs it.
And now you have to run coverage report for a terminal report, or coverage html for the HTML report. Or, like me, you start with the terminal report and do the HTML report if necessary. It's not too hard, but it is two steps. And that's partly why I like the plugin pytest-cov. Yep, we're getting back to the plugin we wanted to talk about in the first place. pytest-cov is great.
For one, instead of running coverage run -m pytest and then coverage report, you can just run pytest and pass it whatever you want to cover, like --cov=src and --cov=tests.
And it's really not that much less typing, but it feels like less typing, I think, to me at least. And I can put those command line flags in the configuration file if I want. And actually, I'm probably using tox, so I'll probably have all of that in the tox file anyway.
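So the one-step version looks something like this, again assuming src/ and tests/; the addopts line is one way to bake the flags into pytest's config:

    pytest --cov=src --cov=tests

    # or in pytest.ini, under [pytest]:
    # addopts = --cov=src --cov=tests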
So why do I care about running the plugin and not just running coverage itself? I don't know, it just seems easier. And it's not just for convenience, though. The pytest-cov plugin also brings a lot of other stuff. A few of the other things it does: it deals with subprocess support, so you can fork stuff in a subprocess and that gets covered also, with no extra work from you.
Same goes with xdist. pytest-xdist is another plugin that allows you to run tests in parallel. And if you do that with plain coverage, you have to combine the output yourself. But pytest-cov just does that automatically. pytest-cov combines the reports correctly, so you just get the final report and it's all correct if you're using xdist.
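So a parallel run with coverage stays a one-liner, something like:

    pytest -n auto --cov=src --cov=tests   # xdist fans tests out to workers; pytest-cov merges their coverage data for you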
Another reason why people love this plugin. Also, pytest-cov doesn't add the current dir to the search path. So if you care about that, use the plugin. And there are a couple more reasons why I love it. You can set
a --cov-fail-under flag, which says, if you set it to 100, or whatever you want, but let's say I set it to 100, that means that any test run that falls under 100% coverage will fail the suite. None of the tests will fail, but the suite will fail because it didn't hit 100%. That's super awesome. And you can set that to 95 or whatever you want.
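For example:

    pytest --cov=src --cov=tests --cov-fail-under=100   # exits non-zero if total coverage drops below 100%, even if every test passes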
So far, these might seem like minor improvements, but they're not. All of these extra things build up, and they save people time and headache, and that's a decent enough reason why pytest-cov is the number one downloaded plugin. But then there's another thing: contexts.
Coverage.py recently-ish, I don't know, within the last year or two, added context support. What does that mean? It means that you can figure out, for each line of code that's covered, which test it came from. So you can run pytest with just some of your tests,
maybe just one test or one class, and then see all the stuff it hits and make sure that it hits the source code that you think it's going to cover. Or you can run your whole suite, and for each line of code, you can see how many tests are hitting it. It's kind of fun. And when you're debugging what's going wrong, it can be really valuable. It's a little fiddly to get working, though, but not with pytest-cov.
With this plugin, it's super easy to get this set up. And when you set it up, what you get is an HTML report with a little column on the right for each line of code. You go over to the right, and there's a little drop-down where you can see all the tests that hit that line of code. It's super cool. I don't use it all the time, but when I need it, I really need it.
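The setup is roughly this, as a sketch; the pytest-cov docs have the full walkthrough:

    pytest --cov=src --cov-context=test   # record which test was running for each covered line
    coverage html --show-contexts         # the HTML report then shows the per-line list of tests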
Another reason why pytest-cov is fantastic. So definitely check it out. There's also a tutorial on how to get the context thing set up. It's not that difficult, a short tutorial, and it's in the pytest-cov documentation. So I'll put a link to that in the show notes. So who do we thank for all of this? Coverage.py is maintained by Ned Batchelder. So thank you, Ned. Awesome. He's been supporting it for a long time.
pytest-cov is maintained by Ionel Cristian Mărieș. Now, I probably got that last name wrong, but I really want to thank Ionel, it's I-O-N-E-L, for putting a pronunciation hint on his about page somewhere. I think it's on his blog. Anyway, cool. pytest-cov and Coverage.py, love both of them. Also, pytest-cov is maintained by other people, not just Ionel. Lots of people have added to it over the years. It's part of the pytest-dev group. Anyway, links are in the show notes. Check it out. Thanks for listening.
Thank you for listening, and thank you to everyone who has supported the show through purchases of the courses, both Hello, pytest!, the new fastest way to learn pytest, and The Complete pytest Course, if you'd like to really become an expert at pytest. Both are available at courses.pythontest.com, and there you can also join the Python test community. That's all for now. Now go out and test something.