
pytest-cov : The pytest plugin for measuring coverage

2025/1/23
logo of podcast Test & Code

Test & Code

Topics
Speaker: I mainly cover using the pytest-cov plugin together with Coverage.py, and how to improve the test coverage of Python code. Coverage.py is a tool for measuring code coverage: it monitors program execution, identifies code that was not executed, and generates several kinds of reports, including text and HTML reports. The pytest-cov plugin streamlines the Coverage.py workflow and adds many extra features, such as subprocess support, xdist support, and context support. In practice, I usually start with Coverage.py's text report for CI and quick local checks; if there are more than a few uncovered lines, I switch to the HTML report for a more detailed look. For code that doesn't need to be covered, I add a `pragma no cover` annotation in the code or list those files in the configuration. I aim for 100% code coverage, but I write high-level system tests first, then decide case by case whether uncovered code needs tests, and document or exclude the code that doesn't. The advantage of the pytest-cov plugin is that it simplifies the coverage workflow and avoids manually running the `coverage run` and `coverage report` commands. It also supports subprocesses and the xdist plugin, and automatically merges reports from parallel test runs. In addition, it lets you set the `--cov-fail-under` option to specify a minimum coverage threshold; if coverage falls below it, the test suite fails. Coverage.py's context support can show which tests cover each line of code; pytest-cov simplifies configuring this feature and can show, in the HTML report, the tests corresponding to each line, which is very useful when debugging. In short, combining pytest-cov and Coverage.py can effectively improve the test coverage of Python code and simplify the testing workflow.


Chapters
This chapter introduces Coverage.py, a tool for measuring code coverage in Python. It explains why measuring coverage matters for both source and test code, highlights the benefit of identifying missed lines and branches, and covers the configuration options available for customizing coverage reports. Different reporting options, such as text-based and HTML reports, are discussed.
  • Coverage.py measures code coverage of Python programs.
  • It identifies executed and unexecuted code parts.
  • Provides per-file and branch coverage reports.
  • Offers various reporting options (text, HTML) and configuration for exclusions.

Transcript


Let's get started with pytest-cov, the number one most downloaded pytest plugin. Welcome to Test & Code. This episode is brought to you by Hello, pytest, the new fastest way to learn pytest, and by the Python Test community. Find out more at courses.pythontest.com. pytest-cov is a pytest plugin that helps produce coverage reports using coverage, or Coverage.py. So let's take a pause and quickly talk about what Coverage.py is.

Coverage.py is what you get when you say pip install coverage. It is a tool for measuring code coverage of Python programs. It monitors your program, noting which parts of the code have been executed, then analyzes the source to identify code that could have been executed but was not. So why do you need coverage? If I want to make sure my test code exercises all of my source code, I can run coverage while running the tests to see if I missed something.
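The basic workflow is roughly this (a sketch; my_script.py is a placeholder name):

```shell
pip install coverage

# Run any Python program under coverage's monitoring...
coverage run my_script.py

# ...then report which lines ran and which were missed
coverage report
```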

Also, coverage reports a total coverage percent. But more importantly, it reports per file. And that can show you which lines of code are missed by line number.
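The per-file text report looks something like this (illustrative file names and numbers; the -m flag adds the Missing column):

```text
$ coverage report -m
Name               Stmts   Miss  Cover   Missing
------------------------------------------------
src/__init__.py        2      0   100%
src/parser.py         57      4    93%   18, 42-44
------------------------------------------------
TOTAL                 59      4    93%
```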

And it can do branch coverage. So you can make sure that each branch decision possibility is hit. The configuration options are amazing also. So even if you're using a vendored-in framework or library, which you don't care about coverage for, you can exclude those parts and just measure your own code. There are also a lot of reporting options.
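As a sketch, branch coverage and exclusions can go in a coverage config file; the paths here are placeholders:

```ini
# .coveragerc
[run]
branch = True
omit =
    */vendored/*
    */generated/*
```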

For CI and quick local checks, I usually run the coverage with a text-based report. And even within that, like if I'm going to set it up with tox, I often run coverage just on the latest version of Python only, not on all of the versions. Of course, if my source code does some branching based on Python version, then I'll have to run it on multiple versions. And there are ways to combine coverage reports if you do have Python-version switches in your code.
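One common way to merge data from separate runs, assuming each run writes its own data file (for example with parallel = True in the coverage config):

```shell
coverage combine   # merges the .coverage.* data files into one
coverage report    # reports on the combined data
```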

If the report shows more than a few lines uncovered, then I generally don't try to figure it out from the command line report or the terminal report. I lean on the HTML report. You can turn on the HTML report by just running coverage html. This works even if the initial report was generated with the pytest-cov plugin. The HTML report is so much easier to look at.
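Roughly:

```shell
coverage html              # writes the report to htmlcov/ by default
open htmlcov/index.html    # macOS; on Linux use xdg-open, or just open it in a browser
```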

And you can see what's missing and which branches are missing, etc., really easily with the HTML report. Do you need 100% coverage? This always comes up when people talk about coverage. For me, yes, I want 100% coverage, with an asterisk. What's the asterisk about? It's because I'm going to write high-level tests first, doing system tests through the API if I can. Then I look at coverage reports.

And then I decide: does the code that's not covered need to be tested? Can it be removed? Should I test that code at a high level, or put subsystem, module, or function-level tests in place? The point is, I need to think about it.

And if the code doesn't need to be covered, then, if it's not obvious why not, I document that. And I'll put something like `# pragma: no cover` inline, or list the files that don't need to be covered in the configuration.
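For example, something like this (a sketch; the function is hypothetical):

```python
def debug_dump(state):  # pragma: no cover
    # Debug-only helper, deliberately excluded from coverage reporting.
    print(state)
```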

The point is that once my decision is made, I encode that so that future coverage reports just show 100% or whatever. And I can believe that. I don't have to think, yeah, 95% is fine because there's some stuff that I know isn't tested, but that's okay. Don't do that. It's just confusing to keep straight. So just encode those decisions so that 100% means exactly one thing.

100% of the code that I want to be covered is covered. Should you run test coverage over your test code also, not just your source code? Yes. Actually, I even brought this up on purpose, because pytest is often used to test non-Python things. You can use pytest to test anything that you can access with Python, which is just about anything. In my day job, I use it to test RF communication test systems through their external API.

And a lot of people use pytest to drive Playwright or Selenium and other frameworks for web app and web API testing, even if the thing that they're testing isn't Python. And even in those cases, coverage is helpful to run over your test code. Why would you want to run it just on your test code? Because test code is code, and it's weird.

Why is it weird? Because test code consists of a lot of test functions and test classes, and we don't really call these things ourselves; pytest does. So sometimes we do a copy-paste-modify of a test function to make a new test function, but then we forget to change the name. So one of those test functions is just not going to run. It's going to be ignored.

And we're not going to see it unless we run coverage. Or perhaps we put some logic in some test code, and some of those paths in the logic are getting hit by the test suite, but not all of them. We don't want that. If you have switches and logic in your test code, presumably you want all of those paths to be covered. So these things happen. And since running coverage on your test code is such an easy fix, why not just do it?
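Here's the kind of thing coverage on test code catches (a sketch with hypothetical names; parse is inlined here as a stand-in for code under test):

```python
def parse(text):
    # Stand-in for the code under test.
    return text.split(",") if text else []


def test_parse_empty():
    # This body never runs: the definition below reuses the same name and
    # replaces this function, so only coverage would flag these lines as missed.
    assert parse("") == []


def test_parse_empty():  # copy-paste-modify, but the name was never changed
    assert parse("a,b") == ["a", "b"]
```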

So how do you run it? To run coverage over your test code directly, you can run it with coverage run --source= and then wherever your source code and test code live, something like --source=src,tests, and it'll grab your source code and your test code and measure coverage on both. And then -m pytest. So altogether: coverage run --source=src,tests -m pytest, and then whatever pytest flags you've got.
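Written out as an actual command (a sketch; src and tests are placeholder directory names):

```shell
# Measure both the source code and the test code while running the tests
coverage run --source=src,tests -m pytest
```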

This is slightly different behavior than just running pytest, because it's kind of like running python -m pytest, and that puts the current directory in the search path. If putting the current directory in your search path is not a problem for you, or you just don't know what I'm talking about with that, then don't worry about it. But there's no report yet. It just runs it.

And now you have to run coverage report for a terminal report, or coverage html for the HTML report. Or, like me, you start with the terminal report and do the HTML report if necessary. It's not too hard, but it is two steps. And that's partly why I like the plugin pytest-cov. Yep, we're getting back to the plugin we wanted to talk about in the first place. pytest-cov is great.

For one, instead of running coverage run -m pytest and then coverage report, you can just run pytest and pass it whatever you want to cover, like --cov=src and --cov=tests.
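Side by side, it looks roughly like this (directory names are placeholders):

```shell
# Without the plugin: two steps
coverage run --source=src,tests -m pytest
coverage report

# With pytest-cov: one step, report included
pytest --cov=src --cov=tests
```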

And it's really not that much less typing, but it feels like less typing, I think, to me at least. And I can put those command line flags in the configuration file if I want. And actually, I'm probably using tox, so I'll probably have all of that in the tox file anyway.
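For instance, the flags can live in the pytest configuration, something like this (a sketch; the paths are placeholders):

```ini
# pytest.ini, or the [pytest] section of tox.ini
[pytest]
addopts = --cov=src --cov=tests --cov-report=term-missing
```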

So why do I care about running the plugin and not just running coverage itself? I don't know, it just seems easier. And it's not just for convenience, though. The pytest-cov plugin also brings a lot of other stuff. One of the things it does is handle subprocess support. So you can fork stuff in a subprocess, and that gets covered also, with no extra work from you.

Same goes with xdist. pytest-xdist is another plugin that allows you to run tests in parallel. And if you do that with coverage alone, you have to combine the output yourself.

But pytest-cov just does that automatically. pytest-cov combines the reports correctly, so you just get the final report, and it's all correct if you're using xdist. Another reason why people love this plugin. Also, pytest-cov doesn't add the current dir to the search path. So if you care about that, use the plugin.
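As a sketch, running in parallel with xdist while still getting one combined coverage report (the source directory name is a placeholder):

```shell
pytest -n auto --cov=src   # -n auto comes from pytest-xdist; pytest-cov merges the coverage data
```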

And there's a couple more reasons why I love it. You can set a --cov-fail-under flag, which says, if you set it to 100, or whatever you want, but let's say I set it to 100, that any test suite that falls under 100% coverage will fail the suite. None of the tests will fail, but the suite will fail because it didn't hit 100%. That's super awesome. And you can set that to 95 or whatever you want.
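For example:

```shell
# Fail the suite (not any individual test) if total coverage drops below the threshold
pytest --cov=src --cov-fail-under=100
```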

So far, these might seem like minor improvements, but they're not. All of these extra things build up, and they save people time and headaches, and that's a decent enough reason why pytest-cov is the number one downloaded plugin. But then there's another thing: contexts.

Coverage.py recently-ish, I don't know, within the last year or two, added context support. What does that mean? It means that you can figure out, for each line of code that's covered, which test it came from. So you can run pytest with just some of your tests,

and maybe just one test or one test class, and then see all the stuff it hits and make sure that it hits the source code that you think it's going to cover. Or you can run your whole suite, and for each line of code, you can see how many tests are hitting it. It's kind of fun. And when you're debugging what's going wrong, it can often be really valuable. It's a little fiddly to get working, though, but not with pytest-cov.

With this plugin, it's super easy to get this set up. And when you set it up, what you get is an HTML report with something for each line of code: you go over to the right, and there's a little dropdown where you can see all the tests that hit that line of code. It's super cool. I don't use it all the time, but when I need it, I really need it.
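Roughly, turning it on looks like this (a sketch; it assumes the coverage config enables contexts in the HTML report, and the source directory name is a placeholder):

```shell
# Record which test covered each line, then build the HTML report
pytest --cov=src --cov-context=test --cov-report=html

# and in the coverage config (e.g. .coveragerc):
#   [html]
#   show_contexts = True
```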

Another reason why pytest-cov is fantastic. So definitely check it out. There's also a tutorial on how to get the context thing set up. It's not that difficult, a short tutorial, and it's in the pytest-cov documentation. So I'll put a link to that in the show notes. So who do we thank for all of this? Coverage.py is maintained by Ned Batchelder. So thank you, Ned. Awesome. He's been supporting it for a long time.

pytest-cov is maintained by Ionel Cristian Mărieș. Now, I probably got that last name wrong, but I really want to thank Ionel, that's I-O-N-E-L, for putting his pronunciation hint on his about page somewhere. I think it's on his blog. Anyway, cool. pytest-cov and Coverage.py, love both of them. Also, pytest-cov is maintained by other people, not just Ionel.

Lots of people have added to it over the years. It's part of the pytest-dev group. Anyway, links are in the show notes. Check it out. Thanks for listening.

Thank you for listening, and thank you to everyone who has supported the show through purchases of the courses, both Hello, pytest, the new fastest way to learn pytest, and The Complete pytest Course, if you'd like to really become an expert at pytest. Both are available at courses.pythontest.com, and there you can also join the Python test community. That's all for now. Now go out and test something.