Introduction to Using Pytest




  • Pytest is a library for testing Python applications
  • A test suite is a collection of test cases, grouped together for the purpose of test execution
  • pytest automatically discovers files matching test_*.py in the test directory and loads the test cases from them
  • Naming rules
    • Test files: names start with test_ or end with _test
    • Test classes: classes in a test file whose names start with Test
    • Test cases: functions in a test file, or methods of a test class, whose names start with test
  • Assertions
    • Use the assert keyword
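The naming rules above can be illustrated with a minimal test module; the file name and function names here are made up for this sketch:

```python
# content of test_demo.py -- discovered because the filename starts with "test_"

def add(a, b):
    return a + b

class TestAdd:                   # collected: class name starts with "Test"
    def test_positive(self):     # collected: method name starts with "test"
        assert add(1, 2) == 3

def test_zero():                 # collected: function name starts with "test"
    assert add(0, 0) == 0
```

Running `pytest` in the directory containing this file collects both tests without any registration step.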


# Install
pip3 install pytest

# Check the version
$ pytest --version
pytest 7.4.3


$ pytest --help
usage: pytest [options] [file_or_dir] [file_or_dir] [...]

positional arguments:
  file_or_dir

general:
  -k EXPRESSION         Only run tests which match the given substring expression. An expression is a Python
                        evaluatable expression where all names are substring-matched against test names and their
                        parent classes. Example: -k 'test_method or test_other' matches all test functions and classes
                        whose name contains 'test_method' or 'test_other', while -k 'not test_method' matches those
                        that don't contain 'test_method' in their names. -k 'not test_method and not test_other' will
                        eliminate the matches. Additionally keywords are matched to classes and functions containing
                        extra names in their 'extra_keyword_matches' set, as well as functions which have names
                        assigned directly to them. The matching is case-insensitive.
  -m MARKEXPR           Only run tests matching given mark expression. For example: -m 'mark1 and not mark2'.
  --markers             show markers (builtin, plugin and per-project ones).
  -x, --exitfirst       Exit instantly on first error or failed test
  --fixtures, --funcargs
                        Show available fixtures, sorted by plugin appearance (fixtures with leading '_' are only shown
                        with '-v')
  --fixtures-per-test   Show fixtures per test
  --pdb                 Start the interactive Python debugger on errors or KeyboardInterrupt
  --pdbcls=modulename:classname
                        Specify a custom interactive Python debugger for use with --pdb. For example:
                        --pdbcls=IPython.terminal.debugger:TerminalPdb
  --trace               Immediately break when running each test
  --capture=method      Per-test capturing method: one of fd|sys|no|tee-sys
  -s                    Shortcut for --capture=no
  --runxfail            Report the results of xfail tests as if they were not marked
  --lf, --last-failed   Rerun only the tests that failed at the last run (or all if none failed)
  --ff, --failed-first  Run all tests, but run the last failures first. This may re-order tests and thus lead to
                        repeated fixture setup/teardown.
  --nf, --new-first     Run tests from new files first, then the rest of the tests sorted by file mtime
  --cache-show [CACHESHOW]
                        Show cache contents, don't perform collection or tests. Optional argument: glob (default:
                        '*')
  --cache-clear         Remove all cache contents at start of test run
  --lfnf={all,none}, --last-failed-no-failures={all,none}
                        With ``--lf``, determines whether to execute tests when there are no previously (known)
                        failures or when no cached ``lastfailed`` data was found. ``all`` (the default) runs the full
                        test suite again. ``none`` just emits a message about no known failures and exits
  --sw, --stepwise      Exit on test failure and continue from last failing test next time
  --sw-skip, --stepwise-skip
                        Ignore the first failing test but stop on the next failing test. Implicitly enables
                        --stepwise.

  --durations=N         Show N slowest setup/test durations (N=0 for all)
  --durations-min=N     Minimal duration in seconds for inclusion in slowest list. Default: 0.005.
  -v, --verbose         Increase verbosity
  --no-header           Disable header
  --no-summary          Disable summary
  -q, --quiet           Decrease verbosity
  --verbosity=VERBOSE   Set verbosity. Default: 0.
  -r chars              Show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed,
                        (X)passed, (p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. (w)arnings are
                        enabled by default (see --disable-warnings), 'N' can be used to reset the list. (default:
                        'fE')
  --disable-warnings, --disable-pytest-warnings
                        Disable warnings summary
  -l, --showlocals      Show locals in tracebacks (disabled by default)
  --no-showlocals       Hide locals in tracebacks (negate --showlocals passed through addopts)
  --tb=style            Traceback print mode (auto/long/short/line/native/no)
  --show-capture={no,stdout,stderr,log,all}
                        Controls how captured stdout/stderr/log is shown on failed tests. Default: all.
  --full-trace          Don't cut any tracebacks (default is to cut)
  --color=color         Color terminal output (yes/no/auto)
  --code-highlight={yes,no}
                        Whether code should be highlighted (only if --color is also enabled). Default: yes.
  --pastebin=mode       Send failed|all info to pastebin service
  --junit-xml=path      Create junit-xml style report file at given path
  --junit-prefix=str    Prepend prefix to classnames in junit-xml output

  -W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS
                        Set which warnings to report, see -W option of Python itself
  --maxfail=num         Exit after first num failures or errors
  --strict-config       Any warnings encountered while parsing the `pytest` section of the configuration file raise
                        errors
  --strict-markers      Markers not registered in the `markers` section of the configuration file raise errors
  --strict              (Deprecated) alias to --strict-markers
  -c FILE, --config-file=FILE
                        Load configuration from `FILE` instead of trying to locate one of the implicit configuration
                        files.
  --continue-on-collection-errors
                        Force test execution even if collection errors occur
  --rootdir=ROOTDIR     Define root directory for tests. Can be relative path: 'root_dir', './root_dir',
                        'root_dir/another_dir/'; absolute path: '/home/user/root_dir'; path with variables:
                        '$HOME/root_dir'.

collection:
  --collect-only, --co  Only collect tests, don't execute them
  --pyargs              Try to interpret all arguments as Python packages
  --ignore=path         Ignore path during collection (multi-allowed)
  --ignore-glob=path    Ignore path pattern during collection (multi-allowed)
  --deselect=nodeid_prefix
                        Deselect item (via node id prefix) during collection (multi-allowed)
  --confcutdir=dir      Only load conftest.py's relative to specified dir
  --noconftest          Don't load any conftest.py files
  --keep-duplicates     Keep duplicate tests
  --collect-in-virtualenv
                        Don't ignore tests in a local virtualenv directory
  --import-mode={prepend,append,importlib}
                        Prepend/append to sys.path when importing test modules and conftest files. Default: prepend.
  --doctest-modules     Run doctests in all .py modules
  --doctest-report={none,cdiff,ndiff,udiff,only_first_failure}
                        Choose another output format for diffs on doctest failure
  --doctest-glob=pat    Doctests file matching pattern, default: test*.txt
  --doctest-ignore-import-errors
                        Ignore doctest ImportErrors
  --doctest-continue-on-failure
                        For a given doctest, continue to run after the first failure

test session debugging and configuration:
  --basetemp=dir        Base temporary directory for this test run. (Warning: this directory is removed if it exists.)
  -V, --version         Display pytest version and information about plugins. When given twice, also display
                        information about plugins.
  -h, --help            Show help message and configuration info
  -p name               Early-load given plugin module name or entry point (multi-allowed). To avoid loading of
                        plugins, use the `no:` prefix, e.g. `no:doctest`.
  --trace-config        Trace considerations of conftest.py files
  --debug [DEBUG_FILE_NAME]
                        Store internal tracing debug information in this log file. This file is opened with 'w' and
                        truncated as a result, care advised. Default: pytestdebug.log.
  -o OVERRIDE_INI, --override-ini=OVERRIDE_INI
                        Override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`.
  --assert=MODE         Control assertion debugging tools.
                        'plain' performs no assertion debugging.
                        'rewrite' (the default) rewrites assert statements in test modules on import to provide assert
                        expression information.
  --setup-only          Only setup fixtures, do not execute tests
  --setup-show          Show setup of fixtures while executing tests
  --setup-plan          Show what fixtures and tests would be executed but don't execute anything

logging:
  --log-level=LEVEL     Level of messages to catch/display. Not set by default, so it depends on the root/parent log
                        handler's effective level, where it is "WARNING" by default.
  --log-format=LOG_FORMAT
                        Log format used by the logging module
  --log-date-format=LOG_DATE_FORMAT
                        Log date format used by the logging module
  --log-cli-level=LOG_CLI_LEVEL
                        CLI logging level
  --log-cli-format=LOG_CLI_FORMAT
                        Log format used by the logging module
  --log-cli-date-format=LOG_CLI_DATE_FORMAT
                        Log date format used by the logging module
  --log-file=LOG_FILE   Path to a file when logging will be written to
  --log-file-level=LOG_FILE_LEVEL
                        Log file logging level
  --log-file-format=LOG_FILE_FORMAT
                        Log format used by the logging module
  --log-file-date-format=LOG_FILE_DATE_FORMAT
                        Log date format used by the logging module
  --log-auto-indent=LOG_AUTO_INDENT
                        Auto-indent multiline messages passed to the logging module. Accepts true|on, false|off or an
                        integer.
  --log-disable=LOGGER_DISABLE
                        Disable a logger by name. Can be passed multiple times.

[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg|pyproject.toml file found:

  markers (linelist):   Markers for test functions
  empty_parameter_set_mark (string):
                        Default marker for empty parametersets
  norecursedirs (args): Directory patterns to avoid for recursion
  testpaths (args):     Directories to search for tests when no files or directories are given on the command line
  filterwarnings (linelist):
                        Each line specifies a pattern for warnings.filterwarnings. Processed after
                        -W/--pythonwarnings.
  usefixtures (args):   List of default fixtures to be used with this project
  python_files (args):  Glob-style file patterns for Python test module discovery
  python_classes (args):
                        Prefixes or glob names for Python test class discovery
  python_functions (args):
                        Prefixes or glob names for Python test function and method discovery
  disable_test_id_escaping_and_forfeit_all_rights_to_community_support (bool):
                        Disable string escape non-ASCII characters, might cause unwanted side effects(use at your own
                        risk)
  console_output_style (string):
                        Console output: "classic", or with additional progress information ("progress" (percentage) |
                        "count" | "progress-even-when-capture-no" (forces progress even when capture=no)
  xfail_strict (bool):  Default for the strict parameter of xfail markers when not given explicitly (default: False)
  tmp_path_retention_count (string):
                        How many sessions should we keep the `tmp_path` directories, according to
                        `tmp_path_retention_policy`.
  tmp_path_retention_policy (string):
                        Controls which directories created by the `tmp_path` fixture are kept around, based on test
                        outcome. (all/failed/none)
  enable_assertion_pass_hook (bool):
                        Enables the pytest_assertion_pass hook. Make sure to delete any previously generated pyc cache
                        files.
  junit_suite_name (string):
                        Test suite name for JUnit report
  junit_logging (string):
                        Write captured log messages to JUnit report: one of no|log|system-out|system-err|out-err|all
  junit_log_passing_tests (bool):
                        Capture log information for passing tests to JUnit report:
  junit_duration_report (string):
                        Duration time to report: one of total|call
  junit_family (string):
                        Emit XML for schema: one of legacy|xunit1|xunit2
  doctest_optionflags (args):
                        Option flags for doctests
  doctest_encoding (string):
                        Encoding used for doctest files
  cache_dir (string):   Cache directory path
  log_level (string):   Default value for --log-level
  log_format (string):  Default value for --log-format
  log_date_format (string):
                        Default value for --log-date-format
  log_cli (bool):       Enable log display during test run (also known as "live logging")
  log_cli_level (string):
                        Default value for --log-cli-level
  log_cli_format (string):
                        Default value for --log-cli-format
  log_cli_date_format (string):
                        Default value for --log-cli-date-format
  log_file (string):    Default value for --log-file
  log_file_level (string):
                        Default value for --log-file-level
  log_file_format (string):
                        Default value for --log-file-format
  log_file_date_format (string):
                        Default value for --log-file-date-format
  log_auto_indent (string):
                        Default value for --log-auto-indent
  pythonpath (paths):   Add paths to sys.path
  faulthandler_timeout (string):
                        Dump the traceback of all threads if a test takes more than TIMEOUT seconds to finish
  addopts (args):       Extra command line options
  minversion (string):  Minimally required pytest version
  required_plugins (args):
                        Plugins that must be present for pytest to run
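Several of the ini options listed above are commonly combined in one configuration file; a small pytest.ini sketch (the values are illustrative, not recommendations):

```ini
[pytest]
; where to look for tests when no path is given on the command line
testpaths = tests
; register custom markers so -m and --strict-markers work cleanly
markers =
    slow
    integration
; extra command line options appended to every run
addopts = -r fE --maxfail=5
; live logging during the test run
log_cli = true
log_cli_level = INFO
```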

Environment variables:
  PYTEST_ADDOPTS           Extra command line options
  PYTEST_PLUGINS           Comma-separated plugins to load during startup
  PYTEST_DISABLE_PLUGIN_AUTOLOAD Set to disable plugin auto-loading
  PYTEST_DEBUG             Set to enable debug tracing of pytest's internals

to see available markers type: pytest --markers
to see available fixtures type: pytest --fixtures
(shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option)



  • Run all the tests under a directory
pytest <test-dir>/




  • Run a specific test in a file by appending :: and the function name, e.g. pytest <test-file>::<test-name>
  • Use a substring expression to select the tests to run
pytest -k 'test_method or test_other'
  • Only run tests matching a given mark expression
pytest -m 'mark1 and not mark2'


  • List the test cases that would be collected, without running them
pytest --collect-only


  • Multi-process runs use -n num (provided by the pytest-xdist plugin)
    • -n auto uses the number of CPUs
    • -n num sets the number of test worker processes
    • pytest-parallel is a pytest plugin for parallel and concurrent testing; pytest-xdist is a similar plugin
      • pip install pytest-parallel
      • Configure the number of processes and threads with --workers and --tests-per-worker
      • pytest --workers 2 --tests-per-worker 4
        • 2 processes in parallel, each running at most 4 threads, so at most 8 threads in total


  • Controlling the test run and its output
    • -x stop immediately on the first failure
    • --maxfail set the maximum number of failures before stopping
    • -s equivalent to --capture=no; disables output capturing
    • -l/--showlocals show local variables on failure
    • -v verbose output; the opposite of -q
pytest -xvs
  • Output capturing
    • pytest --capture=fd the default: captures output at the file-descriptor level
    • pytest -s streams output live; the captured-output/log sections are no longer reported
    • pytest --capture=sys captures at the Python level, intercepting only sys.stdout and sys.stderr
    • pytest --capture=tee-sys combines pytest -s and pytest --capture=sys: output is captured and shown live at the same time
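To see what capturing does in practice, the hypothetical test below prints to stdout; by default pytest shows the print output only if the test fails, while `pytest -s` streams it live:

```python
# content of a hypothetical test file, e.g. test_capture.py

def greet(name):
    # this print is captured by default; `pytest -s` (or --capture=no)
    # shows it immediately instead
    print(f"building greeting for {name}")
    return f"hello {name}"

def test_greet():
    assert greet("pytest") == "hello pytest"
```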


import pytest

# content of the test file
def inc(x):
    return x + 1

def test_answer():
    assert inc(3) == 5
  • Running the tests reports one failed test case
    • pytest -v shows detailed error information
    • In the test results
      • F means failed
      • s means skipped
      • . means passed
$ pytest
================================================ test session starts =================================================
platform linux -- Python 3.8.10, pytest-7.4.3, pluggy-1.3.0
rootdir: /root
collected 1 item

F                                                                                                                [100%]

====================================================== FAILURES ======================================================
____________________________________________________ test_answer _____________________________________________________

    def test_answer():
>       assert inc(3) == 5
E       assert 4 == 5
E        +  where 4 = inc(3)

AssertionError
============================================== short test summary info ===============================================
FAILED - assert 4 == 5
================================================= 1 failed in 0.02s ==================================================



import pytest

@pytest.mark.eq
def test_eq1():
    assert (1) == (1)

@pytest.mark.eq
def test_eq2():
    assert (1, 2) == (1, 2)

@pytest.mark.join
def test_join1():
    assert "hello world" == "hello " + "hello"

@pytest.mark.join
def test_join2():
    assert "hi str" == "hi str"

# content of pytest.ini (the runs below show "configfile: pytest.ini")
[pytest]
markers =
    eq
    join
  • Invocation
$ pytest -m eq
================================================ test session starts =================================================
platform linux -- Python 3.8.10, pytest-7.4.3, pluggy-1.3.0
rootdir: /root/abc
configfile: pytest.ini
collected 4 items / 2 deselected / 2 selected

..                                                                                                               [100%]

========================================== 2 passed, 2 deselected in 0.01s ===========================================
$ pytest -m join
================================================ test session starts =================================================
platform linux -- Python 3.8.10, pytest-7.4.3, pluggy-1.3.0
rootdir: /root/abc
configfile: pytest.ini
collected 4 items / 2 deselected / 2 selected

F.                                                                                                               [100%]

====================================================== FAILURES ======================================================
_____________________________________________________ test_join1 _____________________________________________________

    def test_join1():
>       assert "hello world" == "hello " + "hello"
E       AssertionError: assert 'hello world' == 'hello hello'
E         - hello hello
E         + hello world

AssertionError
============================================== short test summary info ===============================================
FAILED - AssertionError: assert 'hello world' == 'hello hello'
===================================== 1 failed, 1 passed, 2 deselected in 0.02s ======================================


import pytest

@pytest.mark.skip(reason="skip example")
def test_answer():
    assert True


import pytest

@pytest.mark.parametrize("v1, v2", [('hi1', 'hi1'), ('hi2', 'hi2')])
def test_eq(v1, v2):
    assert v1 == v2
$ pytest
================================================ test session starts =================================================
platform linux -- Python 3.8.10, pytest-7.4.3, pluggy-1.3.0
rootdir: /root/abc
configfile: pytest.ini
collected 2 items

..                                                                                                               [100%]

================================================= 2 passed in 0.01s ==================================================
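The parametrize example above can be extended with `pytest.param`, which attaches marks to individual parameter sets; the xfail case here is made up for illustration:

```python
import pytest

@pytest.mark.parametrize(
    "v1, v2",
    [
        ("hi1", "hi1"),
        ("hi2", "hi2"),
        # this one case is expected to fail without failing the whole run
        pytest.param("hi3", "HI3", marks=pytest.mark.xfail(reason="case differs")),
    ],
)
def test_eq(v1, v2):
    assert v1 == v2
```

Running pytest reports the first two cases as passed and the third as xfailed.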


  • A fixture provides a reliable, consistent context for tests

  • Run pytest --fixtures to list the available fixtures

  • Fixtures support a scope argument

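A sketch of fixture scope, assuming a hypothetical expensive resource: with scope="module", the fixture body runs once per module instead of once per test:

```python
import pytest

def make_conn():
    # stand-in for an expensive resource such as a DB connection
    return {"open": True}

@pytest.fixture(scope="module")   # one shared instance for the whole module
def conn():
    return make_conn()

def test_first(conn):
    assert conn["open"]

def test_second(conn):
    # receives the same object as test_first, because of scope="module"
    assert conn["open"]
```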

import pytest

@pytest.fixture
def data():
    return 'hi'

def test_hi(data):
    assert 'hi' == data
$ pytest
================================================ test session starts =================================================
platform linux -- Python 3.8.10, pytest-7.4.3, pluggy-1.3.0
rootdir: /root/abc
configfile: pytest.ini
collected 1 item

.                                                                                                                [100%]

================================================= 1 passed in 0.01s ==================================================



pytest prints colored logs, so the output can contain escape sequences such as `31m`; the cause is pytest's own logging color format settings (see the related options in pytest --help)

  • Option 1: turn colors off via environment variables
export PY_COLORS=0
# or
export NO_COLOR=1
  • Option 2: turn colors off on the command line with pytest --color=no
$ pytest --help
  --color=color         Color terminal output (yes/no/auto)

Outputting an HTML report

pytest-html is a pytest plugin that generates HTML reports (see its official documentation)

  • Install
pip install pytest-html
  • help
$ pytest --help | grep html
  --html=path           create html report file at given path.
  --self-contained-html
                        create a self-contained html file containing all
                        necessary styles, scripts, and images - this means
                        that the report may take a while to render
  • Write the test results to an HTML file; --self-contained-html inlines the CSS styles so everything lives in a single HTML file
pytest --self-contained-html --html=report.html

Integrating with Allure

  • Allure is a lightweight and very flexible open-source test reporting framework; its highlights:
    • It supports most mainstream test frameworks, e.g. TestNG, Pytest, JUnit
    • It is simple to use and easy to integrate
  • Install and deploy Allure (reference steps):
apt install openjdk-8-jdk
dpkg -i allure_2.24.1-1_all.deb
  • Install the allure-pytest plugin
pip install allure-pytest
  • help
$ pytest --help | grep allure
  --allure-ids=IDS_SET  Comma-separated list of IDs.
  --alluredir=DIR       Generate Allure report in the specified directory (may
  --clean-alluredir     Clean alluredir folder if it exists
  --allure-no-capture   Do not attach pytest captured logging/stdout/stderr
  • Generate the report; allure generate parameters:
    • -o output path of the generated report
    • -c|--clean remove the old report first
# 1. Write the raw test results to the given path
pytest --alluredir=./report/tmp

# 2. Generate the test report
allure generate report/tmp/ -o report/html --clean

# 3. Open the report; -h/-p specify the IP and port
allure open report/html/

# or serve the report via a local web server
allure serve report/html/
  • To configure Allure ENVIRONMENT variables, put an environment.properties file under ./report/tmp/

The environment variables then appear in the report; an environment.xml file can be used for the same purpose

  • Allure markers can add more content to the report, e.g. @allure.feature("xxx") / @allure.step("xxx")

