
How to re-run failed tests and maintain state between test runs


Usage
The plugin provides two command line options to rerun failures from the
last pytest invocation:

- --lf, --last-failed - to only re-run the failures.
- --ff, --failed-first - to run the failures first and then the rest of the tests.

For cleanup (usually not needed), a --cache-clear option allows removing all cross-session cache contents ahead of a test run.

Other plugins may access the config.cache object to set/get json encodable values
between pytest invocations.
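For instance, a conftest.py hook could persist a value across runs along these lines (a minimal sketch; the key name "myplugin/run_count" and the choice of hook are illustrative, not part of the cacheprovider documentation):

# content of conftest.py -- illustrative sketch
def pytest_configure(config):
    # read a previously stored JSON-encodable value, defaulting to 0
    run_count = config.cache.get("myplugin/run_count", 0)
    # store the updated value for the next invocation
    config.cache.set("myplugin/run_count", run_count + 1)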

Note
This plugin is enabled by default, but can be disabled if needed: see Deactivating /
unregistering a plugin by name (the internal name for this plugin is cacheprovider).

Rerunning only failures or failures first


First, let’s create 50 test invocations of which only 2 fail:

# content of test_50.py
import pytest


@pytest.mark.parametrize("i", range(50))
def test_num(i):
    if i in (17, 25):
        pytest.fail("bad luck")

If you run this for the first time you will see two failures:

$ pytest -q
.................F.......F........................                  [100%]
================================= FAILURES =================================
_______________________________ test_num[17] _______________________________

i = 17

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>           pytest.fail("bad luck")
E           Failed: bad luck

test_50.py:7: Failed
_______________________________ test_num[25] _______________________________

i = 25

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>           pytest.fail("bad luck")
E           Failed: bad luck

test_50.py:7: Failed
========================= short test summary info ==========================
FAILED test_50.py::test_num[17] - Failed: bad luck
FAILED test_50.py::test_num[25] - Failed: bad luck
2 failed, 48 passed in 0.12s

If you then run it with --lf:

$ pytest --lf
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-7.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 2 items
run-last-failure: rerun previous 2 failures

test_50.py FF                                                        [100%]

================================= FAILURES =================================
_______________________________ test_num[17] _______________________________

i = 17

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>           pytest.fail("bad luck")
E           Failed: bad luck

test_50.py:7: Failed
_______________________________ test_num[25] _______________________________

i = 25

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>           pytest.fail("bad luck")
E           Failed: bad luck

test_50.py:7: Failed
========================= short test summary info ==========================
FAILED test_50.py::test_num[17] - Failed: bad luck
FAILED test_50.py::test_num[25] - Failed: bad luck
============================ 2 failed in 0.12s =============================

You have run only the two failing tests from the last run, while the 48 passing tests have
not been run (“deselected”).

Now, if you run with the --ff option, all tests will be run but the previous failures will be executed first (as can be seen from the series of FF and dots):

$ pytest --ff
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-7.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 50 items
run-last-failure: rerun previous 2 failures first

test_50.py FF................................................        [100%]

================================= FAILURES =================================
_______________________________ test_num[17] _______________________________

i = 17

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>           pytest.fail("bad luck")
E           Failed: bad luck

test_50.py:7: Failed
_______________________________ test_num[25] _______________________________

i = 25

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
>           pytest.fail("bad luck")
E           Failed: bad luck

test_50.py:7: Failed
========================= short test summary info ==========================
FAILED test_50.py::test_num[17] - Failed: bad luck
FAILED test_50.py::test_num[25] - Failed: bad luck
======================= 2 failed, 48 passed in 0.12s =======================

The --nf, --new-first options run new tests first, followed by the rest of the tests; in both groups, tests are also sorted by file modification time, with more recent files coming first.
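For example, after adding a new test module you could run the new tests first with (an illustrative invocation only):

$ pytest --new-first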

Behavior when no tests failed in the last run


When no tests failed in the last run, or when no cached lastfailed data was found, pytest can be configured either to run all of the tests or no tests, using the --last-failed-no-failures option, which takes one of the following values:

pytest --last-failed --last-failed-no-failures all    # run all tests (default behavior)
pytest --last-failed --last-failed-no-failures none   # run no tests and exit

The new config.cache object


Plugins or conftest.py support code can get a cached value using the
pytest config object. Here is a basic example plugin which implements a fixture which
re-uses previously created state across pytest invocations:

# content of test_caching.py
import pytest
import time


def expensive_computation():
    print("running expensive computation...")


@pytest.fixture
def mydata(request):
    val = request.config.cache.get("example/value", None)
    if val is None:
        expensive_computation()
        val = 42
        request.config.cache.set("example/value", val)
    return val


def test_function(mydata):
    assert mydata == 23

If you run this command for the first time, you can see the print statement:

$ pytest -q
F                                                                    [100%]
================================= FAILURES =================================
______________________________ test_function _______________________________

mydata = 42

    def test_function(mydata):
>       assert mydata == 23
E       assert 42 == 23

test_caching.py:20: AssertionError
-------------------------- Captured stdout setup ---------------------------
running expensive computation...
========================= short test summary info ==========================
FAILED test_caching.py::test_function - assert 42 == 23
1 failed in 0.12s

If you run it a second time, the value will be retrieved from the cache and nothing will be
printed:

$ pytest -q
F                                                                    [100%]
================================= FAILURES =================================
______________________________ test_function _______________________________

mydata = 42

    def test_function(mydata):
>       assert mydata == 23
E       assert 42 == 23

test_caching.py:20: AssertionError
========================= short test summary info ==========================
FAILED test_caching.py::test_function - assert 42 == 23
1 failed in 0.12s

See the config.cache fixture for more details.
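The same storage is also exposed through pytest's built-in cache fixture, so a test or fixture can use it without going through request.config. A minimal sketch (the key name "example/counter" is made up):

# content of test_counter.py -- illustrative only
def test_with_cache(cache):
    # `cache` is the built-in fixture wrapping config.cache
    runs = cache.get("example/counter", 0)
    cache.set("example/counter", runs + 1)
    assert runs >= 0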

Inspecting Cache content


You can always peek at the content of the cache using the --cache-show command line
option:

$ pytest --cache-show
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-7.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
cachedir: /home/sweet/project/.pytest_cache
--------------------------- cache values for '*' ---------------------------
cache/lastfailed contains:
  {'test_caching.py::test_function': True}
cache/nodeids contains:
  ['test_caching.py::test_function']
cache/stepwise contains:
  []
example/value contains:
  42

========================== no tests ran in 0.12s ===========================

--cache-show takes an optional argument to specify a glob pattern for filtering:

$ pytest --cache-show example/*
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-7.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
cachedir: /home/sweet/project/.pytest_cache
----------------------- cache values for 'example/*' -----------------------
example/value contains:
  42

========================== no tests ran in 0.12s ===========================

Clearing Cache content


You can instruct pytest to clear all cache files and values by adding the --cache-clear option like this:

pytest --cache-clear

This is recommended for invocations from Continuous Integration servers, where isolation and correctness are more important than speed.

Stepwise
As an alternative to --lf -x, especially for cases where you expect a large part of the test suite will fail, --sw, --stepwise allows you to fix them one at a time. The test suite will run until the first failure and then stop. At the next invocation, tests will continue from the last failing test and then run until the next failing test. You may use the --stepwise-skip option to ignore one failing test and stop the test execution on the second failing test instead. This is useful if you get stuck on a failing test and just want to ignore it until later. Providing --stepwise-skip will also enable --stepwise implicitly.
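A typical stepwise session therefore looks something like this (illustrative commands only):

pytest --sw                  # runs until the first failing test, then stops
pytest --sw                  # continues from that failing test, stops at the next failure
pytest --sw --stepwise-skip  # ignores one stuck failure, stops at the second one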

pytest-failed-screenshot: pytest plugin

For UI automation test cases using Selenium and Appium, screenshots are saved when tests fail and are attached to the report when Allure is used.

Helium is supported; note that the webdriver process cannot be killed inside a test case (see the helium demo below).

install

pip install pytest-failed-screenshot

Usage

Command line: pytest --screenshot={on:off} --screenshot_path={on:off:absolute path}

Options:
- screenshot: used to enable the plugin; default "off".
- screenshot_path:
  - off (default): the screenshot is not saved and is only attached to the allure report.
  - on: screenshots are saved to the "./screenshot/%Y-%m-%d/" directory in the project root. If the directory contains historical screenshots, they are archived, moved to the "./screenshot/history/%Y-%m-%d/{times}" directory, and attached to the allure report.
  - absolute path: the screenshot is saved to that path and attached to the report.

Demo
The Selenium and Appium driver instances must be provided through a fixture.

import pytest
from selenium import webdriver
from helium import start_chrome, kill_browser  # required for the helium demo


@pytest.fixture()
def init_driver():
    driver = webdriver.Chrome()
    yield driver
    driver.close()
    driver.quit()


def test_login_success(init_driver):
    init_driver.get("https://github.com/fungaegis/pytest-failed-screenshot")
    assert False


# helium demo
@pytest.fixture()
def init_helium():
    yield None
    kill_browser()


@pytest.mark.usefixtures("init_helium")
def test_helium_demo():
    start_chrome("https://github.com/fungaegis/pytest-failed-screenshot")
    # the webdriver process cannot be killed within a test case
    assert False

Command: pytest --screenshot=on --screenshot_path=on

Tip: the plugin can be used together with pytest-xdist.

Event Listeners in Python selenium

Event listeners are a set of functions in the Selenium Python bindings that wait for an event to occur; that event may be a user clicking an element, finding an element, changing a value, or an exception being raised. The listeners are programmed to react to that input or signal.

There is a function that reacts just before an event happens, and another that reacts after the event.

Put simply: if person A tries to slap person B, person B tries to dodge the hit (before the event); if A does land the slap, B then reacts to it, either by slapping back or by keeping quiet (after the event).

Even though Selenium Python provides listeners, it does not decide what should happen when an event occurs; the user must supply all of that behaviour. For example, the user must implement methods defining what should happen before and after a click.

EventFiringWebDriver is a wrapper around the Selenium Python WebDriver; it provides all the methods that the plain driver provides, plus a way to attach a class implementing the listeners (in the examples below the listener is passed to the EventFiringWebDriver constructor).

Event Listeners present in Selenium Python

AbstractEventListener provides the pre- and post-event listener hooks; to use them, we implement the ones we need by inheriting from AbstractEventListener.

before_change_value_of :

before_change_value_of will be invoked before we change the value of an element; this method accepts the target element and the driver as parameters.

after_change_value_of :

after_change_value_of will be invoked after the target element's value changes; this method accepts the target element and the driver as parameters.

before_click :

before_click will be invoked before an element is clicked with the click() method in Selenium; this method accepts the web element and the driver as parameters.

after_click :

after_click will be invoked after the click() operation completes.

before_find :

before_find will be invoked before an element is located with the find_element method; this method accepts the By strategy, the locator value, and the driver as parameters.

after_find :

after_find will be invoked after the find_element operation completes.

before_navigate_back :

before_navigate_back will be invoked before the driver's back() method executes; this method accepts the driver as a parameter.

after_navigate_back :

after_navigate_back will be executed after the back() method has executed.

before_navigate_forward :

before_navigate_forward will be invoked before the driver's forward() method executes; this method accepts the driver as a parameter.

after_navigate_forward :

after_navigate_forward will be invoked after the forward() method has executed.

before_navigate_refresh :

before_navigate_refresh will be executed before the page is refreshed with the refresh() method; this method accepts the driver as a parameter.

after_navigate_refresh :

after_navigate_refresh will be executed after the page has been refreshed with the refresh() method.

before_navigate_to :

before_navigate_to will be executed before navigating to any webpage with the get() method; this method accepts the web address (a string) and the driver as parameters.

after_navigate_to :

after_navigate_to will be executed after the get() method has executed.

before_execute_script :

before_execute_script will be executed before any JavaScript code is run with execute_script; this method accepts the script and the driver as parameters.

after_execute_script :

after_execute_script will be executed after the JavaScript code has run.

on_exception :

on_exception will be executed whenever an exception occurs in Selenium, irrespective of whether the user handles the exception or not.

Let's create an example program for listeners in Python.

import unittest
from selenium import webdriver
from selenium.webdriver.support.events import EventFiringWebDriver, AbstractEventListener


class MyListener(AbstractEventListener):
    def before_navigate_to(self, url, driver):
        print("Before navigate to %s" % url)

    def after_navigate_to(self, url, driver):
        print("After navigate to %s" % url)


class Test(unittest.TestCase):
    def test_logging_file(self):
        # adjust the chromedriver path for your machine
        driver_plain = webdriver.Chrome(executable_path=r'D:\PATH\chromedriver.exe')
        edriver = EventFiringWebDriver(driver_plain, MyListener())
        edriver.get("https://google.com")


if __name__ == "__main__":
    unittest.main()


Complete code for Python Selenium listeners

import unittest
from selenium import webdriver
from selenium.webdriver.support.events import EventFiringWebDriver, AbstractEventListener


class MyListener(AbstractEventListener):
    def before_navigate_to(self, url, driver):
        print("Before navigating to ", url)

    def after_navigate_to(self, url, driver):
        print("After navigating to ", url)

    def before_navigate_back(self, driver):
        print("Before navigating back ", driver.current_url)

    def after_navigate_back(self, driver):
        print("After navigating back ", driver.current_url)

    def before_navigate_forward(self, driver):
        print("Before navigating forward ", driver.current_url)

    def after_navigate_forward(self, driver):
        print("After navigating forward ", driver.current_url)

    def before_find(self, by, value, driver):
        print("before_find")

    def after_find(self, by, value, driver):
        print("after_find")

    def before_click(self, element, driver):
        print("before_click")

    def after_click(self, element, driver):
        print("after_click")

    def before_change_value_of(self, element, driver):
        print("before_change_value_of")

    def after_change_value_of(self, element, driver):
        print("after_change_value_of")

    def before_execute_script(self, script, driver):
        print("before_execute_script")

    def after_execute_script(self, script, driver):
        print("after_execute_script")

    def before_close(self, driver):
        print("before_close")

    def after_close(self, driver):
        print("after_close")

    def before_quit(self, driver):
        print("before_quit")

    def after_quit(self, driver):
        print("after_quit")

    def on_exception(self, exception, driver):
        print("on_exception")


class Test(unittest.TestCase):
    def test_logging_file(self):
        # adjust the chromedriver path for your machine
        driver_plain = webdriver.Chrome(executable_path=r'D:\PATH\chromedriver.exe')
        edriver = EventFiringWebDriver(driver_plain, MyListener())
        edriver.get("https://google.com")
        edriver.find_element_by_name("q").send_keys("Sendkeys with listener")
        edriver.find_element_by_xpath("//input[contains(@value,'Search')]").click()
        edriver.close()


if __name__ == "__main__":
    unittest.main()

*************************************************************************************
pytest-order - a pytest plugin to order test execution
pip install pytest-order

pytest-order is a pytest plugin that allows you to customize the order in which your tests are run. It uses the marker order that defines when a specific test shall run, either by using an ordinal number, or by specifying the relationship to other tests.

pytest-order is a fork of pytest-ordering that provides additional features like ordering relative to other tests.

pytest-order works with Python 3.7 - 3.12, with pytest versions >= 5.0.0 for all versions up to Python 3.9, and pytest >= 6.2.4 for Python >= 3.10. pytest-order runs on Linux, macOS and Windows.

Documentation
Apart from this overview, the following information is available:

- usage documentation for the latest release
- usage documentation for the current main branch
- most examples shown in the documentation can also be found in the repository
- the Release Notes with a list of changes in the latest versions
- a list of open issues in the original project and their handling in pytest-order

Features

pytest-order provides the following features:

- ordering of tests by index
- ordering of tests both from the start and from the end (via negative index)
- ordering of tests relative to each other (via the before and after marker attributes; see the sketch after this list)
- session-, module- and class-scope ordering via the order-scope option
- directory scope ordering via the order-scope-level option
- hierarchical module and class-level ordering via the order-group-scope option
- ordering tests with pytest-dependency markers if using the order-dependencies option; more information about pytest-dependency compatibility here
- sparse ordering of tests via the sparse-ordering option
- usage of custom markers for ordering using the sparse-ordering option
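For instance, relative ordering might look like this (a minimal sketch with made-up test names, assuming all three tests live in one module):

import pytest


@pytest.mark.order(after="test_second")
def test_third():
    # runs only after test_second has run
    assert True


def test_second():
    assert True


@pytest.mark.order(before="test_second")
def test_first():
    # runs before test_second
    assert True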

Overview

(adapted from the original project)

Have you ever wanted to easily run one of your tests before any others
run? Or run some tests last? Or run this one test before that other
test? Or make sure that this group of tests runs after this other group
of tests?

Now you can.

Install with:

pip install pytest-order

This defines the order marker that you can use in your code with
different attributes.

For example, this code:

import pytest


@pytest.mark.order(2)
def test_foo():
    assert True


@pytest.mark.order(1)
def test_bar():
    assert True

yields the output:

$ pytest test_foo.py -vv
============================= test session starts ==============================
platform darwin -- Python 3.7.1, pytest-5.4.3, py-1.8.1, pluggy-0.13.1 -- env/bin/python
plugins: order
collected 2 items

test_foo.py:7: test_bar PASSED
test_foo.py:3: test_foo PASSED

=========================== 2 passed in 0.01 seconds ===========================

Contributing

Contributions are very welcome. Tests can be run with tox; please ensure the coverage at least stays the same before you submit a pull request.
