In part one, we covered the basics of pytest and wrote our first network tests. We tested BGP and OSPF on a single device, then extended it to multiple devices. We also looked at parametrization and how it helps treat each device and each neighbour as an independent test.
In this part, we will cover inventory management with Nornir and pytest fixtures.

Nornir Introduction
Nornir is a Python automation framework designed for network engineers. Instead of writing your own logic to connect to devices, manage inventory, and run tasks in parallel, Nornir handles all of that for you. We have a dedicated series on Nornir, which you can check out here, so we are not going to do a deep dive in this post.
The reason we are using Nornir here is for inventory and task management. Instead of hardcoding a list of IP addresses in our collection file, we define our devices in a hosts file with groups, credentials, and other attributes. This makes it much easier to manage a large number of devices and filter them by group when needed. In our case, we can run tests only against a subset of devices, without changing any of the test logic.
Please note that even though we are using a local inventory file here, Nornir also supports using external inventory sources like NetBox or Infrahub. This means your devices and groups can all be pulled dynamically from your source of truth instead of being maintained in a local file. We will cover this in an upcoming post.
Nornir also runs tasks in parallel, which means if you have hundreds of devices, it collects data from all of them at the same time instead of one by one. This makes a big difference when you are running tests at scale.
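To picture what that parallelism looks like, here is a rough standard-library sketch of the idea behind Nornir's threaded runner. This is not Nornir code; the host names and the collect() stub are placeholders for illustration.

```python
# Sketch of the threaded-runner idea: run the same task against every host
# concurrently instead of one at a time. collect() stands in for a real task.
from concurrent.futures import ThreadPoolExecutor

hosts = ["r1", "r2", "r3"]

def collect(host):
    # a real task would connect to the device and run show commands here
    return f"{host}: collected"

# num_workers in Nornir's config plays the same role as max_workers here
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(collect, hosts))

print(results)  # pool.map preserves input order
```

With five workers and hundreds of hosts, the total runtime is roughly the slowest batch rather than the sum of every device, which is where the speed-up at scale comes from.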
Building Up Gradually
One thing I want to be clear about before we go further is that the code we have written so far, or are going to write, is not the final or the best version. We are building this up step by step on purpose. Each iteration introduces a new concept and improves on the previous one, so you can follow along and understand why each change is being made rather than just being handed a finished solution that is hard to reason about. By the end of the series, the code will look quite different from where we started, and that is the point.
Refactoring with Nornir
We will now refactor what we have covered so far using Nornir for inventory management and task execution. Before we look at the code, a quick note on credentials. Storing usernames and passwords in plain text in a file is not ideal. For testing, it is fine, but in a real environment, you should be pulling credentials from a secrets manager like HashiCorp Vault or environment variables. We will cover this in a later post.
Directory Structure
We put all the Nornir files in a dedicated directory called nornir_files, which keeps things organised and separate from the test code. Here is what each file does.
nornir_files
├── config.yml
├── defaults.yml
├── groups.yml
└── hosts.yml

1 directory, 4 files

- config.yml is the main Nornir configuration file. It tells Nornir which inventory plugin to use, where to find the inventory files, how many workers to run in parallel, and whether to enable logging.
- hosts.yml defines all your devices. Each device has a hostname, and you can assign it to one or more groups.
- groups.yml defines the groups your hosts belong to. Here we have an eos group with the platform set, and a core group for devices we want to run core tests against.
- defaults.yml defines default values that apply to all hosts unless overridden, in this case, the username and password.
#config.yml
---
inventory:
  plugin: SimpleInventory
  options:
    host_file: 'nornir_files/hosts.yml'
    group_file: 'nornir_files/groups.yml'
    defaults_file: 'nornir_files/defaults.yml'
runner:
  plugin: threaded
  options:
    num_workers: 5
logging:
  enabled: false

#hosts.yml
---
r1:
  hostname: 192.168.200.101
  groups:
    - eos
    - core
r2:
  hostname: 192.168.200.102
  groups:
    - eos
    - core
r3:
  hostname: 192.168.200.103
  groups:
    - eos

#groups.yml
---
eos:
  platform: eos
core: {}

#defaults.yml
---
username: admin
password: admin

Data Collection
The collection logic lives in a separate file called nornir_collect.py (outside of the nornir_files directory). In this file, we have two functions. The first, collect_eapi_data, manages the connection to the device using pyeapi and returns a node object.
#nornir_collect.py
import pyeapi

def collect_eapi_data(task):
    connection = pyeapi.client.connect(
        transport="https",
        host=task.host.hostname,
        username=task.host.username,
        password=task.host.password,
        port=443,
    )
    node = pyeapi.client.Node(connection)
    return node

def collect_core_eos(task):
    commands = ["show ip bgp summary", "show ip ospf neighbor"]
    node = collect_eapi_data(task)
    output = node.enable(commands)
    task.host["bgp_summary"] = output[0]["result"]["vrfs"]["default"]["peers"]
    task.host["ospf_neighbors"] = output[1]["result"]["vrfs"]["default"]["instList"]["1"]["ospfNeighborEntries"]
The second, collect_core_eos, is the actual Nornir task. It runs both show commands in a single call, then filters the output down and stores just the data we need directly on the host object. This means once the task runs, the BGP and OSPF data are available on each host for the rest of the test run.
In Nornir, every task function receives a task object as its first argument. This is passed automatically by Nornir when it runs the task; you do not need to pass it yourself. The task object gives you access to the current host being processed through task.host, which is how we get the hostname, username, and password to establish the connection. It is also how we store the results back on the host using task.host["bgp_summary"] and task.host["ospf_neighbors"].
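The calling convention can be mimicked in plain Python. This is not Nornir's real internals; the simplified Host and Task classes below are stand-ins to show who creates the task object and how results end up on the host.

```python
# A plain-Python mimic of how a framework drives a task function: it builds
# one Task per host and passes it in -- your function never creates it.
class Host(dict):
    """Behaves like a dict so task results can be stored on it."""
    def __init__(self, name, hostname):
        super().__init__()
        self.name = name
        self.hostname = hostname

class Task:
    """The framework builds one of these per host and passes it to your function."""
    def __init__(self, host):
        self.host = host

def collect(task):
    # the task function only reads from and writes to task.host
    task.host["collected_from"] = task.host.hostname

hosts = [Host("r1", "192.168.200.101"), Host("r2", "192.168.200.102")]
for host in hosts:
    collect(Task(host))  # the framework makes this call for every host

print(hosts[0]["collected_from"])  # 192.168.200.101
```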
Test File
The reason we structured it this way is to keep the data collection completely separate from the tests. Nornir manages connecting to the devices and running the commands, and the results are stored on each host object. The test file then just reads from those host objects and builds the parametrize lists.
The test file starts by initialising Nornir and filtering the inventory down to only the hosts that belong to both the eos and core groups. It then runs the collection task against those filtered hosts. All of this happens at the top of the file before any tests are defined, which is important because pytest needs the data to be ready before it can collect and parametrize the tests.
#test_net.py
import pytest
from nornir import InitNornir
from nornir.core.filter import F
from nornir_collect import collect_core_eos

nr = InitNornir(config_file="nornir_files/config.yml")
nr_filtered = nr.filter(F(groups__contains="eos") & F(groups__contains="core"))
results = nr_filtered.run(task=collect_core_eos)

bgp_peers = [
    (host, peer_ip, peer_info)
    for host in results
    for peer_ip, peer_info in nr.inventory.hosts[host]["bgp_summary"].items()
]

ospf_neighbors = [
    (host, neighbor)
    for host in results
    for neighbor in nr.inventory.hosts[host]["ospf_neighbors"]
]

@pytest.mark.parametrize(
    "host,peer_ip,peer_info",
    bgp_peers,
    ids=[f"{host}-{peer_ip}" for host, peer_ip, _ in bgp_peers],
)
def test_bgp_peer_state_established(host, peer_ip, peer_info):
    assert peer_info["peerState"] == "Established", (
        f"{host}: BGP peer {peer_ip} is not Established, state: {peer_info['peerState']}"
    )

@pytest.mark.parametrize(
    "host,neighbor",
    ospf_neighbors,
    ids=[f"{host}-{n['routerId']}" for host, n in ospf_neighbors],
)
def test_ospf_adjacency_full(host, neighbor):
    assert neighbor["adjacencyState"] == "full", (
        f"{host}: OSPF neighbor {neighbor['routerId']} is not full"
    )

The two list comprehensions build the parametrize lists from the data stored on each host object. For BGP, we iterate over the hosts in the results and pull the peers dictionary that was stored during collection. For OSPF, we do the same but iterate over the list of neighbour entries instead.
The test functions themselves are clean and simple. Each one receives the relevant data as arguments, runs a single assert, and produces a clear failure message if something is wrong. All the complexity of connecting to devices, running commands, and parsing output is managed elsewhere, so the tests are easy to read and easy to add to.
pytest -v --tb=no test_net.py
=========================== test session starts ============================
collected 8 items
test_net.py::test_bgp_peer_state_established[r1-10.0.0.2] PASSED [ 12%]
test_net.py::test_bgp_peer_state_established[r1-10.0.0.3] PASSED [ 25%]
test_net.py::test_bgp_peer_state_established[r2-10.0.0.1] PASSED [ 37%]
test_net.py::test_bgp_peer_state_established[r2-10.0.0.3] PASSED [ 50%]
test_net.py::test_ospf_adjacency_full[r1-10.0.0.3] PASSED [ 62%]
test_net.py::test_ospf_adjacency_full[r1-10.0.0.2] PASSED [ 75%]
test_net.py::test_ospf_adjacency_full[r2-10.0.0.3] PASSED [ 87%]
test_net.py::test_ospf_adjacency_full[r2-10.0.0.1] PASSED [100%]
============================ 8 passed in 0.12s =============================

In the future, if you want to add more devices to the test, assuming they are Arista and belong to the core group, you just need to add them to the hosts.yml file. No changes to the collection logic or the test file are needed. Nornir will pick up the new devices automatically and the tests will scale accordingly.
Splitting Tests into Separate Files
At the moment, both BGP and OSPF tests are in a single file, which works fine when you only have a few tests. But as the test suite grows, it quickly becomes hard to manage. Splitting tests into separate files by protocol or function makes things much easier to navigate and maintain.
I split the tests into two separate files, test_bgp.py and test_ospf.py, inside the new test_network directory. I also moved the collection logic into a new dedicated helper_functions directory to keep it separate from the tests. This way, the test files only contain tests, and anything related to connecting to devices and collecting data lives in one place.
.
├── conftest.py
├── helper_functions
│   └── nornir_collect.py
├── nornir_files
│   ├── config.yml
│   ├── defaults.yml
│   ├── groups.yml
│   └── hosts.yml
└── test_network
    ├── test_bgp.py
    └── test_ospf.py
You will also notice an empty conftest.py at the root of the project. When pytest finds a conftest.py file, it automatically adds that directory to the Python path. This means both test_bgp.py and test_ospf.py can import from helper_functions without any issues. Without it, Python would not know where to look, and the imports would fail.
#test_bgp.py
import pytest
from nornir import InitNornir
from nornir.core.filter import F
from helper_functions.nornir_collect import collect_core_eos

nr = InitNornir(config_file="nornir_files/config.yml")
nr_filtered = nr.filter(F(groups__contains="eos") & F(groups__contains="core"))
results = nr_filtered.run(task=collect_core_eos)

bgp_peers = [
    (host, peer_ip, peer_info)
    for host in results
    for peer_ip, peer_info in nr.inventory.hosts[host]["bgp_summary"].items()
]

@pytest.mark.parametrize(
    "host,peer_ip,peer_info",
    bgp_peers,
    ids=[f"{host}-{peer_ip}" for host, peer_ip, _ in bgp_peers],
)
def test_bgp_peer_state_established(host, peer_ip, peer_info):
    assert peer_info["peerState"] == "Established", (
        f"{host}: BGP peer {peer_ip} is not Established, state: {peer_info['peerState']}"
    )

#test_ospf.py
import pytest
from nornir import InitNornir
from nornir.core.filter import F
from helper_functions.nornir_collect import collect_core_eos

nr = InitNornir(config_file="nornir_files/config.yml")
nr_filtered = nr.filter(F(groups__contains="eos") & F(groups__contains="core"))
results = nr_filtered.run(task=collect_core_eos)

ospf_neighbors = [
    (host, neighbor)
    for host in results
    for neighbor in nr.inventory.hosts[host]["ospf_neighbors"]
]

@pytest.mark.parametrize(
    "host,neighbor",
    ospf_neighbors,
    ids=[f"{host}-{n['routerId']}" for host, n in ospf_neighbors],
)
def test_ospf_adjacency_full(host, neighbor):
    assert neighbor["adjacencyState"] == "full", (
        f"{host}: OSPF neighbor {neighbor['routerId']} is not full"
    )

Since we run pytest from the root of the project, all paths are relative to that root. So nornir_files/config.yml resolves correctly because it sits directly in the root directory.
The import works for the same reason. When pytest finds the conftest.py at the root, it adds the root to the Python path. So when test_bgp.py imports collect_core_eos from helper_functions.nornir_collect, Python looks for a directory called helper_functions in the root, finds it, and imports the function successfully.
Now we can run individual test files directly. If you only want to run the BGP tests, you can point pytest at that specific file.
pytest -v --tb=no test_network/test_bgp.py
And if you only want the OSPF tests.
pytest -v --tb=no test_network/test_ospf.py
Or you can still run everything at once by pointing at the directory.
pytest -v --tb=no test_network
Where We Are Now
It is worth taking a step back and looking at how far we have come since the start of this series.
- We started with a single test function that connected to one Arista device, looped through all BGP peers, and marked the entire test as failed if any one peer was down.
- From there, we introduced parametrization, which gave each peer and each neighbour its own independent test. This meant a single failure no longer hid the results of everything else. We could see exactly which peer or neighbour failed and which ones passed.
- We then moved the connection and collection logic out of the test file and into its own file, keeping the tests clean and focused on assertions only.
- After that, we replaced the hardcoded list of IP addresses with Nornir, which gave us proper inventory management, parallel execution, and the ability to filter devices by group.
- We also split the tests into separate files, one for BGP and one for OSPF, and organised the project into a proper directory structure with a helper_functions directory for collection logic and a test_network directory for the tests. We also added an empty conftest.py at the root to make imports work correctly across the project.
In the next section, we will look at pytest fixtures, what they are and where they make sense in this kind of setup.
Pytest Fixtures
Right now, test_bgp.py and test_ospf.py each initialise Nornir and run the collection task independently. This means every time you run the full test suite, pytest connects to all the devices twice, once for each file. That is wasteful and will only get worse as you add more test files and more protocols.
One of the ways to fix this is to run the collection once and share the results across all test files. This is exactly the problem pytest fixtures are designed to solve.
A fixture is a function that pytest runs before your tests and makes the return value available to any test that needs it. Instead of each test file setting up its own Nornir instance, you define the setup once in a fixture, and pytest manages passing the data wherever it is needed. Fixtures also give you control over scope, meaning you can tell pytest to run the fixture once per test, once per file, or once for the entire test session.
New Tests
To properly demonstrate fixtures, we are going to use two new tests, test_version.py and test_mstp.py. These tests check the EOS version and whether spanning-tree mode mstp is present in the running config. The reason we are using these instead of the BGP and OSPF tests is that the new tests do not use parametrize, which makes them a much cleaner starting point for explaining fixtures. We will come back to parametrize and fixtures together in a later post.
For now, we are going to set the BGP and OSPF tests aside by renaming them so they no longer start with test_, which means pytest will not pick them up when running the tests. We also updated the collection function to run two new commands, show version and show running-config, and store the results on the host object as before. (I have removed the BGP and OSPF commands.)
#helper_functions/nornir_collect.py
def collect_core_eos(task):
    commands = ["show version", "show running-config"]
    node = collect_eapi_data(task)
    output = node.enable(commands)
    task.host["version"] = output[0]["result"]
    task.host["running_config"] = output[1]["result"]

We also updated the inventory to add a new group called functional and assigned r3 to it. For now, we are only running these tests against one device. In groups.yml we just define functional as an empty group.
#hosts.yml
---
r1:
  hostname: 192.168.200.101
  groups:
    - eos
    - core
r2:
  hostname: 192.168.200.102
  groups:
    - eos
    - core
r3:
  hostname: 192.168.200.103
  groups:
    - eos
    - functional

#groups.yml
---
eos:
  platform: eos
core: {}
functional: {}

We have two test files, test_version.py and test_mstp.py. Neither of them has any Nornir initialisation, collection logic, or imports related to device connectivity. The tests are clean and focused purely on assertions.
#test_version.py
def test_eos_version(nr_session):
    for host in nr_session.inventory.hosts.values():
        version_info = host["version"]
        assert version_info["version"] == "4.34.2F-43232954.4342F.1 (engineering build)", (
            f"{host.name}: EOS version is {version_info['version']}, expected 4.34.2F-43232954.4342F.1 (engineering build)"
        )

#test_mstp.py
def test_spanning_tree_mode_mstp(nr_session):
    for host in nr_session.inventory.hosts.values():
        running_config = host["running_config"]
        assert "spanning-tree mode mstp" in running_config["cmds"], (
            f"{host.name}: spanning-tree mode mstp not found in running config"
        )

The version test checks that the EOS version matches what we expect. The MSTP test does the same, checking that spanning-tree mode mstp is present in the running config.
Both test functions have nr_session as an argument. This is a fixture, and that is what we will look at next.
conftest.py
conftest.py is a special file that pytest looks for automatically. Any fixtures defined in it are available to all test files in the same directory and any subdirectories, without needing to import anything. This is what makes it the right place to put shared setups like the Nornir initialisation.
# conftest.py
import pytest
from nornir import InitNornir
from nornir.core.filter import F
from helper_functions.nornir_collect import collect_core_eos

@pytest.fixture(scope="session")
def nr_session():
    nr = InitNornir(config_file="nornir_files/config.yml")
    nr_filtered = nr.filter(F(groups__contains="eos") & F(groups__contains="functional"))
    nr_filtered.run(task=collect_core_eos)
    return nr_filtered

The fixture itself is a regular Python function decorated with @pytest.fixture. The scope="session" argument tells pytest to run this fixture only once for the entire test session, no matter how many test files or test functions use it. This is the key part. With session scope, it runs once, the result is cached, and every test that needs it gets the same object.
Inside the fixture, we initialise Nornir, filter the inventory down to hosts that belong to both the eos and functional groups, and run the collection task. We then return the filtered Nornir object. The return value is what pytest injects into any test function that has nr_session in its arguments.
So when test_version.py and test_mstp.py both declare nr_session as an argument, pytest runs the fixture once, caches the result, and passes the same object to both test files. The devices are only contacted once, the credentials are only fetched once, and both test files have access to the same collected data. This is exactly what we set out to achieve.
pytest starts
      │
      ▼
finds conftest.py
      │
      ▼
registers nr_session fixture (not run yet)
      │
      ▼
collects test_version.py and test_mstp.py
      │
      ▼
test_version.py::test_eos_version needs nr_session
      │
      ▼
fixture runs for the first time
  ├── InitNornir
  ├── filter inventory
  └── run collection task
      │
      ▼
result cached for the session
      │
      ├── test_version.py::test_eos_version ─── gets cached nr_session
      │
      └── test_mstp.py::test_spanning_tree_mode_mstp ─── gets same cached nr_session

Specifying Fixture Scope
You can use the --setup-show flag to see when fixtures are set up and torn down. In the output below, nr_session is set up once at the start, marked with S, which stands for session scope. Both tests use the same fixture instance, and it is only torn down at the very end after all tests have finished. This confirms that the collection runs only once for the entire session, regardless of how many test files use the fixture.
pytest --setup-show --tb=short test_network
======================= test session starts =======================
platform darwin -- Python 3.12.10, pytest-9.0.3, pluggy-1.6.0
rootdir: blog/pytest_automated_tests
configfile: pytest.ini
plugins: allure-pytest-2.15.3
collected 2 items
test_network/test_mstp.py
SETUP S nr_session
test_network/test_mstp.py::test_spanning_tree_mode_mstp (fixtures used: nr_session).
test_network/test_version.py
test_network/test_version.py::test_eos_version (fixtures used: nr_session).
TEARDOWN S nr_session
======================== 2 passed in 0.10s ========================

We specified scope="session" to run it only once for the entire test session. If we don't specify a scope, pytest defaults to function scope, marked with F in the output below. You can see that nr_session is now set up and torn down for every single test function. This means Nornir initialises twice, connects to the devices twice, and runs the collection twice. In our case, with just two tests, it may not seem like a big deal, but imagine running this against hundreds of devices with dozens of test files. The difference in execution time would be significant.
@pytest.fixture()
def nr_session():
    nr = InitNornir(config_file="nornir_files/config.yml")
    nr_filtered = nr.filter(F(groups__contains="eos") & F(groups__contains="functional"))
    nr_filtered.run(task=collect_core_eos)
    return nr_filtered

pytest --setup-show --tb=short test_network
======================= test session starts =======================
platform darwin -- Python 3.12.10, pytest-9.0.3, pluggy-1.6.0
rootdir: blog/pytest_automated_tests
configfile: pytest.ini
plugins: allure-pytest-2.15.3
collected 2 items
test_network/test_mstp.py
SETUP F nr_session
test_network/test_mstp.py::test_spanning_tree_mode_mstp (fixtures used: nr_session).
TEARDOWN F nr_session
test_network/test_version.py
SETUP F nr_session
test_network/test_version.py::test_eos_version (fixtures used: nr_session).
TEARDOWN F nr_session
======================== 2 passed in 0.18s ========================

Quick Recap
So this is where we are now. The project has a clear separation of concerns, and everything has its own place. We will continue building on this structure as we add more tests.
.
├── conftest.py              # shared fixtures, nornir initialisation lives here
├── helper_functions
│   └── nornir_collect.py    # collection logic, connects to devices and stores data on host
├── nornir_files
│   ├── config.yml           # nornir config, inventory paths and runner settings
│   ├── defaults.yml         # default credentials
│   ├── groups.yml           # group definitions
│   └── hosts.yml            # device inventory
└── test_network
    ├── atest_bgp.py         # bgp tests, prefixed with atest_ so pytest ignores them for now
    ├── atest_ospf.py        # ospf tests, prefixed with atest_ so pytest ignores them for now
    ├── test_mstp.py         # checks spanning-tree mode mstp in running config
    └── test_version.py      # checks eos version on each device

If you compare this to the BGP and OSPF tests we wrote earlier, the difference is that in those files, each test file had its own Nornir initialisation and collection logic at the top. Here, all of that is gone from the test files completely. The tests only contain assertions, and everything else is managed by the fixture in conftest.py.
Please note that in the new tests we only ran against one device. Without parametrize, if you run these tests against multiple devices, you still get a single test result for all of them. This means if one device fails, the test is marked as failed, but you have no visibility into which specific device caused the failure. This is not ideal, and it is the same problem we solved earlier with parametrize for the BGP and OSPF tests. We will address this properly once we cover how to combine fixtures and parametrize together in a later post.
Why Did We Use New Tests to Explain Fixtures?
You might be wondering why we introduced two new tests instead of using the BGP and OSPF tests we already had. The reason comes down to how parametrize works. Parametrize needs the data to be available at collection time, which is before pytest resolves any fixtures. This means you cannot feed fixture data directly into a parametrize decorator.
The new tests loop through the hosts inside the test function itself, which means they receive the fixture after collection, and everything works as expected. We will look at how to properly combine fixtures and parametrize in a later post.
If we were to move the Nornir initialisation into a fixture and try to use it with parametrize, it would look something like this. (It won't work though)
# test_bgp.py
import pytest

# nr_session is available automatically from conftest.py
# but we cannot use it here because parametrize runs at collection time
# and the fixture has not been resolved yet
bgp_peers = [
    (host, peer_ip, peer_info)
    for host in nr_session  # nr_session is just a function reference here, not the Nornir object
    for peer_ip, peer_info in nr_session.inventory.hosts[host]["bgp_summary"].items()
]

@pytest.mark.parametrize(
    "host,peer_ip,peer_info",
    bgp_peers,
    ids=[f"{host}-{peer_ip}" for host, peer_ip, _ in bgp_peers],
)
def test_bgp_peer_state_established(host, peer_ip, peer_info):
    assert peer_info["peerState"] == "Established", (
        f"{host}: BGP peer {peer_ip} is not Established, state: {peer_info['peerState']}"
    )

The problem is that nr_session here is just a function reference, not the actual Nornir object. The fixture has not been called yet because pytest has not started running tests. When pytest evaluates the parametrize decorator at collection time, it tries to iterate over nr_session and fails because it is a function, not an iterable. The fixture is only resolved later when the test functions actually run, which is too late for parametrize.
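The timing can be demonstrated without pytest at all. In this toy sketch, fake_parametrize and fake_fixture are made-up names standing in for the real decorator and fixture: decorator arguments are evaluated when the module is imported (collection time), while a fixture body only runs when something calls it later.

```python
# Decorator arguments run at import/collection time; a fixture body does not
# run until it is explicitly invoked -- which during collection never happens.
events = []

def fake_parametrize(argvalues):
    # stands in for @pytest.mark.parametrize
    events.append(f"decorator evaluated with {argvalues}")
    def decorator(func):
        return func
    return decorator

def fake_fixture():
    # stands in for nr_session -- never called during collection
    events.append("fixture ran")
    return ["10.0.0.1", "10.0.0.2"]

@fake_parametrize(argvalues="placeholder")
def test_something():
    pass

# the decorator has already fired; the fixture has not
print(events)  # ['decorator evaluated with placeholder']
```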
pytest starts
      │
      ▼
COLLECTION PHASE
      │
      ├── finds conftest.py
      │     │
      │     └── registers nr_session fixture (not run yet, just a function reference)
      │
      └── finds test_bgp.py
            │
            └── evaluates @pytest.mark.parametrize
                  │
                  └── needs bgp_peers list NOW
                        │
                        └── tries to use nr_session ─── FAILS
                            fixture not resolved yet
      │
      ▼
EXECUTION PHASE (never reached)
      │
      └── fixtures resolved here ─── too late for parametrize

Closing Up
Let us wrap up this post here. We covered a lot of ground, from the basics of pytest fixtures to restructuring the project into a cleaner directory structure. As always, the code we have written is not the final version and we will continue to improve it as the series progresses.


