Juniper vJunos-router in Containerlab
If you follow me or read my blog, you probably know I'm a big advocate of Containerlab. I've been using it for over two years now and I absolutely love it. Why? Because it's open source, it has an amazing community behind it (thank you again, Roman), and labs are defined using simple YAML files that are easy to share and reuse.
So far, I've used Cisco IOL, Arista EOS, and Palo Alto VM in Containerlab. And finally, the time came to try Juniper. I decided to test the Juniper vJunos-router, a virtualized MX router. It's a single-VM version of vMX that doesn't require any feature licenses and is meant for lab or testing purposes. You can even download the image directly from Juniper's website without needing an account. Thank you, Juniper (Cisco, please take note). In this post, I'll show you how to run Juniper vJunos-router in Containerlab.
Prerequisites
This post assumes you're somewhat familiar with Containerlab and already have it installed. If you're new, feel free to check out my introductory blog below. Containerlab also has great documentation on how to use vJunos-router, so be sure to check that out as well.
I'm running Containerlab on Ubuntu 22.04 Server, but the process should be the same on any other distro.
Downloading vJunos-router Image
The vJunos-router is a virtual Juniper router that runs Junos OS and is designed specifically for lab and testing purposes. It installs as a single VM on an x86 server and behaves just like a physical router in terms of configuration and management. Built using vMX as a reference, it includes a single Routing Engine and a single Flexible PIC Concentrator (FPC), with a bandwidth limit of 100 Mbps aggregated across all interfaces.
As I mentioned, head over to the Juniper portal and download the QCOW2 image. I went with the latest version, 24.2.
Something I noticed is that even though it's a QCOW2 file, it downloads with a .dms extension. You can simply rename the file to .qcow2, so, for example, rename vJunos-router-24.2R1-S2.5.dms to vJunos-router-24.2R1-S2.5.qcow2.
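For reference, the rename is a single mv. In this sketch, the touch line is just a placeholder standing in for the actual download so the snippet is self-contained, and the version string matches the build I grabbed; adjust it to whatever you downloaded.

```shell
# Filename matches the 24.2R1-S2.5 build used in this post.
V=vJunos-router-24.2R1-S2.5
touch "$V.dms"            # placeholder for the real downloaded file
mv "$V.dms" "$V.qcow2"    # same bytes, correct extension
ls -l "$V.qcow2"
```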
Preparing the Image
Next, you'll need to clone the vrnetlab repo to convert this file into a Docker image; this is the common method for running VM images in Containerlab. Vrnetlab packages a regular VM inside a container and makes it runnable as if it were a container image. To make this work, vrnetlab provides a set of scripts that build the container image out of a user-provided VM disk.
suresh@containerlab-01:~$ git clone https://github.com/hellt/vrnetlab.git
Cloning into 'vrnetlab'...
remote: Enumerating objects: 5798, done.
remote: Counting objects: 100% (1824/1824), done.
remote: Compressing objects: 100% (462/462), done.
remote: Total 5798 (delta 1581), reused 1383 (delta 1362), pack-reused 3974
Receiving objects: 100% (5798/5798), 2.48 MiB | 10.21 MiB/s, done.
Resolving deltas: 100% (3467/3467), done.
Once cloned, navigate to the vrnetlab/vjunosrouter directory and move the file you just downloaded into that folder.
suresh@containerlab-01:~$ cd vrnetlab/vjunosrouter/
suresh@containerlab-01:~/vrnetlab/vjunosrouter$ ls
docker Makefile README.md vJunos-router-24.2R1-S2.5.qcow2
Inside the folder, just run make.
suresh@containerlab-01:~/vrnetlab/vjunosrouter$ make
for IMAGE in vJunos-router-24.2R1-S2.5.qcow2; do \
echo "Making $IMAGE"; \
make IMAGE=$IMAGE docker-build; \
make IMAGE=$IMAGE docker-clean-build; \
done
Making vJunos-router-24.2R1-S2.5.qcow2
make[1]: Entering directory '/home/suresh/vrnetlab/vjunosrouter'
It will take a few minutes to complete. Once it's done, run docker images and you should see the vjunos-router image listed.
suresh@containerlab-01:~/vrnetlab/vjunosrouter$ docker images
REPOSITORY TAG IMAGE ID SIZE
vrnetlab/juniper_vjunos-router 24.2R1-S2.5 79cdcd6e1ca6 4.17GB
Starting the Lab
From this point on, just create a lab the way you normally would. For this example, I'm creating a pair of routers; here's the lab topology.
---
name: junos-labs
mgmt:
network: mgmt
ipv4-subnet: 192.168.100.0/24
topology:
kinds:
juniper_vjunosrouter:
image: vrnetlab/juniper_vjunos-router:24.2R1-S2.5
nodes:
vmx-01:
kind: juniper_vjunosrouter
mgmt-ipv4: 192.168.100.121
vmx-02:
kind: juniper_vjunosrouter
mgmt-ipv4: 192.168.100.122
links:
- endpoints: ["vmx-01:ge-0/0/0", "vmx-02:ge-0/0/0"]
The interface naming convention is ge-0/0/X (you can also use et-0/0/X or xe-0/0/X; all are accepted), where X denotes the port number. So, ge-0/0/0 is the first available data port, ge-0/0/1 is the second, and so on.
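As a sketch (not part of my lab), wiring up a hypothetical second back-to-back link on the next port of each router would just mean another endpoints entry following the same naming:

```yaml
links:
  - endpoints: ["vmx-01:ge-0/0/0", "vmx-02:ge-0/0/0"]   # first data port on each
  - endpoints: ["vmx-01:ge-0/0/1", "vmx-02:ge-0/0/1"]   # hypothetical second link
```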
Once you have the lab defined, just run the lab with the usual Containerlab deploy command.
sudo containerlab deploy -t junos.yml
It may take a few minutes for the routers to come online, after which you should be able to SSH in using the default admin/admin@123 credentials. I'm just setting up a simple point-to-point link, and as you can see, I can ping between the routers.
admin@vmx-01> configure
Entering configuration mode
[edit]
admin@vmx-01# set interfaces ge-0/0/0 unit 0 family inet address 10.15.15.1/24
admin@vmx-01# show | compare
[edit interfaces]
+ ge-0/0/0 {
+ unit 0 {
+ family inet {
+ address 10.15.15.1/24;
+ }
+ }
+ }
admin@vmx-02> configure
Entering configuration mode
[edit]
admin@vmx-02# set interfaces ge-0/0/0 unit 0 family inet address 10.15.15.2/24
[edit]
admin@vmx-02# show | compare
[edit interfaces]
+ ge-0/0/0 {
+ unit 0 {
+ family inet {
+ address 10.15.15.2/24;
+ }
+ }
+ }
[edit]
admin@vmx-01> ping 10.15.15.2
PING 10.15.15.2 (10.15.15.2): 56 data bytes
64 bytes from 10.15.15.2: icmp_seq=0 ttl=64 time=43.992 ms
64 bytes from 10.15.15.2: icmp_seq=1 ttl=64 time=2.160 ms
64 bytes from 10.15.15.2: icmp_seq=2 ttl=64 time=3.023 ms
^C
--- 10.15.15.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 2.160/16.392/43.992/19.520 ms
Containerlab VSCode Extension
If you have the Containerlab VSCode extension, you can also deploy the lab from there. Using the extension, you can generate topology diagrams and more. I highly recommend giving the extension a try.
Resource Usage
I allocated 4 vCPU cores and 16 GB of RAM to my Containerlab VM, which runs on Proxmox. When I ran two vJunos-routers, the resources were almost fully used, as you can see in the screenshots. I checked both Proxmox and htop
for usage stats. So if you're planning to run larger labs, make sure to allocate enough resources to avoid performance issues.
That being said, I didn’t feel any lag on the VM or notice any slowness while working. I didn’t spend a huge amount of time on it, but if I come across anything worth noting, I’ll update it here.
vJunos-router Startup Config
It’s possible to make vJunos-router nodes boot with a user-defined startup config instead of the built-in one. Using the startup-config property of the node or kind, you can set the path to a config file that will be mounted into the container and used at startup.
What I did was copy the config from vmx-01, change the IP of ge-0/0/0.0 to 10.15.15.3/24 as shown in the snippet below, and save the config as vmx-01-config.txt in the same directory as the topology file.
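As a minimal sketch, the relevant interfaces stanza in vmx-01-config.txt looks like this after the address change (the actual saved file also carries the rest of vmx-01's configuration, such as system login and management settings):

```text
interfaces {
    ge-0/0/0 {
        unit 0 {
            family inet {
                address 10.15.15.3/24;
            }
        }
    }
}
```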
I then destroyed the lab with the --cleanup flag, which removes the containers and their configs.
sudo containerlab destroy -t junos.yml --cleanup
After that, I modified the topology file to point vmx-01 to this startup config and redeployed the lab.
---
name: junos-labs
mgmt:
network: mgmt
ipv4-subnet: 192.168.100.0/24
topology:
kinds:
juniper_vjunosrouter:
image: vrnetlab/juniper_vjunos-router:24.2R1-S2.5
nodes:
vmx-01:
kind: juniper_vjunosrouter
mgmt-ipv4: 192.168.100.121
startup-config: vmx-01-config.txt
vmx-02:
kind: juniper_vjunosrouter
mgmt-ipv4: 192.168.100.122
links:
- endpoints: ["vmx-01:ge-0/0/0", "vmx-02:ge-0/0/0"]
As expected, vmx-01 booted with the custom config. This is another useful feature if you want to share a lab with someone: just back up the config and include it with the lab files. How cool is that?
admin@vmx-01> show interfaces terse
Interface Admin Link Proto Local Remote
ge-0/0/0 up up
ge-0/0/0.0 up up inet 10.15.15.3/24
multiservice
lc-0/0/0 up up
lc-0/0/0.32769 up up vpls
pfe-0/0/0 up up
pfe-0/0/0.16383 up up inet
inet6
pfh-0/0/0 up up
pfh-0/0/0.16383 up up inet
Closing Up
That’s a quick look at how you can get vJunos-router running in Containerlab. It’s a simple and clean setup once you get the image converted and the lab defined. I also tested running this on macOS with the M3 Pro chip, but it didn’t work; I ran into the following error: Error: CPU virtualization support is required for node. If you find a way to run it, please let me know in the comments.