Containerlab, the future of your virtual network lab (Part 2)

[Image: containerlab demonstration]

In the previous post, we learned how to install and set up containerlab. Now it is time to add some networking images and build our topology.

In this instance, I will be using the Arista cEOS image, as I do not have access to Nokia or Juniper images, but it should still be a good representation of how we can add images and build lab topologies.

Adding images to Containerlab

Let’s start by learning how to add an image to containerlab. As mentioned above, we will be adding the Arista cEOS image, which you can download from Arista’s website (you will need an account to download it; if you do not have one, you can create one for free).
The version I will be using is cEOS-lab-4.25.4M.

Once you have downloaded the image, we will add it to our local Docker image repository. To do this, open a terminal, move to the location of the downloaded tar.xz file, and enter the following commands:

# In my case, the files are under the /home/$USER/Downloads directory
cd Downloads

sudo docker import cEOS-lab-4.25.4M.tar.xz ceos:4.25.4M

This will take a couple of minutes as Docker imports the archive and builds the image. When the process is complete, you should see a sha256 hash in the terminal that looks like this:

[Image: output after importing the cEOS image into the Docker repository]
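
If you want to double-check that the import worked, you can also list your local images; the new ceos image should show up with the tag we gave it:

# List the local images whose repository name matches ceos
sudo docker images ceos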

Why are we adding the cEOS image to Docker, you may ask?

Well, containerlab uses the local Docker image repository to build the actual containers that run in our lab topology.

This is one of the reasons why you can run any type of container on containerlab, from plain Linux distributions to images you build yourself. On top of that, if an image is not stored locally, containerlab will automatically download any image you reference from Docker’s public registry (Docker Hub).
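
As a quick illustration (plain Docker here, nothing containerlab-specific), you could pre-pull the CentOS client image we will use later instead of letting containerlab fetch it during deployment:

# Pull the centos:8 image from Docker Hub ahead of time (optional)
sudo docker pull centos:8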

Building our first lab

Now that we have successfully added the cEOS Docker image to our system, it’s time to build a lab. For this, we will need to create a YAML file that defines our nodes and how they connect to each other.

YAML is a data serialization language that lets us define objects and structures for our code to follow, but in a human-friendly manner.

If you are not familiar with YAML and want to learn more about it, you can find a lot of documentation at https://www.tutorialspoint.com/yaml/index.htm
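
Just as a quick taste before we get to the real file, here is a minimal, made-up YAML snippet showing the two structures we will rely on: mappings (key: value pairs, nested by indentation) and lists (lines starting with a dash):

# A mapping containing a nested mapping and a list (illustrative only)
lab:
  name: my-example
  devices:
    - router1
    - router2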

Basic concepts we need to understand

Before we go and build our topology and lab, here are some basic concepts we need to know:

  1. As mentioned above, our topology definition will be written in YAML format
  2. Containerlab uses a file naming structure that we need to follow for the topology to be read and created. This structure is any_name_here.clab.yml; note that the required part is the .clab.yml suffix, and anything in front of it is fine
  3. There are specific parameters that we need to pass inside the topology file for everything to work, and we will see those parameters in the example file
  4. You can have as many nodes as your system supports inside one configuration file

Building the lab for real

Okay, we have read a lot and done little, so let’s build our topology.
Here is what I want to accomplish with this:

[Diagram: how the cEOS and CentOS containers will be connected to each other]

With this topology in mind, let’s create the YAML configuration file and then go over it.
I will be working under /home/$USER/Documents/containerlab-demo/ for this example, so to create and edit the file I entered the following commands:

touch jtechclass-clab-example1.clab.yml

vim jtechclass-clab-example1.clab.yml

Then I have entered the following configuration:

name: jtechclass-containerlab-example1

topology:
  nodes:
    ceos1:
      kind: ceos
      image: ceos:4.25.4M
    ceos2:
      kind: ceos
      image: ceos:4.25.4M
    centos1:
      kind: linux
      image: centos:8
    centos2:
      kind: linux
      image: centos:8

  links:
    - endpoints: ["ceos1:eth1","ceos2:eth1"]
    - endpoints: ["ceos1:eth2","centos1:eth1"]
    - endpoints: ["ceos2:eth2","centos2:eth1"]

This is a very simplistic topology, but it will do for our example. So what is all this nonsense we are seeing up there?
Well, for containerlab to create all the nodes and connect them, we need to describe them in a format that it understands and expects; for that reason, we pass the parameters that are necessary and required.

Here is the same file we have above, but with some comments:

# Name is a required directive that allows containerlab to separate multiple labs running at once from each other
name: jtechclass-containerlab-example1

# The topology key here is the main aspect of the whole thing: it tells containerlab what devices to create and how to link them together
topology:
  # The nodes directive is what we use to tell containerlab what our nodes will be named and what images and kinds of nodes they are
  nodes:
    # This is our first node: it will be named ceos1, it is of kind ceos, and it references the image ceos:4.25.4M
    # you can see we have 2 instances of this kind of node defined here
    ceos1:
      kind: ceos
      image: ceos:4.25.4M
    ceos2:
      kind: ceos
      image: ceos:4.25.4M
    # This is how we declare our client nodes: basically we are telling containerlab to go to
    # hub.docker.com, download the centos:8 image, and then create the container locally (we also have 2 instances of this one)
    centos1:
      kind: linux
      image: centos:8
    centos2:
      kind: linux
      image: centos:8

  # links sits right under topology, and it’s what allows us to connect our containers to each other
  links:
    - endpoints: ["ceos1:eth1","ceos2:eth1"]
    - endpoints: ["ceos1:eth2","centos1:eth1"]
    - endpoints: ["ceos2:eth2","centos2:eth1"]

For the links side, as you can notice, we pass “nodeName:interface” and then use a comma to signal that it will connect to the other “nodeName:interface”. That way it looks the same as in our diagram above.
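
And if you ever want another wire, you just append one more endpoints entry under links. For example (hypothetical, not part of our lab), a direct second link between the two cEOS nodes on their third interfaces would look like this:

    - endpoints: ["ceos1:eth3","ceos2:eth3"]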

Deploying our lab

After we have completed building our topology and lab definition, it is time to deploy it and get it running. To do that, we run the following command, which should produce output like this:

sudo clab deploy -t jtechclass-clab-example1.clab.yml
INFO[0000] Parsing & checking topology file: jtechclass-clab-example1.clab.yml
INFO[0000] Pulling docker.io/library/centos:8 Docker image
INFO[0008] Done pulling docker.io/library/centos:8
INFO[0008] Creating lab directory: /home/container/Documents/containerlab-demo/clab-jtechclass-containerlab-example1
INFO[0008] Creating docker network: Name='clab', IPv4Subnet='172.20.20.0/24', IPv6Subnet='2001:172:20:20::/64', MTU='1500'
INFO[0008] Creating container: centos2
INFO[0008] Creating container: ceos1
INFO[0008] Creating container: ceos2
INFO[0008] Creating container: centos1
INFO[0010] Restarting 'ceos2' node
INFO[0010] Restarting 'ceos1' node
INFO[0010] Creating virtual wire: ceos1:eth1 <--> ceos2:eth1
INFO[0010] Creating virtual wire: ceos2:eth2 <--> centos2:eth1
INFO[0010] Creating virtual wire: ceos1:eth2 <--> centos1:eth1
INFO[0010] Writing /etc/hosts file
+---+-----------------------------------------------+--------------+--------------+-------+-------+---------+----------------+----------------------+
| # |                     Name                      | Container ID |    Image     | Kind  | Group |  State  |  IPv4 Address  |     IPv6 Address     |
+---+-----------------------------------------------+--------------+--------------+-------+-------+---------+----------------+----------------------+
| 1 | clab-jtechclass-containerlab-example1-centos1 | 2a9de7d94b11 | centos:8     | linux |       | running | 172.20.20.3/24 | 2001:172:20:20::3/64 |
| 2 | clab-jtechclass-containerlab-example1-centos2 | 5f36b4ee4a72 | centos:8     | linux |       | running | 172.20.20.5/24 | 2001:172:20:20::5/64 |
| 3 | clab-jtechclass-containerlab-example1-ceos1   | ea8fd430baaa | ceos:4.25.4M | ceos  |       | running | 172.20.20.4/24 | 2001:172:20:20::4/64 |
| 4 | clab-jtechclass-containerlab-example1-ceos2   | bdfffa69e1d6 | ceos:4.25.4M | ceos  |       | running | 172.20.20.2/24 | 2001:172:20:20::2/64 |
+---+-----------------------------------------------+--------------+--------------+-------+-------+---------+----------------+----------------------+

If you received that output, then we are on the right path: all our containers were created and are running properly. Now, there are IPv4 Address and IPv6 Address columns in that table. Where did those come from?
By default, containerlab creates a new Docker network with a subnet and connects each container to it for out-of-band management. These IPv4 and IPv6 addresses exist only on the management side of our actual devices, as we will see in a minute.
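
If you are curious, you can look at that management network yourself with plain Docker tooling, and you can re-print the container table from above at any time with containerlab's inspect command:

# Show the management network containerlab created (named 'clab', as seen in the deploy log)
sudo docker network inspect clab

# Re-display the table of running lab containers
sudo clab inspect -t jtechclass-clab-example1.clab.yml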

Checking our lab topology

Containerlab offers a great tool to visualize the way we have connected our containers. To use it, let us enter the following command:

# we are going to call the clab command with the graph parameter and pass the -t flag to specify our topology file
# This is going to create a local web server that you can check at http://localhost:50080
sudo clab graph -t jtechclass-clab-example1.clab.yml
INFO[0000] Parsing & checking topology file: jtechclass-clab-example1.clab.yml
INFO[0000] Listening on :50080...

If you go to http://localhost:50080 you should see something like this:

[Image: containerlab’s autogenerated graphical topology]

Is this not great? I mean, other than reading this long post, it takes very little time to build this topology. But will it actually pass traffic?
Let’s figure that out.

Connecting to the lab devices

After we have created and deployed our lab topology, it is time to check whether we can connect to our devices and configure them. After all, what’s the point of a lab if I cannot enter commands on the CLI, right?
We can enter the cEOS containers’ Cli and the CentOS containers’ bash shell with the following commands:

# In this first example I'm using the name of the container, which you can see on the table above after we entered the clab deploy command

sudo docker exec -it clab-jtechclass-containerlab-example1-ceos1 Cli
# This command should drop you at the EOS CLI prompt (>); en then takes you into enable mode (#)
ceos1>
ceos1>en
ceos1#conf t
ceos1(config)#exit
ceos1#show int status
Port       Name   Status       Vlan     Duplex Speed  Type            Flags Encapsulation
Et1               connected    1        full   unconf EbraTestPhyPort
Et2               connected    1        full   unconf EbraTestPhyPort
Ma0               connected    routed   a-full a-1G   10/100/1000

ceos1#show lldp neighbors
Last table change time   : 0:15:13 ago
Number of table inserts  : 2
Number of table deletes  : 0
Number of table drops    : 0
Number of table age-outs : 0

Port          Neighbor Device ID       Neighbor Port ID    TTL
---------- ------------------------ ---------------------- ---
Et1           ceos2                    Ethernet1           120
Ma0           ceos2                    Management0         120

Now let us configure some IPs on each cEOS device and each CentOS device, and then try to ping from one client to the other.

# we are going to connect to the Arista container and add the following config
sudo docker exec -it clab-jtechclass-containerlab-example1-ceos1 Cli
ceos1>en
ceos1#conf t
ceos1(config)#ip routing
ceos1(config)#interface ethernet1
ceos1(config-if-Et1)#no switchport
ceos1(config-if-Et1)#ip address 10.10.10.0/31
ceos1(config-if-Et1)#no shut
ceos1(config-if-Et1)#exit
ceos1(config)#ip route 20.20.20.0/24 10.10.10.1
ceos1(config)#interface ethernet2
ceos1(config-if-Et2)#description Connection to Centos1 Client
ceos1(config-if-Et2)#no switchport
ceos1(config-if-Et2)#ip address 10.100.1.1/24
ceos1(config-if-Et2)#exit
ceos1(config)#exit

# Now we are going to do the same with the second Arista cEOS container
container@ubuntu:~/Documents/containerlab-demo$ sudo docker exec -it clab-jtechclass-containerlab-example1-ceos2 Cli
ceos2>en
ceos2#conf t
ceos2(config)#ip routing
ceos2(config)#interface eth1
ceos2(config-if-Et1)#description cEOS1 Eth1
ceos2(config-if-Et1)#no switchport
ceos2(config-if-Et1)#ip address 10.10.10.1/31
ceos2(config-if-Et1)#no shut
ceos2(config-if-Et1)#exit
ceos2(config)#interface ethernet2
ceos2(config-if-Et2)#description Centos2 Client
ceos2(config-if-Et2)#no switchport
ceos2(config-if-Et2)#ip address 20.20.20.1/24
ceos2(config-if-Et2)#exit
ceos2(config)#ip route 10.100.1.0/24 10.10.10.0
ceos2(config)#exit
ceos2#exit

# It is time to configure our CentOS devices, let's start with centos1
container@ubuntu:~/Documents/containerlab-demo$ sudo docker exec -it clab-jtechclass-containerlab-example1-centos1 bash
[root@centos1 /]# ip address add 10.100.1.2/24 dev eth1
[root@centos1 /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:14:14:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.20.20.3/24 brd 172.20.20.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:172:20:20::3/64 scope global nodad
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe14:1403/64 scope link
       valid_lft forever preferred_lft forever
17: eth1@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue state UP group default
    link/ether aa:c1:ab:bd:fb:83 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 10.100.1.2/24 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a8c1:abff:febd:fb83/64 scope link
       valid_lft forever preferred_lft forever
[root@centos1 /]# ip route
default via 172.20.20.1 dev eth0
10.100.1.0/24 dev eth1 proto kernel scope link src 10.100.1.2
172.20.20.0/24 dev eth0 proto kernel scope link src 172.20.20.3
[root@centos1 /]# ip route del default
[root@centos1 /]# ip route add default via 10.100.1.1 dev eth1
[root@centos1 /]# ip route
default via 10.100.1.1 dev eth1
10.100.1.0/24 dev eth1 proto kernel scope link src 10.100.1.2
172.20.20.0/24 dev eth0 proto kernel scope link src 172.20.20.3
[root@centos1 /]# exit

# Now let's configure our second container
container@ubuntu:~/Documents/containerlab-demo$ sudo docker exec -it clab-jtechclass-containerlab-example1-centos2 bash
[root@centos2 /]# ip address add 20.20.20.2/24 dev eth1
[root@centos2 /]# ip route del default
[root@centos2 /]# ip route add default via 20.20.20.1 dev eth1
[root@centos2 /]# exit
exit

So what did we accomplish here?

  1. Configured a point-to-point connection between the two cEOS containers with a 10.10.10.0/31 subnet
  2. Configured the second interface on each cEOS container with a /24 to connect the Centos devices with
  3. Added a static Route on each cEOS container that points to the subnet that connects the Centos devices
  4. Configured a new IP address on each CentOS device on interface eth1 (the interface connected to cEOS Ethernet2 on each instance)
  5. Removed the default route that pointed to the docker management network created by containerlab
  6. Created a new default route via the Eth1 interface which connects to the cEOS containers on Ethernet2
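
Before testing end-to-end connectivity, it does not hurt to sanity-check the routers themselves. For example, from ceos1 we could verify the point-to-point link and the routing table (commands only, output omitted for brevity):

sudo docker exec -it clab-jtechclass-containerlab-example1-ceos1 Cli
ceos1#ping 10.10.10.1
ceos1#show ip route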

So if we now log into the centos1 container and try to ping the centos2 container’s IP, it should work. Let’s test that:

sudo docker exec -it clab-jtechclass-containerlab-example1-centos1 bash
[root@centos1 /]# ping 20.20.20.2
PING 20.20.20.2 (20.20.20.2) 56(84) bytes of data.
64 bytes from 20.20.20.2: icmp_seq=1 ttl=62 time=0.180 ms
64 bytes from 20.20.20.2: icmp_seq=2 ttl=62 time=0.130 ms
64 bytes from 20.20.20.2: icmp_seq=3 ttl=62 time=0.148 ms
64 bytes from 20.20.20.2: icmp_seq=4 ttl=62 time=0.130 ms
64 bytes from 20.20.20.2: icmp_seq=5 ttl=62 time=0.149 ms
64 bytes from 20.20.20.2: icmp_seq=6 ttl=62 time=0.108 ms
64 bytes from 20.20.20.2: icmp_seq=7 ttl=62 time=0.161 ms
64 bytes from 20.20.20.2: icmp_seq=8 ttl=62 time=0.164 ms
64 bytes from 20.20.20.2: icmp_seq=9 ttl=62 time=0.146 ms
64 bytes from 20.20.20.2: icmp_seq=10 ttl=62 time=0.132 ms
64 bytes from 20.20.20.2: icmp_seq=11 ttl=62 time=0.133 ms
64 bytes from 20.20.20.2: icmp_seq=12 ttl=62 time=0.184 ms
64 bytes from 20.20.20.2: icmp_seq=13 ttl=62 time=0.159 ms
64 bytes from 20.20.20.2: icmp_seq=14 ttl=62 time=0.154 ms
64 bytes from 20.20.20.2: icmp_seq=15 ttl=62 time=0.157 ms
64 bytes from 20.20.20.2: icmp_seq=16 ttl=62 time=0.213 ms
64 bytes from 20.20.20.2: icmp_seq=17 ttl=62 time=0.141 ms
64 bytes from 20.20.20.2: icmp_seq=18 ttl=62 time=0.161 ms
64 bytes from 20.20.20.2: icmp_seq=19 ttl=62 time=0.160 ms
64 bytes from 20.20.20.2: icmp_seq=20 ttl=62 time=0.144 ms
64 bytes from 20.20.20.2: icmp_seq=21 ttl=62 time=0.152 ms
64 bytes from 20.20.20.2: icmp_seq=22 ttl=62 time=0.122 ms
64 bytes from 20.20.20.2: icmp_seq=23 ttl=62 time=0.141 ms
64 bytes from 20.20.20.2: icmp_seq=24 ttl=62 time=0.176 ms
64 bytes from 20.20.20.2: icmp_seq=25 ttl=62 time=0.151 ms
64 bytes from 20.20.20.2: icmp_seq=26 ttl=62 time=0.134 ms
64 bytes from 20.20.20.2: icmp_seq=27 ttl=62 time=0.146 ms
^C
--- 20.20.20.2 ping statistics ---
27 packets transmitted, 27 received, 0% packet loss, time 636ms
rtt min/avg/max/mdev = 0.108/0.150/0.213/0.026 ms

And we can see that this was successful!
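
If you ever want to confirm which interface and next hop the client picked for this traffic, iproute2's route lookup comes in handy (run from inside centos1; your output may vary):

# Ask the kernel which route it would use for this destination
ip route get 20.20.20.2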

How do we stop it?

We do not want to leave all these containers running on our system indefinitely. So how do we shut down our lab?

It is very simple; all we need to do is run the following command:

sudo clab destroy -t jtechclass-clab-example1.clab.yml

When we enter this command, containerlab will stop the containers and leave all the configurations we created in place, in the folder where our topology file is located.

If we want to remove all the configuration from the directory and only leave our topology file, then use the --cleanup option on the same command.

Example:

# Notice that here we are not using the full word destroy, but just des; this works as well
sudo clab des -t jtechclass-clab-example1.clab.yml --cleanup
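
As a side note, recent containerlab versions can also tear down every running lab at once with the --all flag; check clab destroy --help on your version before relying on it:

# Destroy all containerlab labs on this host (verify flag support on your version)
sudo clab destroy --all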

Where to go from here?

In the next post (Part 3) we will talk about how we can have this configuration loaded on the Arista devices at startup, and how we can make the CentOS containers run commands for us after they start. That way we will not need to bother with copying, pasting, and entering the same configurations over and over again, so stay tuned for that.
Thanks for reading
