Demo on VLAB

Goals

The goal of this demo is to show how to create VPCs, attach and peer them, and test connectivity between the servers. The examples are based on the VLAB topology described in the Running VLAB section.

VLAB Topology

Spine-Leaf

The topology contains 2 spines, 2 ESLAG leaves, 1 orphan leaf, and a gateway as shown below:

graph TD

%% Style definitions
classDef gateway fill:#FFF2CC,stroke:#999,stroke-width:1px,color:#000
classDef spine   fill:#F8CECC,stroke:#B85450,stroke-width:1px,color:#000
classDef leaf    fill:#DAE8FC,stroke:#6C8EBF,stroke-width:1px,color:#000
classDef server  fill:#D5E8D4,stroke:#82B366,stroke-width:1px,color:#000
classDef mclag   fill:#F0F8FF,stroke:#6C8EBF,stroke-width:1px,color:#000
classDef eslag   fill:#FFF8E8,stroke:#CC9900,stroke-width:1px,color:#000
classDef external fill:#FFCC99,stroke:#D79B00,stroke-width:1px,color:#000
classDef hidden fill:none,stroke:none
classDef legendBox fill:white,stroke:#999,stroke-width:1px,color:#000

%% Network diagram
subgraph Gateways[" "]
    direction LR
    Gateway_1["gateway-1"]
end

subgraph Spines[" "]
    direction LR
    subgraph Spine_01_Group [" "]
        direction TB
        Spine_01["spine-01<br>spine"]
    end
    subgraph Spine_02_Group [" "]
        direction TB
        Spine_02["spine-02<br>spine"]
    end
end

subgraph Leaves[" "]
    direction LR
    subgraph Eslag_1 ["eslag-1"]
        direction LR
        Leaf_01["leaf-01<br>server-leaf"]
        Leaf_02["leaf-02<br>server-leaf"]
    end

    Leaf_03["leaf-03<br>server-leaf"]
end

subgraph Servers[" "]
    direction TB
    Server_03["server-03"]
    Server_01["server-01"]
    Server_02["server-02"]
    Server_04["server-04"]
    Server_05["server-05"]
    Server_06["server-06"]
end

%% Connections

%% Gateway connections
Gateway_1 ---|"enp2s2↔E1/7"| Spine_02
Gateway_1 ---|"enp2s1↔E1/7"| Spine_01

%% Spine_01 -> Leaves
Spine_01 ---|"E1/4↔E1/1<br>E1/5↔E1/2"| Leaf_01
Spine_01 ---|"E1/6↔E1/4<br>E1/5↔E1/3"| Leaf_02
Spine_01 ---|"E1/4↔E1/5<br>E1/5↔E1/6"| Leaf_03

%% Spine_02 -> Leaves
Spine_02 ---|"E1/7↔E1/3<br>E1/8↔E1/4"| Leaf_02
Spine_02 ---|"E1/6↔E1/1<br>E1/7↔E1/2"| Leaf_01
Spine_02 ---|"E1/6↔E1/5<br>E1/7↔E1/6"| Leaf_03

%% Leaves -> Servers
Leaf_01 ---|"enp2s1↔E1/2"| Server_02
Leaf_01 ---|"enp2s1↔E1/1"| Server_01
Leaf_01 ---|"enp2s1↔E1/3"| Server_03

Leaf_02 ---|"enp2s1↔E1/3<br>enp2s2↔E1/4"| Server_04
Leaf_02 ---|"enp2s2↔E1/2"| Server_02
Leaf_02 ---|"enp2s2↔E1/1"| Server_01

Leaf_03 ---|"enp2s1↔E1/2<br>enp2s2↔E1/3"| Server_06
Leaf_03 ---|"enp2s1↔E1/1"| Server_05

%% Mesh connections

%% External connections

subgraph Legend["Network Connection Types"]
    direction LR
    %% Create invisible nodes for the start and end of each line
    L1(( )) --- |"Fabric Links"| L2(( ))
    L5(( )) --- |"Bundled Server Links (x2)"| L6(( ))
    L7(( )) --- |"Unbundled Server Links"| L8(( ))
    L9(( )) --- |"ESLAG Server Links"| L10(( ))
    L11(( )) --- |"Gateway Links"| L12(( ))
    P1(( )) --- |"Label Notation: Downstream ↔ Upstream"| P2(( ))
end

class Gateway_1 gateway
class Spine_01,Spine_02 spine
class Leaf_01,Leaf_02,Leaf_03 leaf
class Server_03,Server_01,Server_02,Server_04,Server_05,Server_06 server
class Eslag_1 eslag
class L1,L2,L3,L4,L5,L6,L7,L8,L9,L10,L11,L12,P1,P2 hidden
class Legend legendBox
linkStyle default stroke:#666,stroke-width:2px
linkStyle 0,1 stroke:#CC9900,stroke-width:2px
linkStyle 2,3,4,5,6,7 stroke:#CC3333,stroke-width:4px
linkStyle 11,14 stroke:#66CC66,stroke-width:4px
linkStyle 8,9,12,13 stroke:#CC9900,stroke-width:4px,stroke-dasharray:5 5
linkStyle 10,15 stroke:#999999,stroke-width:2px
linkStyle 16 stroke:#B85450,stroke-width:2px
linkStyle 17 stroke:#82B366,stroke-width:2px
linkStyle 18 stroke:#000000,stroke-width:2px
linkStyle 19 stroke:#CC9900,stroke-width:2px,stroke-dasharray:5 5
linkStyle 20 stroke:#CC9900,stroke-width:2px
linkStyle 21 stroke:#FFFFFF

%% Make subgraph containers invisible
style Gateways fill:none,stroke:none
style Spines fill:none,stroke:none
style Leaves fill:none,stroke:none
style Servers fill:none,stroke:none
style Spine_01_Group fill:none,stroke:none
style Spine_02_Group fill:none,stroke:none

Utility based VPC creation

Setup VPCs

hhfab includes a utility to create VPCs in VLAB. This utility is a hhfab vlab sub-command, hhfab vlab setup-vpcs.

NAME:
   hhfab vlab setup-vpcs - setup VPCs and VPCAttachments for all servers and configure networking on them

USAGE:
   hhfab vlab setup-vpcs [command options]

OPTIONS:
   --dns-servers value, --dns value [ --dns-servers value, --dns value ]    DNS servers for VPCs advertised by DHCP
   --force-cleanup, -f                                                      start with removing all existing VPCs and VPCAttachments (default: false)
   --hash-policy value, --hash value                                        xmit_hash_policy for bond interfaces on servers [layer2|layer2+3|layer3+4|encap2+3|encap3+4|vlan+srcmac] (default: "layer2+3")
   --help, -h                                                               show help
   --interface-mtu value, --mtu value                                       interface MTU for VPCs advertised by DHCP (default: 0)
   --ipns value                                                             IPv4 namespace for VPCs (default: "default")
   --keep-peerings, --peerings                                              Do not delete all VPC, External and Gateway peerings before enforcing VPCs (default: false)
   --name value, -n value                                                   name of the VM or HW to access
   --servers-per-subnet value, --servers value                              number of servers per subnet (default: 1)
   --subnets-per-vpc value, --subnets value                                 number of subnets per VPC (default: 1)
   --time-servers value, --ntp value [ --time-servers value, --ntp value ]  Time servers for VPCs advertised by DHCP
   --vlanns value                                                           VLAN namespace for VPCs (default: "default")
   --vpc-mode value, --mode value                                           VPC mode: empty (l2vni) by default or l3vni, etc
   --wait-switches-ready, --wait                                            wait for switches to be ready before and after configuring VPCs and VPCAttachments (default: true)

   Global options:

   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu/hhfab") [$HHFAB_WORK_DIR]
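
For example, a typical invocation that starts from a clean state and keeps the defaults of one subnet per VPC and one server per subnet could look like the following; the flag values are illustrative, adjust them to your topology:

hhfab vlab setup-vpcs --force-cleanup --subnets-per-vpc 1 --servers-per-subnet 1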

Setup Peering

hhfab includes a utility to create VPC peerings in VLAB. This utility is a hhfab vlab sub-command, hhfab vlab setup-peerings.

NAME:
   hhfab vlab setup-peerings - setup VPC and External Peerings per requests (remove all if empty)

USAGE:
   Setup test scenario with VPC/External Peerings by specifying requests in the format described below.

   Example command:

   $ hhfab vlab setup-peerings 1+2 2+4:r=border 1~as5835 2~as5835:subnets=sub1,sub2:prefixes=0.0.0.0/0,22.22.22.0/24

   Which will produce:
   1. VPC peering between vpc-01 and vpc-02
   2. Remote VPC peering between vpc-02 and vpc-04 on switch group named border
   3. External peering for vpc-01 with External as5835 with default vpc subnet and any routes from external permitted
   4. External peering for vpc-02 with External as5835 with subnets sub1 and sub2 exposed from vpc-02 and default route
      from external permitted as well any route that belongs to 22.22.22.0/24

   VPC Peerings:

   1+2 -- VPC peering between vpc-01 and vpc-02
   demo-1+demo-2 -- VPC peering between vpc-demo-1 and vpc-demo-2
   1+2:r -- remote VPC peering between vpc-01 and vpc-02 on switch group if only one switch group is present
   1+2:r=border -- remote VPC peering between vpc-01 and vpc-02 on switch group named border
   1+2:remote=border -- same as above

   External Peerings:

   1~as5835 -- external peering for vpc-01 with External as5835
   1~ -- external peering for vpc-1 with external if only one external is present for ipv4 namespace of vpc-01, allowing
     default subnet and any route from external
   1~:subnets=default@prefixes=0.0.0.0/0 -- external peering for vpc-1 with auth external with default vpc subnet and
     default route from external permitted
   1~as5835:subnets=default,other:prefixes=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same but with more details
   1~as5835:s=default,other:p=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same as above

OPTIONS:
   --help, -h                     show help
   --name value, -n value         name of the VM or HW to access
   --wait-switches-ready, --wait  wait for switches to be ready before and after configuring peerings (default: true)

   Global options:

   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu/hhfab") [$HHFAB_WORK_DIR]
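
For example, to create a plain VPC peering between vpc-01 and vpc-02 using the notation described above, you could run:

hhfab vlab setup-peerings 1+2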

Test Connectivity

hhfab includes a utility to test connectivity between servers inside VLAB. This utility is a hhfab vlab sub-command, hhfab vlab test-connectivity.

NAME:
   hhfab vlab test-connectivity - test connectivity between servers

USAGE:
   hhfab vlab test-connectivity [command options]

OPTIONS:
   --all-servers, --all                                                   requires all servers to be attached to a VPC (default: false)
   --curls value                                                          number of curl tests to run for each server to test external connectivity (0 to disable) (default: 3)
   --destination value, --dst value [ --destination value, --dst value ]  server to use as destination for connectivity tests (default: all servers)
   --dscp value                                                           DSCP value to use for iperf3 tests (0 to disable DSCP) (default: 0)
   --help, -h                                                             show help
   --iperfs value                                                         seconds of iperf3 test to run between each pair of reachable servers (0 to disable) (default: 10)
   --iperfs-speed value                                                   minimum speed in Mbits/s for iperf3 test to consider successful (0 to not check speeds) (default: 8200)
   --name value, -n value                                                 name of the VM or HW to access
   --pings value                                                          number of pings to send between each pair of servers (0 to disable) (default: 5)
   --source value, --src value [ --source value, --src value ]            server to use as source for connectivity tests (default: all servers)
   --tos value                                                            TOS value to use for iperf3 tests (0 to disable TOS) (default: 0)
   --wait-switches-ready, --wait                                          wait for switches to be ready before testing connectivity (default: true)

   Global options:

   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu/hhfab") [$HHFAB_WORK_DIR]
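
For example, to run only the ping tests between all attached servers and skip the iperf3 and curl checks, an illustrative invocation would be:

hhfab vlab test-connectivity --pings 3 --iperfs 0 --curls 0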

Manual VPC creation

Creating and attaching VPCs

You can create VPCs and attach them to the test server VMs using the kubectl fabric vpc command, either on the Control Node or from outside the cluster using the kubeconfig. For example, run the following commands to create two VPCs with a single subnet each, DHCP enabled with an optional start of the IP address range, and to attach them to some of the test servers:

core@control-1 ~ $ kubectl get conn | grep server
server-01--eslag--leaf-01--leaf-02   eslag       44h
server-02--eslag--leaf-01--leaf-02   eslag       44h
server-03--unbundled--leaf-01        unbundled   44h
server-04--bundled--leaf-02          bundled     44h
server-05--unbundled--leaf-03        unbundled   44h
server-06--bundled--leaf-03          bundled     44h

core@control-1 ~ $ kubectl fabric vpc create --name vpc-01 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10
13:46:58 INF VPC created name=vpc-01

core@control-1 ~ $ kubectl fabric vpc create --name vpc-02 --subnet 10.0.2.0/24 --vlan 1002 --dhcp --dhcp-start 10.0.2.10
13:47:14 INF VPC created name=vpc-02

core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-01/default --connection server-01--eslag--leaf-01--leaf-02
13:47:52 INF VPCAttachment created name=vpc-01--default--server-01--eslag--leaf-01--leaf-02

core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-02/default --connection server-02--eslag--leaf-01--leaf-02
13:48:07 INF VPCAttachment created name=vpc-02--default--server-02--eslag--leaf-01--leaf-02

The VPC subnet should belong to an IPv4Namespace; the default one in the VLAB is 10.0.0.0/16:

core@control-1 ~ $ kubectl get ipns
NAME      SUBNETS           AGE
default   ["10.0.0.0/16"]   44h

After you have created the VPCs and VPCAttachments, you can check the status of the agents to make sure that the requested configuration has been applied to the switches:

core@control-1 ~ $ kubectl get agents
NAME       ROLE          DESCR           APPLIED   APPLIEDG   CURRENTG   VERSION   REBOOTREQ
leaf-01    server-leaf   VS-01 ESLAG 1   36m       5          5          v0.96.2
leaf-02    server-leaf   VS-02 ESLAG 1   46m       5          5          v0.96.2
leaf-03    server-leaf   VS-03           21m       3          3          v0.96.2
spine-01   spine         VS-04           7m27s     1          1          v0.96.2
spine-02   spine         VS-05           37m       1          1          v0.96.2

In this example, the values in the APPLIEDG and CURRENTG columns are equal, which means that the requested configuration has been applied.
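
A convenient way to wait for this is to watch the agents until the generations converge (standard kubectl watch; press Ctrl-C to stop):

core@control-1 ~ $ kubectl get agents -w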

Setting up networking on test servers

You can use hhfab vlab ssh on the host to SSH into the test servers and configure networking there. For example, server-01 and server-02 (ESLAG, attached to both leaf-01 and leaf-02) need a bond with a VLAN on top of it, while a single-homed server such as server-05 (unbundled, attached to leaf-03) needs just a VLAN; in both cases the server gets an IP address from the DHCP server. You can configure networking with the ip command or with hhnet, the small helper pre-installed by Fabricator on the test servers.

For server-01:

core@server-01 ~ $ hhnet cleanup
core@server-01 ~ $ hhnet bond 1001 layer2+3 enp2s1 enp2s2
10.0.1.10/24
core@server-01 ~ $ ip address show
...
3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether 3e:2e:1e:ef:e3:c8 brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01
4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether 3e:2e:1e:ef:e3:c8 brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3e:2e:1e:ef:e3:c8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3c2e:1eff:feef:e3c8/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
9: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3e:2e:1e:ef:e3:c8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001
       valid_lft 3580sec preferred_lft 3580sec
    inet6 fe80::3c2e:1eff:feef:e3c8/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever

And for server-02:

core@server-02 ~ $ hhnet cleanup
core@server-02 ~ $ hhnet bond 1002 layer2+3 enp2s1 enp2s2
10.0.2.10/24
core@server-02 ~ $ ip address show
...
3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether 6e:27:d4:e2:6b:f7 brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:03:01
4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether 6e:27:d4:e2:6b:f7 brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:03:02
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 6e:27:d4:e2:6b:f7 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::6c27:d4ff:fee2:6bf7/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
9: bond0.1002@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 6e:27:d4:e2:6b:f7 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002
       valid_lft 3594sec preferred_lft 3594sec
    inet6 fe80::6c27:d4ff:fee2:6bf7/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever

Note

While hhnet will apply any bond hash policy supported by ip, we recommend using either layer2+3 or layer2, which are fully 802.3ad compliant.

Testing connectivity before peering

You can test connectivity between the servers before peering the VPCs using the ping command:

core@server-01 ~ $ ping 10.0.2.10
PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
From 10.0.1.1 icmp_seq=1 Destination Net Unreachable
From 10.0.1.1 icmp_seq=2 Destination Net Unreachable
From 10.0.1.1 icmp_seq=3 Destination Net Unreachable
^C
--- 10.0.2.10 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms
core@server-02 ~ $ ping 10.0.1.10
PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
From 10.0.2.1 icmp_seq=1 Destination Net Unreachable
From 10.0.2.1 icmp_seq=2 Destination Net Unreachable
From 10.0.2.1 icmp_seq=3 Destination Net Unreachable
^C
--- 10.0.1.10 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms

Peering VPCs and testing connectivity

To enable connectivity between the VPCs, peer them using kubectl fabric vpc peer:

core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-01 --vpc vpc-02
23:43:21 INF VPCPeering created name=vpc-01--vpc-02

Make sure to wait until the peering is applied to the switches, using the kubectl get agents command. Once the APPLIEDG and CURRENTG columns are equal, you can test connectivity between the servers again:

core@server-01 ~ $ ping 10.0.2.10
PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms
64 bytes from 10.0.2.10: icmp_seq=2 ttl=62 time=7.60 ms
64 bytes from 10.0.2.10: icmp_seq=3 ttl=62 time=8.60 ms
^C
--- 10.0.2.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms
core@server-02 ~ $ ping 10.0.1.10
PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms
64 bytes from 10.0.1.10: icmp_seq=2 ttl=62 time=6.66 ms
64 bytes from 10.0.1.10: icmp_seq=3 ttl=62 time=4.49 ms
^C
--- 10.0.1.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms

If you delete the VPC peering by running kubectl delete on the relevant object and wait for the agents to apply the configuration on the switches, you can observe that connectivity is lost again:

core@control-1 ~ $ kubectl delete vpcpeering/vpc-01--vpc-02
vpcpeering.vpc.githedgehog.com "vpc-01--vpc-02" deleted
core@server-01 ~ $ ping 10.0.2.10
PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
From 10.0.1.1 icmp_seq=1 Destination Net Unreachable
From 10.0.1.1 icmp_seq=2 Destination Net Unreachable
From 10.0.1.1 icmp_seq=3 Destination Net Unreachable
^C
--- 10.0.2.10 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms

You may see duplicate packets in the output of the ping command between some of the servers. This is expected behavior caused by limitations of the VLAB environment.

core@server-01 ~ $ ping 10.0.5.10
PING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.
64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms
64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms (DUP!)
64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms
64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms (DUP!)
64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.59 ms
64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.60 ms (DUP!)
^C
--- 10.0.5.10 ping statistics ---
3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms

Using VPCs with overlapping subnets

First, create a second IPv4Namespace with the same subnet as the default one:

core@control-1 ~ $ kubectl get ipns
NAME      SUBNETS           AGE
default   ["10.0.0.0/16"]   24m

core@control-1 ~ $ cat <<EOF > ipns-2.yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: IPv4Namespace
metadata:
  name: ipns-2
  namespace: default
spec:
  subnets:
  - 10.0.0.0/16
EOF

core@control-1 ~ $ kubectl apply -f ipns-2.yaml
ipv4namespace.vpc.githedgehog.com/ipns-2 created

core@control-1 ~ $ kubectl get ipns
NAME      SUBNETS           AGE
default   ["10.0.0.0/16"]   30m
ipns-2    ["10.0.0.0/16"]   8s

Let's assume that vpc-01 already exists and is attached to server-01 (see Creating and attaching VPCs). Now we can create vpc-03 with the same subnet as vpc-01 (but in a different IPv4Namespace) and attach it to server-03:

core@control-1 ~ $ cat <<EOF > vpc-03.yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPC
metadata:
  name: vpc-03
  namespace: default
spec:
  ipv4Namespace: ipns-2
  subnets:
    default:
      dhcp:
        enable: true
        range:
          start: 10.0.1.10
      subnet: 10.0.1.0/24
      vlan: 2001
  vlanNamespace: default
EOF

core@control-1 ~ $ kubectl apply -f vpc-03.yaml

At that point you can set up networking on server-03 similarly to what you did for server-01 and server-02 in the previous section; since server-03 uses an unbundled connection, it needs only a VLAN interface, without a bond. Once you have configured networking, server-01 and server-03 have IP addresses from the same subnet.
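
A minimal sketch for server-03, assuming its single NIC is enp2s1 (per the wiring diagram), using VLAN 2001 from the vpc-03 spec, and assuming hhnet's vlan helper is used analogously to the bond helper shown earlier:

core@server-03 ~ $ hhnet cleanup
core@server-03 ~ $ hhnet vlan 2001 enp2s1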

Gateway peerings and NAT

Creating simple VPC peering via the gateway

When the gateway is enabled in your VLAB topology, you also have the option of peering VPCs via the gateway. One way of doing so is to use the hhfab helpers. For example, assuming vpc-01 and vpc-02 were previously created, you can run:

hhfab vlab setup-peerings 1+2:gw

Alternatively, you can create the peering manually on the control node, using the examples in the gateway section of the user guide as a base, e.g.:

core@control-1 ~ $ cat <<EOF > vpc-01--vpc-02--gw.yaml
apiVersion: gateway.githedgehog.com/v1alpha1
kind: Peering
metadata:
  name: vpc-01--vpc-02
  namespace: default
spec:
  peering:
    vpc-01:
      expose:
        - ips:
          - cidr: 10.0.1.0/24
    vpc-02:
      expose:
        - ips:
          - cidr: 10.0.2.0/24
EOF

core@control-1 ~ $ kubectl create -f vpc-01--vpc-02--gw.yaml
peering.gateway.githedgehog.com/vpc-01--vpc-02 created
core@control-1 ~ $

You should now be able to check connectivity using the same methods described previously, e.g.:

core@server-01 ~ $ ping 10.0.2.10
PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
64 bytes from 10.0.2.10: icmp_seq=1 ttl=61 time=70.0 ms
64 bytes from 10.0.2.10: icmp_seq=2 ttl=61 time=11.6 ms
64 bytes from 10.0.2.10: icmp_seq=3 ttl=61 time=14.0 ms
^C
--- 10.0.2.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 11.560/31.852/70.008/26.998 ms

VPC peering with Stateless NAT

Let's either edit or recreate the previous peering so that it looks like this:

vpc-01--vpc-02--gw.yaml
apiVersion: gateway.githedgehog.com/v1alpha1
kind: Peering
metadata:
  name: vpc-01--vpc-02
  namespace: default
spec:
  peering:
    vpc-01:
      expose:
        - ips:
          - cidr: 10.0.1.0/24
    vpc-02:
      expose:
        - ips:
          - cidr: 10.0.2.0/24
          as:
          - cidr: 192.168.2.0/24
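
If you edited the manifest in place, one way to push the updated spec is to re-apply it; kubectl apply also works on an object originally created with kubectl create:

core@control-1 ~ $ kubectl apply -f vpc-01--vpc-02--gw.yaml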

If we try pinging server-02 from server-01 using its real IP address, we should now observe a failure:

core@server-01 ~ $ ping 10.0.2.10 -c 3 -W 1
PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
From 10.0.1.1 icmp_seq=1 Destination Net Unreachable
From 10.0.1.1 icmp_seq=2 Destination Net Unreachable
From 10.0.1.1 icmp_seq=3 Destination Net Unreachable

--- 10.0.2.10 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2006ms

However, using the translated IP:

core@server-01 ~ $ ping 192.168.2.10 -c 3 -W 1
PING 192.168.2.10 (192.168.2.10) 56(84) bytes of data.
64 bytes from 192.168.2.10: icmp_seq=1 ttl=61 time=13.5 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=61 time=13.1 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=61 time=12.7 ms

--- 192.168.2.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 12.716/13.099/13.460/0.304 ms

In the other direction, the translation is completely transparent:

core@server-02 ~ $ ping 10.0.1.10 -c 3 -W 1
PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
64 bytes from 10.0.1.10: icmp_seq=1 ttl=61 time=29.8 ms
64 bytes from 10.0.1.10: icmp_seq=2 ttl=61 time=12.2 ms
64 bytes from 10.0.1.10: icmp_seq=3 ttl=61 time=12.3 ms

--- 10.0.1.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 12.177/18.106/29.826/8.287 ms

VPC peering with Stateful NAT

Let's change the peering again to use stateful source NAT:

vpc-01--vpc-02--gw.yaml
apiVersion: gateway.githedgehog.com/v1alpha1
kind: Peering
metadata:
  name: vpc-01--vpc-02
  namespace: default
spec:
  peering:
    vpc-01:
      expose:
        - ips:
          - cidr: 10.0.1.0/24
    vpc-02:
      expose:
        - ips:
          - cidr: 10.0.2.0/24
          as:
          - cidr: 192.168.5.40/31
          nat:
            stateful:
              idleTimeout: 1m

This time we are not able to ping server-02 from server-01 using either its real or its translated IP: flows are allowed in the reverse direction only if they were initiated by the source (the NATed side), and only until the idle timeout expires.

core@server-01 ~ $ ping 10.0.2.10 -c 3 -W 1
PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
From 10.0.1.1 icmp_seq=1 Destination Net Unreachable
From 10.0.1.1 icmp_seq=2 Destination Net Unreachable
From 10.0.1.1 icmp_seq=3 Destination Net Unreachable

--- 10.0.2.10 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms

core@server-01 ~ $ ping 192.168.5.40 -c 3 -W 1
PING 192.168.5.40 (192.168.5.40) 56(84) bytes of data.

--- 192.168.5.40 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2078ms

If we ping from the server attached to the VPC with source NAT, however, things just work:

core@server-02 ~ $ ping 10.0.1.10 -c 3 -W 1
PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
64 bytes from 10.0.1.10: icmp_seq=1 ttl=61 time=16.3 ms
64 bytes from 10.0.1.10: icmp_seq=2 ttl=61 time=12.2 ms
64 bytes from 10.0.1.10: icmp_seq=3 ttl=61 time=11.6 ms

--- 10.0.1.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 11.569/13.358/16.297/2.094 ms

Running tcpdump on the destination server confirms that pings appear to come from the translated IP address:

core@server-01 ~ $ sudo tcpdump -ni bond0.1001 icmp
dropped privs to pcap
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on bond0.1001, link-type EN10MB (Ethernet), snapshot length 262144 bytes
10:06:53.855140 IP 192.168.5.40 > 10.0.1.10: ICMP echo request, id 7170, seq 1, length 64
10:06:53.855205 IP 10.0.1.10 > 192.168.5.40: ICMP echo reply, id 7170, seq 1, length 64
10:06:54.860640 IP 192.168.5.40 > 10.0.1.10: ICMP echo request, id 7170, seq 2, length 64
10:06:54.860704 IP 10.0.1.10 > 192.168.5.40: ICMP echo reply, id 7170, seq 2, length 64
10:06:55.859745 IP 192.168.5.40 > 10.0.1.10: ICMP echo request, id 7170, seq 3, length 64
10:06:55.859801 IP 10.0.1.10 > 192.168.5.40: ICMP echo reply, id 7170, seq 3, length 64