Inspect Kubernetes Networking

Understanding the Networking Interfaces in Kubernetes

On the controller workstation, get the containerID for an XtremeCloud container:

kubectl get po sso-dev-xtremecloud-sso-gcp-0 -o jsonpath='{.status.containerStatuses[0].containerID}'
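
If you want to reuse that value, it can be captured in a shell variable; note that with the Docker runtime the value comes back prefixed with docker://, which we strip here (a small convenience, not required for the walkthrough):

CONTAINER_ID=$(kubectl get po sso-dev-xtremecloud-sso-gcp-0 -o jsonpath='{.status.containerStatuses[0].containerID}')
# Remove the docker:// prefix so the bare ID can be passed to the docker CLI later
CONTAINER_ID=${CONTAINER_ID#docker://}
echo ${CONTAINER_ID}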

Get the node that the container is running on:

[centos@vm-controller ~]$ kubectl get pods -o wide


NAME                                      READY   STATUS    RESTARTS   AGE     IP            NODE                                                  NOMINATED NODE
datagrid-dev-xtremecloud-datagrid-gcp-0   1/1     Running   0          2d23h   10.52.1.209   gke-xtremecloud-cluster--default-pool-d03f8f59-w8d2   <none>
logdna-agent-bk7qn                        1/1     Running   0          10d     10.52.2.13    gke-xtremecloud-cluster--default-pool-d03f8f59-97k4   <none>
logdna-agent-cwckn                        1/1     Running   0          10d     10.52.0.9     gke-xtremecloud-cluster--default-pool-d03f8f59-pzkc   <none>
logdna-agent-wwcgt                        1/1     Running   0          10d     10.52.1.204   gke-xtremecloud-cluster--default-pool-d03f8f59-w8d2   <none>
sso-dev-xtremecloud-sso-gcp-0             1/1     Running   0          2d21h   10.52.1.210   gke-xtremecloud-cluster--default-pool-d03f8f59-w8d2   <none>
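
If you only need the node for a single pod, jsonpath can return it directly (an optional shortcut, not part of the original flow):

kubectl get pod sso-dev-xtremecloud-sso-gcp-0 -o jsonpath='{.spec.nodeName}'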

Connect to that node and run the following (easy to do on the Google Cloud Console):

Use Google Console to Access Kubernetes Worker Node

Using the Google Console Shell
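
If you prefer a terminal over the Console, gcloud can open the same SSH session to the worker node (substitute your cluster's zone for the placeholder below):

gcloud compute ssh gke-xtremecloud-cluster--default-pool-d03f8f59-w8d2 --zone <your-zone>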

Let’s get the Docker containers that are running. The very first result is the XtremeCloud SSO container.

davidjbrewer@gke-xtremecloud-cluster--default-pool-d03f8f59-w8d2 ~ $ sudo docker ps
CONTAINER ID        IMAGE                                                                                                                         COMMAND                  CREATED             STATUS              PORTS               NAMES
07e6e6d4b8d3        quay.io/eupraxialabs/xtremecloud-sso@sha256:7c7761baedee9721e0f1039bbdb66f7b6edc86b531f8e529dda5f2456f601ee5                  "/keycloak+pmcd.sh..."   3 days ago          Up 3 days                               k8s_xtremecloud-sso-gcp_sso-dev-xtremecloud-sso-gcp-0_dev_48242865-d1c8-11e9-83e8-42010a80021c_0
2ea4b37cb1c9        k8s.gcr.io/pause:3.1                                                                                                          "/pause"                 3 days ago          Up 3 days                               k8s_POD_sso-dev-xtremecloud-sso-gcp-0_dev_48242865-d1c8-11e9-83e8-42010a80021c_0
afb1ba25aa37        quay.io/eupraxialabs/xtremecloud-datagrid@sha256:99d24037178f3169815dd3d7f6c9aec6c7ea1a23e8c89d164770fe624546bb40             "docker-entrypoint..."   3 days ago          Up 3 days                               k8s_xtremecloud-datagrid-gcp_datagrid-dev-xtremecloud-datagrid-gcp-0_dev_3b2d73b5-d1b6-11e9-83e8-42010a80021c_0
2547e503de8a        k8s.gcr.io/pause:3.1                                                                                                          "/pause"                 3 days ago          Up 3 days                               k8s_POD_datagrid-dev-xtremecloud-datagrid-gcp-0_dev_3b2d73b5-d1b6-11e9-83e8-42010a80021c_0
5bf82b6eb295        55eb2da2c63a                                                                                                                  "/usr/bin/acmesolv..."   5 days ago          Up 5 days                               k8s_acmesolver_cm-acme-http-solver-9dv4f_xcsso_83781d07-d0a0-11e9-83e8-42010a80021c_0
95438d189a4a        k8s.gcr.io/pause:3.1                                                                                                          "/pause"                 5 days ago          Up 5 days                               k8s_POD_cm-acme-http-solver-9dv4f_xcsso_83781d07-d0a0-11e9-83e8-42010a80021c_0
3758aaa72d2b        logdna/logdna-agent@sha256:fa5d33a8bc224d97fcefaa0a76c4f1b447ab2ebfad5e451414f5e7f260bce6fe                                   "/usr/bin/logdna-a..."   11 days ago         Up 11 days                              k8s_logdna-agent_logdna-agent-wwcgt_dev_fb48d049-cbfb-11e9-83e8-42010a80021c_0
4605541e8a68        k8s.gcr.io/pause:3.1                                                                                                          "/pause"                 11 days ago         Up 11 days                              k8s_POD_logdna-agent-wwcgt_dev_fb48d049-cbfb-11e9-83e8-42010a80021c_0
fa262db33d18        42e4387da83f                                                                                                                  "/monitor --stackd..."   2 months ago        Up 2 months                             k8s_prometheus-to-sd-exporter_fluentd-gcp-v3.2.0-dwlxz_kube-system_942db72e-a0f5-11e9-83e8-42010a80021c_0
8fcfc61f4b4b        gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:f8d5231b67b9c53f60068b535a11811d29d1b3efd53d2b79f2a2591ea338e4f2   "/entrypoint.sh /u..."   2 months ago        Up 2 months                             k8s_fluentd-gcp_fluentd-gcp-v3.2.0-dwlxz_kube-system_942db72e-a0f5-11e9-83e8-42010a80021c_0
a7ac63114aac        42e4387da83f                                                                                                                  "/monitor --source..."   2 months ago        Up 2 months                             k8s_prometheus-to-sd_prometheus-to-sd-6rb6j_kube-system_9b8ddf78-a0e5-11e9-8502-42010a800189_0
09ecdfcc80c3        a3053d702f7c                                                                                                                  "/bin/sh -c 'exec ..."   2 months ago        Up 2 months                             k8s_kube-proxy_kube-proxy-gke-xtremecloud-cluster--default-pool-d03f8f59-w8d2_kube-system_cd043d589683ca0c4dc91e4145331adb_0
20215da34e64        k8s.gcr.io/pause:3.1                                                                                                          "/pause"                 2 months ago        Up 2 months                             k8s_POD_prometheus-to-sd-6rb6j_kube-system_9b8ddf78-a0e5-11e9-8502-42010a800189_0
084c2034429d        k8s.gcr.io/pause:3.1                                                                                                          "/pause"                 2 months ago        Up 2 months                             k8s_POD_kube-proxy-gke-xtremecloud-cluster--default-pool-d03f8f59-w8d2_kube-system_cd043d589683ca0c4dc91e4145331adb_0
7270519e2c3d        k8s.gcr.io/pause:3.1                                                                                                          "/pause"                 2 months ago        Up 2 months                             k8s_POD_fluentd-gcp-v3.2.0-dwlxz_kube-system_942db72e-a0f5-11e9-83e8-42010a80021c_0
davidjbrewer@gke-xtremecloud-cluster--default-pool-d03f8f59-w8d2 ~ $ 
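
Rather than scanning the whole list, docker ps can also filter on the Kubernetes-generated container name; the k8s_xtremecloud-sso prefix used here is inferred from the NAMES column above:

sudo docker ps --filter "name=k8s_xtremecloud-sso"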

Let’s get the process ID (PID) of the container and assign it to a shell variable. The container ID is passed to docker inspect, and the Go template extracts the PID of the container’s main process:

PID=$(sudo docker inspect --format '{{ .State.Pid }}' 07e6e6d4b8d3d52dff4b7b73324d706fe3425e8825e8b84dab526f01fa89fb5f)
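
As a quick sanity check that the variable holds the container’s main process, echo it and look the process up (the actual PID will differ on your node):

echo ${PID}
sudo ps -p ${PID} -o pid,comm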

The container’s PID is now stored in the variable, and we can use the nsenter program to run a command in that process’s network namespace:

Nsenter is a utility that enters the namespaces of one or more other processes and then executes the specified program. In other words, we jump to the inner side of the namespace.

davidjbrewer@gke-xtremecloud-cluster--default-pool-d03f8f59-w8d2 ~ $ sudo nsenter -t ${PID} -n ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if213: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default 
    link/ether 6a:6b:ec:a2:e3:07 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.52.1.210/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::686b:ecff:fea2:e307/64 scope link 
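
The same nsenter technique works for any networking command inside the pod’s namespace; for example, the pod’s own routing table can be inspected this way (output not reproduced here):

sudo nsenter -t ${PID} -n ip route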

Note that eth0 has @if213 after it. This means the pod’s eth0 is one end of a veth pair whose other end is the node interface with index 213. It is also no surprise that eth0 carries the same IP address (10.52.1.210) that the earlier ‘kubectl get pods -o wide’ command reported for this pod.
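
If you just want to find that peer interface, it can be listed directly on the node by its index (a shortcut; the full interface listing follows below):

ip -o link | grep '^213:'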

Let’s look at the network interfaces on the Kubernetes worker node:

davidjbrewer@gke-xtremecloud-cluster--default-pool-d03f8f59-w8d2 ~ $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 42:01:0a:80:00:1b brd ff:ff:ff:ff:ff:ff
    inet 10.128.0.27/32 brd 10.128.0.27 scope global dynamic eth0
       valid_lft 84269sec preferred_lft 84269sec
    inet6 fe80::4001:aff:fe80:1b/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:a0:85:be:ee brd ff:ff:ff:ff:ff:ff
    inet 169.254.123.1/24 scope global docker0
       valid_lft forever preferred_lft forever
4: cbr0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1460 qdisc htb state UP group default qlen 1000
    link/ether 36:3e:15:98:0b:2c brd ff:ff:ff:ff:ff:ff
    inet 10.52.1.1/24 scope global cbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::343e:15ff:fe98:b2c/64 scope link 
       valid_lft forever preferred_lft forever
207: veth7f2ffde5@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue master cbr0 state UP group default 
    link/ether 82:03:b5:60:58:42 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::8003:b5ff:fe60:5842/64 scope link 
       valid_lft forever preferred_lft forever
209: vethd68c77eb@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue master cbr0 state UP group default 
    link/ether 42:6f:d0:6a:e2:fb brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::406f:d0ff:fe6a:e2fb/64 scope link 
       valid_lft forever preferred_lft forever
212: veth1458a4b4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue master cbr0 state UP group default 
    link/ether a6:94:40:da:5d:a4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::a494:40ff:feda:5da4/64 scope link 
       valid_lft forever preferred_lft forever
213: vethd0de12bd@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue master cbr0 state UP group default 
    link/ether ea:39:ed:aa:2b:e1 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::e839:edff:feaa:2be1/64 scope link 
       valid_lft forever preferred_lft forever

The interface with index 213 is vethd0de12bd in this example output. This is the node-side end of the virtual Ethernet (veth) pair leading into our XtremeCloud SSO pod of interest.
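
Another way to confirm the pairing, if ethtool is available on the node, is to ask the veth driver for its peer index; it should report 3, which is eth0’s interface index inside the pod’s namespace:

sudo ethtool -S vethd0de12bd | grep peer_ifindex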

While we’re here, let’s look at the routing table for that Kubernetes worker node:

davidjbrewer@gke-xtremecloud-cluster--default-pool-d03f8f59-w8d2 ~ $ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.128.0.1      0.0.0.0         UG    1024   0        0 eth0
0.0.0.0         10.128.0.1      0.0.0.0         UG    1024   0        0 eth0
10.52.1.0       0.0.0.0         255.255.255.0   U     0      0        0 cbr0
10.128.0.1      0.0.0.0         255.255.255.255 UH    1024   0        0 eth0
10.128.0.1      0.0.0.0         255.255.255.255 UH    1024   0        0 eth0
169.254.123.0   0.0.0.0         255.255.255.0   U     0      0        0 docker0

As can be seen from the routing table, the container with the IP address of 10.52.1.210 can get to other pods in the CIDR range of 10.52.1.0/24 via the interface ‘cbr0’.
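
The kernel can also be asked which route it would choose for a specific destination; for example, from inside the SSO pod’s namespace toward the datagrid pod’s IP address seen earlier (a quick check, output not shown):

sudo nsenter -t ${PID} -n ip route get 10.52.1.209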

Pods Communicate on the same node via the Linux Bridge - cbr0

Here are the bridge interfaces on the worker node:

davidjbrewer@gke-xtremecloud-cluster--default-pool-d03f8f59-w8d2 ~ $ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
cbr0            8000.363e15980b2c       no              veth1458a4b4
                                                        veth7f2ffde5
                                                        vethd0de12bd
                                                        vethd68c77eb
docker0         8000.0242a085beee       no

You can see that the XtremeCloud SSO pod’s virtual Ethernet interface, vethd0de12bd, is attached to the cbr0 bridge.
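
On newer node images where brctl is not installed, the iproute2 bridge command reports the same membership (shown here as an alternative):

bridge link show master cbr0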

One final thought on Kubernetes networking. If an XtremeCloud SSO container is trying to access an XtremeCloud Data Grid container on another worker node, for example Pod1 to Pod4, it would take this route:

Pods Communicate Across Worker Nodes via an entry in the Routing Table

This happens when an ARP request to the local (same-node) veth interfaces does not get a response. The traffic is then sent out via eth0 of the worker node, which in our case carries the worker node IP address of 10.128.0.27, so that it can reach the pod on the other node.

davidjbrewer@gke-xtremecloud-cluster--default-pool-d03f8f59-w8d2 ~ $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 42:01:0a:80:00:1b brd ff:ff:ff:ff:ff:ff
    inet 10.128.0.27/32 brd 10.128.0.27 scope global dynamic eth0
       valid_lft 50805sec preferred_lft 50805sec
    inet6 fe80::4001:aff:fe80:1b/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:a0:85:be:ee brd ff:ff:ff:ff:ff:ff
    inet 169.254.123.1/24 scope global docker0
       valid_lft forever preferred_lft forever
4: cbr0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1460 qdisc htb state UP group default qlen 1000
    link/ether 36:3e:15:98:0b:2c brd ff:ff:ff:ff:ff:ff
    inet 10.52.1.1/24 scope global cbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::343e:15ff:fe98:b2c/64 scope link 
       valid_lft forever preferred_lft forever
207: veth7f2ffde5@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue master cbr0 state UP group default 
    link/ether 82:03:b5:60:58:42 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::8003:b5ff:fe60:5842/64 scope link 
       valid_lft forever preferred_lft forever
209: vethd68c77eb@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue master cbr0 state UP group default 
    link/ether 42:6f:d0:6a:e2:fb brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::406f:d0ff:fe6a:e2fb/64 scope link 
       valid_lft forever preferred_lft forever
212: veth1458a4b4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue master cbr0 state UP group default 
    link/ether a6:94:40:da:5d:a4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::a494:40ff:feda:5da4/64 scope link 
       valid_lft forever preferred_lft forever
213: vethd0de12bd@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue master cbr0 state UP group default 
    link/ether ea:39:ed:aa:2b:e1 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::e839:edff:feaa:2be1/64 scope link 
       valid_lft forever preferred_lft forever
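
For completeness: when pods really do live on different worker nodes, a route-based GKE cluster programs a per-node route in the VPC for each node’s pod CIDR. If your cluster is set up that way, those routes can be listed with gcloud (route names will vary):

gcloud compute routes list --filter="name~gke-xtremecloud"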

However, in our case, those two (2) containers happen to be on the same worker node. We saw this when we ran the earlier command:

[centos@vm-controller ~]$ kubectl get pods -o wide