Attach Multiple Interfaces to Pods

Multus CNI provides the capability to attach multiple network interfaces to Pods in Kubernetes Clusters

Introduction

XtremeCloud Data Grid-web has an appreciable amount of replication traffic flowing between the Cloud Service Providers (CSPs). As a result, a more efficient approach is needed over the longer term so that end-user traffic and replication traffic do not share the default subnet in the Kubernetes Cluster. This is true whether the traffic is entering (ingress) the Kubernetes Cluster or leaving it (egress).

To optimize the performance of XtremeCloud Data Grid-web, Eupraxia Labs is working with the CSPs to use public cloud CNI capabilities so that XtremeCloud application pods can be deployed with multiple network interfaces through annotations and Node Feature Discovery (NFD).

Until CSPs routinely offer NFD within their managed Kubernetes engines (e.g., GKE on GCP), the cost of this type of implementation will be prohibitive for all but large enterprise users.

Multiple Interfaces for XtremeCloud Application Pods

Multus CNI provides the capability to attach multiple network interfaces to pods in Kubernetes Clusters and OpenShift Container Platform 4.1.

Multus CNI is a container network interface (CNI) plugin for Kubernetes that enables attaching multiple network interfaces to pods. Typically, in Kubernetes each pod only has one network interface (apart from a loopback) – with Multus you can create a multi-homed pod that has multiple interfaces. This is accomplished by Multus acting as a “meta-plugin”, a CNI plugin that can call multiple other CNI plugins.

Multus CNI follows the Kubernetes Network Custom Resource Definition De-facto Standard to provide a standardized method by which to specify the configurations for additional network interfaces. This standard is put forward by the Kubernetes Network Plumbing Working Group.
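As a point of reference, a NetworkAttachmentDefinition following that standard might look something like the sketch below. The macvlan-conf name, the eth1 master interface, and the address range are illustrative values only, not part of any XtremeCloud deployment:

```yaml
# Illustrative NetworkAttachmentDefinition (all names and addresses are examples).
# The CNI configuration that Multus delegates to is carried as a JSON string in spec.config.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.200",
        "rangeEnd": "192.168.1.216"
      }
    }'
```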

This gives you flexibility when you must configure pods that deliver network functionality, such as switching or routing. Multus CNI is useful in situations where network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for the following performance and security reasons:

  1. Performance. You can send traffic along two different planes in order to manage how much traffic flows along each plane.

  2. Security. You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers.

All of the pods in the cluster will still use the cluster-wide default network to maintain connectivity across the cluster. Every pod has an eth0 interface which is attached to the cluster-wide pod network. You can view the interfaces for any pod using the technique we introduced in Inspect Kubernetes Networking.

If you add additional network interfaces using Multus CNI, they will be named net1, net2, …, netN.
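A pod requests those additional attachments through the k8s.v1.cni.cncf.io/networks annotation defined by the same standard. Here is a minimal sketch that assumes the illustrative macvlan-conf definition above exists in the pod's namespace:

```yaml
# Illustrative pod: two attachments of the example macvlan-conf network.
# Multus names the resulting interfaces net1 and net2; eth0 stays on the default network.
apiVersion: v1
kind: Pod
metadata:
  name: multi-homed-sample
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf, macvlan-conf
spec:
  containers:
  - name: sample
    image: busybox
    command: ["sleep", "3600"]
```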

Let’s take a look at the following diagram:

Multiple Interfaces in a Kubernetes Pod

The top half of the diagram reflects the change from a single interface on the pod (eth0) in the default subnet to multiple interfaces being available to the pod. The bottom half provides more detail. The illustration shows the pod with three interfaces: eth0, net0, and net1. eth0 connects to the Kubernetes Cluster network to communicate with Kubernetes servers and services (e.g., the Kubernetes api-server, kubelet, and so on). net0 and net1 are additional network interfaces that connect to other networks by using other CNI plugins (e.g., vlan/vxlan/ptp).
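The net0/net1 naming in the illustration can be reproduced by pinning interface names in the annotation. The vlan-conf and ptp-conf attachment names below are hypothetical and would each need their own NetworkAttachmentDefinition:

```yaml
# Illustrative only: attach two hypothetical networks and pin the in-pod
# interface names to net0 and net1 with the <network>@<ifname> syntax.
apiVersion: v1
kind: Pod
metadata:
  name: named-interfaces-sample
  annotations:
    k8s.v1.cni.cncf.io/networks: vlan-conf@net0, ptp-conf@net1
spec:
  containers:
  - name: sample
    image: busybox
    command: ["sleep", "3600"]
```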

DPDK, the Data Plane Development Kit, consists of libraries that accelerate packet-processing workloads running on a wide variety of CPU architectures.

SR-IOV provides native access to an actual PCIe-based networking device without the overhead of virtual devices. Every container can get dedicated NIC Tx and Rx queues to send and receive application data without contention with other containers.
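As a rough sketch, an SR-IOV attachment could look like the following. It assumes the SR-IOV CNI plugin and the SR-IOV network device plugin are installed and that the node advertises a resource named intel.com/sriov_netdevice; every name and address below is an assumption, not part of our configuration:

```yaml
# Hypothetical SR-IOV NetworkAttachmentDefinition; assumes the SR-IOV CNI and
# device plugins are deployed and the resource name below is advertised by the node.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-conf
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/sriov_netdevice
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "sriov",
      "ipam": {
        "type": "host-local",
        "subnet": "10.56.217.0/24"
      }
    }'
---
# The pod requests a virtual function through the same resource name so the
# scheduler places it on a node with a free VF.
apiVersion: v1
kind: Pod
metadata:
  name: sriov-sample
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-conf
spec:
  containers:
  - name: sample
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        intel.com/sriov_netdevice: "1"
      limits:
        intel.com/sriov_netdevice: "1"
```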

Planned

XtremeCloud Data Grid-web pods and XtremeCloud Single Sign-On (SSO) pods will be configured to run with multiple network interfaces. The XtremeCloud SSO pods will be configured for north-south traffic to and from the XtremeCloud Data Grid-ldap virtual machines (VMs) and the XtremeCloud Data Grid-db hosts (raw iron). The VM and host IP addresses will share the same subnet as the pods' data-plane interfaces. The XtremeCloud SSO pods will also be configured for north-south traffic to the XtremeCloud Data Grid-web pods when there are updates to the cross-cloud replicated data. This routinely occurs when the XtremeCloud SSO cluster of containers has cache updates or invalidations as a result of an application state change.