Install Aspen Mesh

An easy-to-use distribution of Istio with added enterprise features

Aspen Mesh is a service mesh for Kubernetes. It’s built on the Istio project and includes a self-hosted control plane and dashboard, along with a hosted service for documentation. This section describes how to set up the Aspen Mesh control plane and dashboard.

Quick Start Guide

To see what the mesh can do for you, you’ll need to install Aspen Mesh in your Kubernetes cluster, set up an application, and run traffic through that application. This guide will walk you through this process.


  • Citadel: Citadel creates and rotates the certificates and keys used to secure Pod-to-Pod communication.

  • Aspen Mesh Controlplane: The Aspen Mesh controlplane runs in your Kubernetes cluster and evaluates your metrics and cluster configuration to provide insights about the mesh.

  • Aspen Mesh Dashboard: The Aspen Mesh dashboard is the UI for the Aspen Mesh controlplane. You can view real-time and historical metrics and configuration for your mesh and generate configuration to change the behavior of the mesh.

  • Mixer-policy: Istio Mixer is part of the control plane for the service mesh that runs in your cluster. When sidecars need to evaluate policy, they consult the mixer.

  • Mixer Telemetry: Istio Mixer Telemetry collects telemetry from sidecars and exposes it to Prometheus.

  • Pilot: Istio Pilot is part of the control plane for the service mesh that runs in your cluster. Pilot provides an abstraction for the container orchestration environment, in this case Kubernetes. It sends routing config to all the sidecars.

  • Sidecar: The sidecar is a proxy that runs in the same pod as user applications. It is the datapath of the service mesh and handles all traffic between pods in the mesh. Aspen Mesh uses Istio sidecar based on the Envoy proxy.

  • Sidecar Injector: The Sidecar Injector injects dataplane sidecars into CNF Pods.

  • Traffic Claim Enforcer: The Traffic Claim Enforcer prevents configurations of global objects without permission.


Kubernetes Requirements

Mesh Permissions

Aspen Mesh Service Accounts, Roles, and Role Bindings can be found in the following directories in the Aspen Mesh release after the tarball is unpacked.

  • /install/kubernetes/istio/templates/
  • /install/kubernetes/istio-init/templates/

Download Aspen Mesh Releases

Prometheus Requirements

  • Supported Versions: 2.0 - 2.11
  • Prometheus must be configured to auto-discover and scrape new pods in the Kubernetes cluster.
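
Prometheus’s Kubernetes service discovery handles this. As a generic sketch (this is standard Prometheus configuration, not Aspen Mesh’s shipped config; the job name and annotation convention are illustrative), a scrape job that auto-discovers pods might look like:

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod          # discover every pod in the cluster
    relabel_configs:
      # Keep only pods that opt in via the prometheus.io/scrape annotation
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```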

Browser Requirements

  • Supported Browsers:
    • Chrome (Version 73.0+)

Helm and Tiller

You will also need Helm and Tiller to install Aspen Mesh; they manage the lifecycle of the Aspen Mesh release.

These installation steps require that Tiller has permissions to create new namespaces, cluster roles and cluster role bindings, and create new objects like deployments in the kube-system namespace. See Helm’s Securing Installation docs for details. For production environments, we recommend that you follow the Best Practices for Installing Helm and Tiller.

In some non-production environments, it’s appropriate to give Tiller a cluster-admin service account via the manifest found in the expanded Aspen Mesh tarball:

  $ kubectl apply -f install/kubernetes/helm/helm-service-account.yaml

and then initialize Helm with:

  $ helm init --service-account=tiller

Optional Add-ons and Recommendations

If you have Jaeger installed, you have the option of connecting it to Aspen Mesh. The instructions for this are located in the Customizing the Mesh section of the installation steps below.

If you have Grafana installed, you’ll see instructions for configuring some example Grafana dashboards, which can be done post-install.

We recommend that you use automatic sidecar injection, so your existing application deployment configs and tools do not have to be updated to use the mesh. These instructions will show you how to turn on automatic sidecar injection.

We recommend that you configure Aspen Mesh to use mutual TLS authentication (mTLS). With mTLS enabled, there are some restrictions on your workloads such as liveness and readiness probes.
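
For reference, Istio releases of this vintage let you turn on mesh-wide mTLS with a MeshPolicy resource like the sketch below; confirm the API version and any per-service exceptions against your installed release before applying it:

```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
    - mtls: {}   # require mutual TLS for all services in the mesh
```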

Your cluster needs to be able to run containers from container registries on the Internet. The instructions assume that your cluster can connect out to the Internet directly. You can re-host the container images in your internal registry if you’d like.

Installation Steps

  1. Ensure that admission webhooks are enabled in your Kubernetes cluster.

    The method will depend on the way you set up your Kubernetes cluster. Ensure that MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controllers are enabled in the apiserver. They are usually on by default.

    Also ensure that the API is enabled by running the command:

    $ kubectl api-versions | grep admissionregistration

    The output should include:

      admissionregistration.k8s.io/v1beta1

    More details are available here.

  2. Download the Aspen Mesh release for your system that you’ll be installing from.

    The only difference is the istioctl binary, so download the Windows version if you are going to configure the mesh from a Windows system (even though the Kubernetes cluster running the mesh is Linux-based).

  3. Extract the release and change into the directory.

  4. Add the istioctl client to your PATH:
    $ export PATH=$PWD/bin:$PATH
  5. Ensure Helm and Tiller have sufficient permissions to install Aspen Mesh. See the Helm and Tiller section of the Prerequisites for details.

  6. Use helm to install the istio-init chart. This will install all of the required Istio Custom Resource Definitions (CRDs).
    $ helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
    1. Verify that all 23 Istio CRDs were committed to the Kubernetes apiserver:
      $ kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l
  7. Follow the steps in Customizing the Mesh to modify the aspenmesh-values file for the istio helm chart.

  8. Use helm to install the Istio chart, which includes the Aspen Mesh controlplane, Aspen Mesh dashboard and Istio components. Note that a successful install of the istio chart requires that all prerequisites are met, and that the modified values-aspenmesh.yaml values file is used.
    $ helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
      --values install/kubernetes/helm/istio/values-aspenmesh.yaml
    1. Verify that all 24 Istio CRDs and the Aspen Mesh CRDs were committed to the Kubernetes apiserver:
      $ kubectl get crds | grep 'istio.io\|certmanager.k8s.io\|aspenmesh.io' | wc -l


Verify the Installation

  1. Verify that all 24 Istio and Aspen Mesh CRDs were committed to the Kubernetes apiserver:
    $ kubectl get crds | grep 'istio.io\|certmanager.k8s.io\|aspenmesh.io' | wc -l
  2. Check that the Aspen Mesh Istio services and deployments are running:
      $ kubectl -n istio-system get svc,deployment
  NAME                                     TYPE           EXTERNAL-IP   PORT(S)                                  AGE
  service/aspen-mesh-controlplane          ClusterIP      <none>        19001/TCP,19000/TCP,9105/TCP             6m
  service/aspen-mesh-dashboard             ClusterIP      <none>        80/TCP                                   6m
  service/istio-citadel                    ClusterIP      <none>        8060/TCP,15014/TCP                       6m
  service/istio-galley                     ClusterIP      <none>        443/TCP,15014/TCP,9901/TCP               6m
  service/istio-ingressgateway             LoadBalancer   <pending>     15020:30937/TCP,80:31380/TCP, ...        6m
  service/istio-pilot                      ClusterIP      <none>        15010/TCP,15011/TCP,8080/TCP,15014/TCP   6m
  service/istio-policy                     ClusterIP      <none>        9091/TCP,15004/TCP,15014/TCP             6m
  service/istio-sidecar-injector           ClusterIP      <none>        443/TCP                                  6m
  service/istio-telemetry                  ClusterIP      <none>        9091/TCP,15004/TCP,9093/TCP,42422/TCP    6m
  service/traffic-claim-enforcer-webhook   ClusterIP      <none>        443/TCP                                  6m

  NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE
  deployment.extensions/aspen-mesh-controlplane    2/2     2            2           6m
  deployment.extensions/aspen-mesh-dashboard       2/2     2            2           6m
  deployment.extensions/istio-citadel              1/1     1            1           6m
  deployment.extensions/istio-galley               1/1     1            1           6m
  deployment.extensions/istio-ingressgateway       1/1     1            1           6m
  deployment.extensions/istio-pilot                1/1     1            1           6m
  deployment.extensions/istio-policy               1/1     1            1           6m
  deployment.extensions/istio-sidecar-injector     1/1     1            1           6m
  deployment.extensions/istio-telemetry            1/1     1            1           6m
  deployment.extensions/traffic-claim-enforcer     1/1     1            1           6m

Customizing the Mesh

The Aspen Mesh installation requires a few customizations, all of which can be configured by modifying the Aspen Mesh values file for the istio chart (install/kubernetes/helm/istio/values-aspenmesh.yaml):

  1. Modify the following aspen-mesh-controlplane values in the Aspen Mesh values file:
    • The userAuth type can be set to jwt or none.

    If userAuth.type is set to jwt, the following parameters should be used for configuration:

    • userAuth.jwt.jwks (Required) - The URL for retrieving the JSON Web Keys (JWKs) used for validating JSON Web Tokens (JWTs).
    • userAuth.jwt.redirectUrl (Required) - The URL that users should be redirected to if they’re not logged in.
    • (Optional) - A comma-delimited list of claims used for validating JWTs. Example: “,role=k8s.admin”. Defaults to “”.

    If userAuth.type is set to none, the userAuth.jwt.* fields should be commented out or removed.

    • The prometheusUrl must be set to the URL of your Prometheus service. The URL typically follows the format:

      http://<prometheus-service-name>.<namespace>.svc.cluster.local:<port>

    • The clusterId is set to ‘demo’ by default, but can optionally be set to a name of your choosing.

  2. In the same Aspen Mesh values file, modify the following global values (optional):
    • The tracer.zipkin.address field can be set to the address of the cluster’s Jaeger tracing service. This allows traces from Aspen Mesh to be sent to Jaeger.
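
Taken together, an edited values-aspenmesh.yaml fragment covering the settings above might look like the sketch below. The key nesting and every URL shown are illustrative placeholders; follow the structure of the values file shipped in your release:

```yaml
aspen-mesh-controlplane:
  clusterId: my-cluster                    # defaults to 'demo'
  prometheusUrl: http://prometheus.istio-system.svc.cluster.local:9090   # placeholder
  userAuth:
    type: jwt                              # or 'none'
    jwt:
      jwks: https://auth.example.com/.well-known/jwks.json   # placeholder
      redirectUrl: https://auth.example.com/login            # placeholder
global:
  tracer:
    zipkin:
      address: jaeger-collector.jaeger-system:9411   # placeholder; only if Jaeger is installed
```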

Managing sidecar injection

Sidecar injection is managed on a per-namespace basis. The default is to not inject.

To enable or disable injections on a namespace, set the istio-injection label on the namespace to enabled or disabled. When there is no label, the default is disabled.

The change does not affect existing pods, so they must be recreated for it to take effect. If the pods are managed by a controller such as a Deployment, they can simply be deleted; the controller will recreate them.

For example, to enable sidecar injection in the default namespace:

$ kubectl label --overwrite namespace default istio-injection=enabled

To see what namespaces have injection enabled, run the command:

$ kubectl get namespace -L istio-injection
NAME                               STATUS    AGE       ISTIO-INJECTION
default                            Active    1d        enabled
istio-system                       Active    10m       disabled
kube-public                        Active    1d
kube-system                        Active    1d

Grafana Dashboards

Aspen Mesh provides Grafana dashboards, which can be found in the release tarball using the following path: install/kubernetes/helm/istio/charts/grafana/dashboards.

Your Grafana service will need access to the same Prometheus service that is associated with Aspen Mesh for the dashboards to receive data.

Application Requirements

To be part of the service mesh, pods and services must satisfy the following requirements:

  • Service ports must be named <protocol>[-<suffix>]

      Protocol   Port Name   Port Name w/ Suffix
      HTTP       http        http-<suffix>
      HTTP2      http2       http2-<suffix>
      HTTPS      https       https-<suffix>
      TLS        tls         tls-<suffix>
      GRPC       grpc        grpc-<suffix>
      TCP        tcp         tcp-<suffix>
      UDP        udp         udp-<suffix>
      Mongo      mongo       mongo-<suffix>
      MySQL      mysql       mysql-<suffix>
      Redis      redis       redis-<suffix>


  apiVersion: v1
  kind: Service
  metadata:
    name: example-api
    namespace: default
    labels:
      app: example-api
  spec:
    ports:
    - port: 1080
      name: http-api
      targetPort: http-api
    - port: 1090
      name: grpc-api
      targetPort: grpc-api
    selector:
      app: example-api
  • Pods must include an explicit list of ports each container listens on.


    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: example-api
      namespace: default
    spec:
      template:
        metadata:
          labels:
            app: example-api
        spec:
          containers:
          - name: example-api
            ports:
            - containerPort: 1080
              name: http-api
            - containerPort: 1090
              name: grpc-api

    WARNING! - Any unlisted ports will bypass the proxy and all associated mesh policies.

  • Applications must not run as a user with a UID of 1337.
  • Pods must belong to at least one Kubernetes Service, regardless of whether the Service exposes a port.
  • Pods belonging to more than one Service must not use the same port number for different protocols.
  • Pods must allow the NET_ADMIN capability if pod security policies are enforced in your cluster.


    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: example-api
      namespace: default
    spec:
      template:
        metadata:
          labels:
            app: example-api
        spec:
          containers:
          - name: example-api
            securityContext:
              capabilities:
                add: ["NET_ADMIN"]
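
The port-naming rule in requirement 1 can be sanity-checked before deploying. The helper below is hypothetical (not part of Aspen Mesh or istioctl), but mirrors the table of recognized protocol prefixes:

```shell
# Hypothetical helper (not part of istioctl): checks whether a Service
# port name follows Istio's <protocol>[-<suffix>] convention.
is_valid_port_name() {
  case "$1" in
    http|http2|https|tls|grpc|tcp|udp|mongo|mysql|redis) return 0 ;;
    http-*|http2-*|https-*|tls-*|grpc-*|tcp-*|udp-*|mongo-*|mysql-*|redis-*) return 0 ;;
    *) return 1 ;;
  esac
}

is_valid_port_name "http-api" && echo "http-api: ok"
is_valid_port_name "api-http" || echo "api-http: not recognized by the mesh"
```

A name like api-http fails because the protocol must come first; traffic on such a port is treated as plain TCP at best.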

2 - TCP headless services must not use any of the following restricted ports utilized by Istio.

Port    Protocol   Used by                  Description
8060    HTTP       Citadel                  GRPC server
9091    HTTP       Mixer                    Policy/Telemetry
9093    HTTP       Citadel
15000   TCP        Envoy                    Envoy admin port (commands/diagnostics)
15001   TCP        Envoy                    Envoy
15004   HTTP       Mixer, Pilot             Policy/Telemetry - mTLS
15010   HTTP       Pilot                    Pilot service - XDS pilot - discovery
15011   TCP        Pilot                    Pilot service - mTLS - Proxy - discovery
15014   HTTP       Citadel, Mixer, Pilot    Control plane monitoring
15090   HTTP       Mixer                    Proxy
42422   TCP        Mixer                    Telemetry - Prometheus

3 - To utilize tracing, applications must forward the following headers from incoming requests to outgoing requests:

  • x-request-id
  • x-b3-traceid
  • x-b3-spanid
  • x-b3-parentspanid
  • x-b3-sampled
  • x-b3-flags
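
As an illustration of the forwarding rule (the function and the header-file format below are hypothetical, for sketching only), an application written as a shell script might propagate the context like this:

```shell
# Sketch only: TRACE_HEADERS lists the headers that tracing needs
# propagated. forward_args reads an incoming request's headers from a
# file of "Name: value" lines and emits curl -H arguments for the
# outgoing request.
TRACE_HEADERS="x-request-id x-b3-traceid x-b3-spanid x-b3-parentspanid x-b3-sampled x-b3-flags"

forward_args() {
  in_file="$1"
  for h in $TRACE_HEADERS; do
    # Case-insensitive header match; keep everything after "Name: "
    v=$(grep -i "^$h:" "$in_file" | head -n 1 | sed 's/^[^:]*: *//')
    if [ -n "$v" ]; then
      # Note: values containing spaces would need extra quoting.
      printf -- '-H %s:%s ' "$h" "$v"
    fi
  done
}
```

An outgoing call could then carry the context with, for example, curl $(forward_args incoming-headers.txt) http://reviews:9080/.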


Accessing the Dashboard

The Aspen Mesh dashboard is accessible from the aspen-mesh-controlplane service in the istio-system namespace in your cluster, on port 19001. For example, a URL for accessing the dashboard in-cluster would be: http://aspen-mesh-controlplane.istio-system.svc.cluster.local:19001. To access the dashboard as a user outside the cluster, you have these options:

  1. Port forward: For initial testing or lab environments, it may be suitable to just port forward and access it from your desktop client.

     kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=aspen-mesh-controlplane -o jsonpath='{.items[0].metadata.name}') 19001
     open http://localhost:19001/  # or point your browser at http://localhost:19001/
  2. Platform-specific: Your cloud or Kubernetes platform may already have a platform-specific technique for exposing services. For instance, you may expose services by defining Ingress resources. Consult your platform-specific documentation for this. If your platform will expose this on an untrusted network, please ensure your platform provides an authentication proxy.

    You will need to expose the aspen-mesh-controlplane service in the istio-system namespace, port 19001. For example, an Ingress specification may look like:

     kubectl apply -f - <<EOF
     apiVersion: extensions/v1beta1
     kind: Ingress
     metadata:
       name: aspen-mesh-ingress
       namespace: istio-system
       annotations:
         # Platform-specific annotations
         # We recommend enabling TLS
     spec:
       rules:
       - host:
         http:
           paths:
           - backend:
               serviceName: aspen-mesh-controlplane
               servicePort: 19001
     EOF
  3. Service type:LoadBalancer: Your cloud or Kubernetes platform may create an external load balancer for you if you declare a service with Type: LoadBalancer. If your platform will expose this load balancer on an untrusted network, please ensure your platform provides an authentication proxy.

    You will need to create a service with type: LoadBalancer like this:

     kubectl apply -f - <<EOF
     apiVersion: v1
     kind: Service
     metadata:
       name: aspen-mesh-controlplane-external
       namespace: istio-system
       annotations:
         # Platform-specific annotations
         # We recommend enabling TLS
       labels:
         app: aspen-mesh-controlplane
     spec:
       ports:
       - name: http
         port: 19001
         protocol: TCP
         targetPort: http
       - name: grpc
         port: 19000
         protocol: TCP
         targetPort: grpc
       selector:
         app: aspen-mesh-controlplane
       type: LoadBalancer
     EOF
  4. Sign In with GitHub (OAuth2 Proxy with TLS): If you do not already have an authentication system, Eupraxia Labs can provide one with our XtremeCloud Single Sign-On (SSO) product.

(Optional) Deploying a sample application inside the service mesh

Only perform these steps if you wish to demo Aspen Mesh. The Kubernetes cluster should not be a production Kubernetes cluster.

When you first access the dashboard, there may be no services in the Istio service mesh yet. In that case, the dashboard will display a hexagon icon with your deployed clusterId, and clicking on it will show a red error at the top stating “There was an Error Fetching your Service graph.”

To install a microservice-based demo application inside your service mesh:

  1. Enable the istio-injection=enabled label for the default namespace.
     $ kubectl label --overwrite namespace default istio-injection=enabled
  2. Install the bookinfo application using a manifest.
     $ kubectl apply -f ./samples/bookinfo/platform/kube/bookinfo.yaml
  3. Install the traffic-generator service, which simulates traffic to the productpage service.
     $ kubectl apply -f ./samples/aspenmesh/bookinfo-traffic-generator.yaml
  4. After a few minutes, refresh the dashboard in your browser. You should be able to browse the Aspen Mesh dashboard and see a service graph for the bookinfo application deployed inside your Istio service mesh.


Uninstall Aspen Mesh

To uninstall Aspen Mesh, you’ll need to delete the istio-init and istio Helm releases. Using helm delete --purge allows you to reuse the same release names in the future.

$ helm delete --purge istio
$ helm delete --purge istio-init


Upgrade Aspen Mesh

If you installed Aspen Mesh via Helm (helm install) and you wish to upgrade to a newer version of Aspen Mesh, execute the following steps:

  1. Backup your Aspen Mesh configuration:
    $ kubectl get crds | grep 'istio.io\|certmanager.k8s.io\|aspenmesh.io' | cut -f1-1 -d "." | \
        xargs -n1 -I{} sh -c "kubectl get --all-namespaces -o yaml {}; echo ---" > $HOME/ASPEN_MESH_CONFIG_BACKUP.yaml
  2. Upgrade the istio-init chart to update all the Istio CRDs:
    $ helm upgrade --install --force istio-init install/kubernetes/helm/istio-init --namespace istio-system
  3. Check that all the CRD creation jobs completed successfully to verify that the Kubernetes API server received all the CRDs:
     $ kubectl get job --namespace istio-system | grep istio-init-crd
    1. Verify that all 24 Istio and Aspen Mesh CRDs were committed to the Kubernetes apiserver:
      $ kubectl get crds | grep 'istio.io\|certmanager.k8s.io\|aspenmesh.io' | wc -l
  4. Follow the steps in Customizing the Mesh to modify the aspenmesh-values file for the istio helm chart.

  5. Upgrade the istio chart:
    $ helm upgrade istio install/kubernetes/helm/istio --namespace istio-system \
      --values install/kubernetes/helm/istio/values-aspenmesh.yaml

Post-upgrade Tasks

After upgrading, some applications will still be using an older sidecar. It is very important to upgrade sidecars immediately after upgrading. To upgrade the sidecars, you will need to re-inject them using one of the following methods:

  1. If you’re using automatic sidecar injection, you can upgrade sidecars by doing a rolling update for all the pods (deleting one pod at a time for a deployment to reduce downtime), so that the new version of the sidecar will be automatically re-injected.

  2. If you’re using manual injection, you can upgrade sidecars by executing the following for each deployment:
    $ kubectl apply -f <(istioctl kube-inject -f $ORIGINAL_DEPLOYMENT_YAML)
  3. If sidecars were previously injected with some customized inject configuration files, you will need to change the version tag in the configuration files to the new version and re-inject sidecars as follows:
    $ kubectl apply -f <(istioctl kube-inject \
        --injectConfigFile inject-config.yaml \
        --filename $ORIGINAL_DEPLOYMENT_YAML)