Hybrid runtime requirements
The requirements listed are the minimum requirements to provision hybrid runtimes in the Codefresh platform.
Hosted runtimes are managed by Codefresh. To provision a hosted runtime as part of Hosted GitOps setup, see Provision a hosted runtime in Set up a hosted (Hosted GitOps) environment.
In the documentation, Kubernetes and K8s are used interchangeably.
Kubernetes cluster requirements
This section lists cluster requirements.
Cluster version
Kubernetes cluster, server version 1.18 and higher, without Argo Project components.
Tip: To check the server version, run `kubectl version --short`.
Ingress controller
Configure your Kubernetes cluster with an ingress controller component that is exposed from the cluster.
Supported ingress controllers
| Supported Ingress Controller | Reference |
|---|---|
| Ambassador | Ambassador ingress controller documentation |
| ALB (AWS Application Load Balancer) | AWS ALB ingress controller documentation |
| NGINX Enterprise (`nginx.org/ingress-controller`) | NGINX Ingress Controller documentation |
| NGINX Community (`k8s.io/ingress-nginx`) | Provider-specific configuration in this article |
| Istio | Istio Kubernetes ingress documentation |
| Traefik | Traefik Kubernetes ingress documentation |
Ingress controller requirements
- **Valid external IP address**
  Run `kubectl get svc -A` to get a list of services, and verify that the `EXTERNAL-IP` column for your ingress controller shows a valid hostname.
- **Valid SSL certificate**
  For secure runtime installation, the ingress controller must have a valid SSL certificate from an authorized CA (Certificate Authority).
- **AWS ALB**
  In the ingress resource file, verify that `spec.controller` is configured as `ingress.k8s.aws/alb`:

  ```yaml
  apiVersion: networking.k8s.io/v1
  kind: IngressClass
  metadata:
    name: alb
  spec:
    controller: ingress.k8s.aws/alb
  ```
- **Report status**
  The ingress controller must be configured to report its status. Otherwise, Argo's health check reports the health status as "progressing", resulting in a timeout error during installation.
  By default, NGINX Enterprise and Traefik ingress are not configured to report status. For details on configuration settings, see the following sections in this article:
  - NGINX Enterprise ingress configuration
  - Traefik ingress configuration
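The SSL certificate requirement above can be spot-checked with openssl. This is a sketch, not a Codefresh command: the hostname is a placeholder for your ingress controller's external address.

```shell
# Placeholder -- replace with your ingress controller's external hostname.
INGRESS_HOST="runtime.example.com"

# Print the issuer and validity window of the certificate served on port 443.
# A self-signed issuer, or a notAfter date in the past, will fail secure
# runtime installation.
echo | openssl s_client -connect "${INGRESS_HOST}:443" -servername "${INGRESS_HOST}" 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```

The `-servername` flag matters when the controller serves different certificates per host via SNI.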
NGINX Enterprise version ingress configuration
The Enterprise version of NGINX (`nginx.org/ingress-controller`), both with and without the Ingress Operator, must be configured to report the status of the ingress controller.
Installation with NGINX Ingress
- Pass the `-report-ingress-status` argument to the `deployment`:

  ```yaml
  spec:
    containers:
      - args:
          - -report-ingress-status
  ```
Installation with NGINX Ingress Operator
- Add this to the `Nginxingresscontrollers` resource file:

  ```yaml
  ...
  spec:
    reportIngressStatus:
      enable: true
  ...
  ```
- Make sure you have a certificate secret in the same namespace as the runtime. Copy an existing secret if you don't have one.
  You will need to add this to the `ingress-master` when you have completed runtime installation.
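Copying an existing certificate secret into the runtime namespace can be sketched as below; the secret and namespace names are placeholders for your own.

```shell
# Placeholders -- replace with your secret name and namespaces.
SECRET_NAME="my-tls-cert"
SOURCE_NS="default"
RUNTIME_NS="codefresh"

# Export the secret, strip the per-object fields that a re-create rejects,
# and apply it into the runtime namespace.
kubectl get secret "${SECRET_NAME}" -n "${SOURCE_NS}" -o yaml \
  | grep -vE '^[[:space:]]+(namespace|resourceVersion|uid|creationTimestamp):' \
  | kubectl apply -n "${RUNTIME_NS}" -f -
```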
NGINX Community version provider-specific ingress configuration
Codefresh has been tested on, and is supported in, the major providers. For your convenience, here are provider-specific configuration instructions, both for supported and untested providers.
The instructions are valid for `k8s.io/ingress-nginx`, the community version of NGINX.
AWS
- Apply:

  ```shell
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/aws/deploy.yaml
  ```
- Verify a valid external address exists:

  ```shell
  kubectl get svc ingress-nginx-controller -n ingress-nginx
  ```
Azure (AKS)
- Apply:

  ```shell
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
  ```
- Verify a valid external address exists:

  ```shell
  kubectl get svc ingress-nginx-controller -n ingress-nginx
  ```
Bare Metal Clusters
- Apply:

  ```shell
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/baremetal/deploy.yaml
  ```
- Verify a valid external address exists:

  ```shell
  kubectl get svc ingress-nginx-controller -n ingress-nginx
  ```
Digital Ocean
- Apply:

  ```shell
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/do/deploy.yaml
  ```
- Verify a valid external address exists:

  ```shell
  kubectl get svc ingress-nginx-controller -n ingress-nginx
  ```
Docker Desktop
- Apply:

  ```shell
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
  ```
- Verify a valid external address exists:

  ```shell
  kubectl get svc ingress-nginx-controller -n ingress-nginx
  ```
Note: By default, Docker Desktop services will provision with localhost as their external address. Triggers in delivery pipelines cannot reach this instance unless they originate from the same machine where Docker Desktop is being used.
Exoscale
- Apply:

  ```shell
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml
  ```
- Verify a valid external address exists:

  ```shell
  kubectl get svc ingress-nginx-controller -n ingress-nginx
  ```
Google (GKE)
Add firewall rules
GKE by default limits outbound requests from nodes. For the runtime to communicate with the Codefresh control plane, add a firewall-specific rule.
- Find your cluster's network:

  ```shell
  gcloud container clusters describe [CLUSTER_NAME] --format="get(network)"
  ```
- Get the cluster's IPv4 CIDR:

  ```shell
  gcloud container clusters describe [CLUSTER_NAME] --format="get(clusterIpv4Cidr)"
  ```
- Replace `[CLUSTER_NAME]`, `[NETWORK]`, and `[CLUSTER_IPV4_CIDR]` with the relevant values, and run:

  ```shell
  gcloud compute firewall-rules create "[CLUSTER_NAME]-to-all-vms-on-network" \
    --network="[NETWORK]" \
    --source-ranges="[CLUSTER_IPV4_CIDR]" \
    --allow=tcp,udp,icmp,esp,ah,sctp
  ```
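The lookups and the rule creation above can be strung together in one script; a sketch, assuming `gcloud` is authenticated, with the cluster name as a placeholder:

```shell
# Placeholder -- replace with your GKE cluster name.
CLUSTER_NAME="my-cluster"

# Look up the cluster's network and pod CIDR, then open the firewall so
# nodes can reach all VMs on the network.
NETWORK=$(gcloud container clusters describe "${CLUSTER_NAME}" --format="get(network)")
CLUSTER_IPV4_CIDR=$(gcloud container clusters describe "${CLUSTER_NAME}" --format="get(clusterIpv4Cidr)")

gcloud compute firewall-rules create "${CLUSTER_NAME}-to-all-vms-on-network" \
  --network="${NETWORK}" \
  --source-ranges="${CLUSTER_IPV4_CIDR}" \
  --allow=tcp,udp,icmp,esp,ah,sctp
```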
Use ingress-nginx
- Create a `cluster-admin` role binding:

  ```shell
  kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin \
    --user $(gcloud config get-value account)
  ```
- Apply:

  ```shell
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
  ```
- Verify a valid external address exists:

  ```shell
  kubectl get svc ingress-nginx-controller -n ingress-nginx
  ```
MicroK8s
- Install using the MicroK8s addon system:

  ```shell
  microk8s enable ingress
  ```
- Verify a valid external address exists:

  ```shell
  kubectl get svc ingress-nginx-controller -n ingress-nginx
  ```
MiniKube
- Install using the MiniKube addon system:

  ```shell
  minikube addons enable ingress
  ```
- Verify a valid external address exists:

  ```shell
  kubectl get svc ingress-nginx-controller -n ingress-nginx
  ```
Oracle Cloud Infrastructure
- Apply:

  ```shell
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
  ```
- Verify a valid external address exists:

  ```shell
  kubectl get svc ingress-nginx-controller -n ingress-nginx
  ```
Scaleway
- Apply:

  ```shell
  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/scw/deploy.yaml
  ```
- Verify a valid external address exists:

  ```shell
  kubectl get svc ingress-nginx-controller -n ingress-nginx
  ```
Traefik ingress configuration
To enable the Traefik ingress controller to report its status, add `publishedService` to `providers.kubernetesIngress.ingressEndpoint`.
The value must be in the format `"<namespace>/<service-name>"`, where `<service-name>` is the Traefik service from which to copy the status.

```yaml
...
providers:
  kubernetesIngress:
    ingressEndpoint:
      publishedService: "<namespace>/<traefik-service>" # Example: "codefresh/traefik-default"
...
```
Node requirements
- Memory: 5000 MB
- CPU: 2
Runtime namespace permissions for resources
| Resource | Permissions Required |
|---|---|
| `ServiceAccount` | Create, Delete |
| `ConfigMap` | Create, Update, Delete |
| `Service` | Create, Update, Delete |
| `Role` | In group `rbac.authorization.k8s.io`: Create, Update, Delete |
| `RoleBinding` | In group `rbac.authorization.k8s.io`: Create, Update, Delete |
| `persistentvolumeclaims` | Create, Update, Delete |
| `pods` | Create, Update, Delete |
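The grants in the table can be sanity-checked before installation with `kubectl auth can-i`. A sketch that spot-checks a few of the verbs; the namespace is a placeholder for your runtime namespace:

```shell
# Placeholder -- replace with your runtime namespace.
NS="codefresh"

# Spot-check the required verbs from the table above; each line prints
# "yes" or "no" for the current kubectl context.
for check in \
  "create serviceaccounts" \
  "delete serviceaccounts" \
  "update configmaps" \
  "create roles.rbac.authorization.k8s.io" \
  "update rolebindings.rbac.authorization.k8s.io" \
  "create persistentvolumeclaims" \
  "update pods"
do
  printf '%s: ' "${check}"
  kubectl auth can-i ${check} -n "${NS}"
done
```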
Git repository requirements
This section lists the requirements for Git installation repositories.
Git installation repo
If you are using an existing repo, make sure it is empty.
Git access tokens
Codefresh requires two access tokens: one for runtime installation, and a second, personal token for each user, to authenticate Git-based actions in Codefresh.
Git runtime token
The Git runtime token is mandatory for runtime installation.
The token must have:
- A valid expiration date: the default is 30 days
- Valid scopes: `repo` and `admin-repo.hook`
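For GitHub, one way to confirm the token carries the required scopes is to read the `X-OAuth-Scopes` response header from the API. A sketch for classic personal access tokens; the token value is a placeholder:

```shell
# Placeholder -- replace with your Git runtime token.
GIT_TOKEN="ghp_xxxxxxxxxxxx"

# GitHub reports a classic token's granted scopes in the X-OAuth-Scopes
# header; verify the list includes the scopes required in this section.
curl -s -I -H "Authorization: token ${GIT_TOKEN}" https://api.github.com/user \
  | grep -i '^x-oauth-scopes:'
```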
Git user token for Git-based actions
The Git user token is the user’s personal token and is unique to every user. It is used to authenticate every Git-based action of the user in Codefresh. You can add the Git user token at any time from the UI.
The token must have:
- A valid expiration date: the default is 30 days
- A valid scope: `repo`
For detailed information on GitHub tokens, see Creating a personal access token.