Administration and Configuration of XtremeCloud Single Sign-On (SSO)
Note: Based on feedback we received, this documentation section is undergoing major revisions, and the complete documentation will be re-released shortly. Some partial documentation remains in place. Please check back soon. — Docs Site Admin
Introduction
This documentation provides guidance on configuring the Helm Charts needed to deploy XtremeCloud SSO. If you are using a Continuous Integration/Continuous Deployment (CI/CD) provider (like Codefresh), you can configure the Helm Charts on a git feature or topic branch and merge the changes (Merge Request/Pull Request) into a master branch. We like to say that CI is a feature or topic branch merge into the master branch, and CD is using a CD provider to deploy the solution to a Kubernetes Cluster; the Continuous Deployment (CD) is optionally triggered by the CI event. That Kubernetes Cluster can be one of the classic Dev, Staging, or Production clusters, or some variation of that topology.
We will also provide additional details on how you might use a CI/CD provider (like Codefresh) to accelerate your deployment process.
Prerequisites
Prior to the deployment of XtremeCloud Single Sign-On (SSO), XtremeCloud Data Grid-web (your cross-cloud grid) must be routing successfully between your selected CSPs, or the XtremeCloud SSO pods in the first CSP will not start.
In a later section of this document, we will detail the actual deployment sequencing of the XtremeCloud services. Briefly, XtremeCloud Data Grid-web Kubernetes pods on the first CSP are deployed followed by XtremeCloud Data Grid-web Kubernetes pods on the second CSP. After it is determined that the cross-cloud grid is up and routing, XtremeCloud SSO pods are deployed on the first CSP followed by the deployment of XtremeCloud SSO pods to a Kubernetes cluster on the second CSP.
Deployment to Two (2) Cloud Service Providers (CSP)
A key decision point is the choice of two (2) CSPs to form your two-way XtremeCloud Service Mesh. We provide you with Helm Charts for all CSPs that we support, so this choice can be deferred until you are ready to begin your deployment process. Within our licensing agreements, you have the option to change out CSPs at your discretion.
The only limitation is that only two (2) CSPs can be used at any point in time as per our licensing agreements. In other words, your Active-Active deployment is a single tenant implementation of XtremeCloud SSO between two (2) CSPs only.
We recommend that you adopt our naming convention for the CSPs. It will make it simpler to refer to a site number during troubleshooting and log reviews and also to file a Support Ticket on our Customer Portal.
Choice of Database Technology
Another key decision point is the selection of the underlying relational database (DB) for XtremeCloud Single Sign-On (SSO). Most organizations have decided on a specific database vendor and will usually extend their database to include the schema associated with XtremeCloud Single Sign-On (SSO).
The same database version must be installed at each CSP. Version differences can lead to inconsistencies in the replication of data and the Eupraxia Labs Support Team will be unable to assist with a problem resolution.
The choices for the relational database are available in the XtremeCloud Applications Certification Matrix.
Deploying with Helm Charts
The table below lists the configurable parameters of the XtremeCloud SSO chart and their default values. We will not exhaustively cover every parameter; however, the essential ones are covered well.
Although many defaults are supplied, it is expected that you will review these values in detail and provide your own.
For example, it is likely that you will have a private Docker Image Registry that you will be using. It may or may not be at the same Cloud Service Provider (CSP) to which you are deploying XtremeCloud SSO. You may be providing your own image pull secrets to pull our XtremeCloud SSO image from your private registry.
We list the Quay.io Docker registry as the default in the parameter xcsso.image.repository. We will provide pull secrets to that registry for subscribed customers. XtremeCloud applications ship as Docker images, which are downloaded with pull secrets from the Eupraxia Labs Image Catalog (ELIC). The ELIC is located at Quay.io.
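As a sketch of how such a pull secret might be referenced in values.yaml (the secret name eupraxia-quay-pull is illustrative, not part of the product):

```yaml
xcsso:
  image:
    repository: quay.io/eupraxialabs/xtremecloud-sso
    # Hypothetical secret name; it would first be created with, e.g.:
    #   kubectl create secret docker-registry eupraxia-quay-pull \
    #     --docker-server=quay.io --docker-username=<user> --docker-password=<token>
    pullSecrets:
      - eupraxia-quay-pull
```

Depending on how the chart templates the list, entries may be plain secret names or `name:` maps; check the chart's values.yaml for the expected shape.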
Eupraxia Labs provides the Kubernetes artifacts (resource manifests) and Helm Charts required to manage these containers, as part of an orchestrated stack on certified Kubernetes Clusters.
Parameter | Description | Default
---|---|---
`init.image.repository` | Init image repository | `alpine`
`init.image.tag` | Init image tag | `3.8`
`init.image.pullPolicy` | Init image pull policy | `IfNotPresent`
`clusterDomain` | The internal Kubernetes cluster domain | `cluster.local`
`xcsso.replicas` | The number of XtremeCloud SSO replicas | `1`
`xcsso.image.repository` | The XtremeCloud SSO image repository | `quay.io/eupraxialabs/xtremecloud-sso`
`xcsso.image.tag` | The XtremeCloud SSO image tag | `3.0.0-gcp`
`xcsso.image.pullPolicy` | The XtremeCloud SSO image pull policy | `IfNotPresent`
`xcsso.image.pullSecrets` | Image pull secrets | `[]`
`xcsso.ingress.hosts` | FQDN for the Ingress load balancer with TLS/SSL termination | `""`
`xcsso.proxyAddressForwarding` | XtremeCloud SSO is always behind an HTTP proxy | `true`
`xcsso.xcdgNamespace` | XtremeCloud Data Grid-web Kubernetes namespace | `dev`
`xcsso.operatingMode` | XtremeCloud SSO mode; pods are clustered for scalability | `clustered`
`xcsso.httpPort` | XtremeCloud SSO listening port behind the reverse proxy | `8080`
`xcsso.jgroupsPort` | Unicast peer discovery in HA clusters using TCP | `7600`
`xcsso.jgroupsPortTCPfd` | Used for HA failure detection over TCP | `57600`
`xcsso.wildflyManagementPort` | WildFly server web management port | `9990`
`xcsso.kubePingPort` | Deprecated; OpenShift KUBE_PING port | `47600`
`xcsso.basepath` | Path where XtremeCloud SSO is hosted | `auth`
`xcsso.username` | Username for the initial XtremeCloud SSO admin user | `admin`
`xcsso.password` | Password for the initial XtremeCloud SSO admin user (if `xcsso.existingSecret=""`). If not set, a random 10-character password is created | `""`
`xcsso.existingSecret` | Specifies an existing secret to be used for the admin password | `""`
`xcsso.existingSecretKey` | The key in `xcsso.existingSecret` that stores the admin password | `password`
`xcsso.extraInitContainers` | Additional init containers, e.g. for providing the XtremeCloud SSO GCP theme. Passed through the `tpl` function and thus to be configured as a string | `""`
`xcsso.extraContainers` | Additional sidecar containers, e.g. for a database proxy such as Google’s `cloudsql-proxy`. Passed through the `tpl` function and thus to be configured as a string | `""`
`xcsso.extraEnv` | Allows the specification of additional environment variables for XtremeCloud SSO. Passed through the `tpl` function and thus to be configured as a string | `PROXY_ADDRESS_FORWARDING="true"`
`xcsso.extraVolumeMounts` | Additional volume mounts, e.g. for a custom CSP theme. Passed through the `tpl` function and thus to be configured as a string | `""`
`xcsso.extraVolumes` | Additional volumes, e.g. for custom themes. Passed through the `tpl` function and thus to be configured as a string | `""`
`xcsso.extraPorts` | Additional ports, e.g. for a custom admin console port. Passed through the `tpl` function and thus to be configured as a string | `""`
`xcsso.podDisruptionBudget` | Pod disruption budget | `{}`
`xcsso.priorityClassName` | Pod priority class name | `{}`
`xcsso.resources` | Pod resource requests and limits | `{}`
`xcsso.affinity` | Pod affinity. Passed through the `tpl` function and thus to be configured as a string | Hard node and soft zone anti-affinity
`xcsso.nodeSelector` | Node labels for pod assignment | `{}`
`xcsso.tolerations` | Node taints to tolerate | `[]`
`xcsso.podLabels` | Extra labels to add to the pod | `{}`
`xcsso.podAnnotations` | Extra annotations to add to the pod | `{}`
`xcsso.hostAliases` | Mapping between IPs and hostnames that will be injected as entries in the pod’s hosts file | `[]`
`xcsso.enableServiceLinks` | Indicates whether information about services should be injected into the pod’s environment variables, matching the syntax of Docker links | `false`
`xcsso.serviceAccount.create` | If `true`, a new service account is created | `false`
`xcsso.securityContext` | Security context for the entire pod. Every container running in the pod inherits this security context. This may be relevant when other components of the environment inject additional containers into running pods (service meshes are the most prominent example) | `{fsGroup: 1000}`
`xcsso.containerSecurityContext` | Security context for containers running in the pod. Not inherited by additionally injected containers | `{runAsUser: 1000, runAsNonRoot: true}`
`xcsso.preStartScript` | Custom script to run before XtremeCloud SSO starts up | `""`
`xcsso.lifecycleHooks` | Container lifecycle hooks. Passed through the `tpl` function and thus to be configured as a string | `""`
`xcsso.extraArgs` | Additional arguments to the start command | `""`
`xcsso.livenessProbe.initialDelaySeconds` | Liveness probe `initialDelaySeconds` | `120`
`xcsso.livenessProbe.timeoutSeconds` | Liveness probe `timeoutSeconds` | `5`
`xcsso.readinessProbe.initialDelaySeconds` | Readiness probe `initialDelaySeconds` | `30`
`xcsso.readinessProbe.timeoutSeconds` | Readiness probe `timeoutSeconds` | `1`
`xcsso.cli.nodeIdentifier` | WildFly CLI script for setting the node identifier | See `values.yaml`
`xcsso.cli.logging` | WildFly CLI script for logging configuration | See `values.yaml`
`xcsso.cli.ha` | Settings for HA setups | See `values.yaml`
`xcsso.cli.custom` | Additional custom WildFly CLI script | `""`
`xcsso.service.annotations` | Annotations for the XtremeCloud SSO service | `{}`
`xcsso.service.labels` | Additional labels for the XtremeCloud SSO service | `{}`
`xcsso.service.type` | The service type | `ClusterIP`
`xcsso.service.port` | The service port | `80`
`xcsso.service.nodePort` | The node port used if the service is of type `NodePort` | `""`
`xcsso.ingress.enabled` | If `true`, an Ingress is created | `false`
`xcsso.ingress.annotations` | Annotations for the Ingress | `{}`
`xcsso.ingress.labels` | Additional labels for the XtremeCloud SSO Ingress | `{}`
`xcsso.ingress.path` | Path for the Ingress | `/`
`xcsso.ingress.hosts` | A list of Ingress hosts | `[sso.example.com]`
`xcsso.ingress.tls` | A list of IngressTLS items | `[]`
`xcsso.persistence.dbVendor` | One of `oracle`, `h2`, `postgres`, `mysql`, or `mariadb` (if `deployPostgres=false`) | `oracle`
`xcsso.persistence.dbName` | The name of the database to connect to (if `deployPostgres=false`) | `keycloak`
`xcsso.persistence.dbHost` | The database host name (if `deployPostgres=false`) | `mykeycloak`
`xcsso.persistence.dbPort` | The database host port (if `deployPostgres=false`) | `1521`
`xcsso.persistence.dbUser` | The database user (if `deployPostgres=false`) | `keycloak`
`xcsso.persistence.dbPassword` | The database password (if `deployPostgres=false`) | `""`
`test.enabled` | If `true`, test pods get scheduled | `true`
`test.image.repository` | Test image repository | `unguiculus/docker-python3-phantomjs-selenium`
`test.image.tag` | Test image tag | `v1`
`test.image.pullPolicy` | Test image pull policy | `IfNotPresent`
`test.securityContext` | Security context for the test pod. Every container running in the pod inherits this security context. This may be relevant when other components of the environment inject additional containers into the running pod (service meshes are the most prominent example) | `{fsGroup: 1000}`
`test.containerSecurityContext` | Security context for containers running in the test pod. Not inherited by additionally injected containers | `{runAsUser: 1000, runAsNonRoot: true}`
If you are not deploying using a CI/CD provider, specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
Alternatively, a YAML file that specifies the values for the parameters can be provided when installing the chart. For example:

```shell
$ helm install --name sso-dev -f values.yaml eupraxialabs/xtremecloud-sso-gcp
```
As previously stated, desired values are edited in the Helm Chart values.yaml file. As can be seen here, it is very easy to edit the values in the Codefresh user interface.
Usage of the tpl Function
The `tpl` function allows us to pass string values from values.yaml through the templating engine. It is used for the following values:
xcsso.extraInitContainers
xcsso.extraContainers
xcsso.extraEnv
xcsso.affinity
xcsso.extraVolumeMounts
xcsso.extraVolumes
It is important that these values be configured as strings; otherwise, the installation will fail. See the example for the Google Cloud SQL Proxy or the default affinity configuration in values.yaml.
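The difference is whether the value parses as a YAML mapping or as one literal string. A minimal sketch (the label is illustrative):

```yaml
# Wrong: a YAML mapping -- the chart cannot pass this through tpl
# xcsso:
#   affinity:
#     podAntiAffinity: ...

# Right: one string, using a literal block scalar (note the `|`)
xcsso:
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: xtremecloud-sso   # illustrative label
          topologyKey: kubernetes.io/hostname
```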
Database Setup
The supported versions of XtremeCloud SSO connect to databases that are external to the Kubernetes Cluster. Those databases are either virtual machine (VM)-based or installed on raw iron.
Using an External Database
XtremeCloud SSO uses versions of PostgreSQL, MySQL, MariaDB, and Oracle RAC. The password for the database user is read from a Kubernetes secret. It is possible to specify an existing secret that is not managed by this chart; the key in the secret from which the password is read may be specified as well (defaults to `password`).
```yaml
xcsso:
  ## Persistence configuration
  persistence:
    # The database vendor. Can be one of "oracle", "postgres", "mysql", or "mariadb"
    dbVendor: "mysql"
    # The key in the existing secret that stores the password
    existingSecretKey: "password"
    dbHost: "db1.company.com"
    dbName: "keycloak"
    # Change dbPort to reflect the vendor choice. Should be one of the well-known ports:
    #   oracle   - 1521
    #   mysql    - 3306
    #   mariadb  - 3306
    #   postgres - 5432
    dbPort: "3306" # <=== MySQL
    dbUser: "keycloak"
    # Only used if no existing secret is specified. In this case a new secret is created
    dbPassword: ""
```
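If the database password already lives in a secret that is not managed by this chart, it can be referenced instead of setting dbPassword. A hedged sketch, assuming the chart exposes `persistence.existingSecret` alongside the `existingSecretKey` shown above (the secret name sso-db-secret is illustrative):

```yaml
xcsso:
  persistence:
    # Illustrative secret, created beforehand with, e.g.:
    #   kubectl create secret generic sso-db-secret --from-literal=password=<db-password>
    existingSecret: "sso-db-secret"
    existingSecretKey: "password"
    # Leave dbPassword empty so no chart-managed secret is created
    dbPassword: ""
```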
Configuring Additional Environment Variables
Additional environment variables are supplied through `xcsso.extraEnv` as a string:

```yaml
xcsso:
  extraEnv: |
    - name: KEYCLOAK_LOGLEVEL
      value: DEBUG
    - name: WILDFLY_LOGLEVEL
      value: DEBUG
    - name: CACHE_OWNERS
      value: "3"
    - name: DB_QUERY_TIMEOUT
      value: "60"
    - name: DB_VALIDATE_ON_MATCH
      value: "true"
    - name: DB_USE_CAST_FAIL
      value: "false"
```

Note that Kubernetes environment variable values must be strings, so booleans and numbers are quoted.
Providing a Custom Theme
We provide a branded XtremeCloud SSO image that includes a cloud-specific theme. The cloud-specific theme makes it very easy to determine, through your browser, which CSP is providing your XtremeCloud SSO services. However, you can use this same process with an init container as your own custom theme provider.
Create your own theme and package it into a Docker image:

```dockerfile
FROM busybox
COPY my_theme /my_theme
```
In combination with an `emptyDir` volume that is shared with the XtremeCloud SSO container, we configure an init container that runs and copies the theme over to the place where XtremeCloud SSO will pick it up automatically.
```yaml
xcsso:
  extraInitContainers: |
    - name: theme-provider
      image: quay.io/eupraxialabs/gcp-theme:1
      imagePullPolicy: IfNotPresent
      command:
        - sh
      args:
        - -c
        - |
          echo "Copying theme..."
          cp -R /xcsso-gcp/* /theme
      volumeMounts:
        - name: theme
          mountPath: /theme

  extraVolumeMounts: |
    - name: theme
      mountPath: /opt/jboss/keycloak/themes/xcsso-gcp

  extraVolumes: |
    - name: theme
      emptyDir: {}
```
Setting a Custom Realm
A realm can be added by creating a Secret or ConfigMap for the realm JSON file and then supplying it to the chart. The file can be mounted using `extraVolumeMounts` and then specified in `extraArgs` using `-Dkeycloak.import`.
First, we create a Secret from a JSON file using `kubectl create secret generic realm-secret --from-file=realm.json`, which we then reference in values.yaml:
```yaml
xcsso:
  extraVolumes: |
    - name: realm-secret
      secret:
        secretName: realm-secret

  extraVolumeMounts: |
    - name: realm-secret
      mountPath: "/realm/"
      readOnly: true

  extraArgs: -Dkeycloak.import=/realm/realm.json
```
Alternatively, the file could be added to a custom image (set in `xcsso.image`) and then referenced by `-Dkeycloak.import`.
After startup, the web administration console for the realm will be available at the path /auth/admin/<realm name>/console/.
Using Google Cloud SQL Proxy
Depending on your environment you may need a local proxy to connect to the database. This is, for example, the case for Google Kubernetes Engine (GKE) when using Google Cloud SQL. Create the secret for the credentials as documented here and configure the proxy as a sidecar.
Note: This is only an example. Google Cloud SQL is not a supported database of XtremeCloud SSO.
Because `xcsso.extraContainers` is a string that is passed through the `tpl` function, it is possible to create custom values and use them in the string.
```yaml
# Custom values for Google Cloud SQL
cloudsql:
  project: my-project
  region: europe-west1
  instance: my-instance

xcsso:
  extraContainers: |
    - name: cloudsql-proxy
      image: gcr.io/cloudsql-docker/gce-proxy:1.11
      command:
        - /cloud_sql_proxy
      args:
        - -instances={{ .Values.cloudsql.project }}:{{ .Values.cloudsql.region }}:{{ .Values.cloudsql.instance }}=tcp:5432
        - -credential_file=/secrets/cloudsql/credentials.json
      volumeMounts:
        - name: cloudsql-creds
          mountPath: /secrets/cloudsql
          readOnly: true

  extraVolumes: |
    - name: cloudsql-creds
      secret:
        secretName: cloudsql-instance-credentials
```
High Availability and Clustering
For high availability, XtremeCloud SSO is run with multiple replicas (`xcsso.replicas > 1`).
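For example, a three-node XtremeCloud SSO cluster per CSP might be requested with (the replica count is illustrative):

```yaml
xcsso:
  replicas: 3   # three clustered pods; KUBE_PING handles peer discovery
```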
XtremeCloud SSO uses XtremeCloud Data Grid-web for caching. The XtremeCloud Data Grid-web caches on the XtremeCloud SSO side must be configured with the remoteStore attribute to ensure that data is saved to the remote cache. These caches are replicated across all instances forming a cluster.
If `xcsso.replicas > 1`, the KUBE_PING protocol configured for JGroups takes care of cluster discovery, and the starting XtremeCloud SSO pods discover each other. We use a Kubernetes ConfigMap to inject a standalone-ha.xml file into each XtremeCloud SSO pod as it starts up; this configures the WildFly application server for KUBE_PING, the discovery protocol for JGroups cluster nodes managed by Kubernetes, and much more.
Info: Kubernetes discovery protocol for JGroups: In regular host-based deployments, XtremeCloud Data Grid-web would use JGroups to find each node. Since we only support deployment to Kubernetes, the KUBE_PING discovery protocol for JGroups clusters is used instead. When a discovery is initiated, KUBE_PING asks Kubernetes for a list of the IP addresses of all the pods it launched, matching the given namespace and labels, combined with the bind_port/port_range. The protocol then sends a discovery request to all instances and waits for their responses.
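Because KUBE_PING queries the Kubernetes API for pod IPs, the pod's service account needs permission to read pods in its namespace. A minimal RBAC sketch, with illustrative names (the chart may already provision equivalent objects):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jgroups-kube-ping      # illustrative name
  namespace: dev               # the deployment namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jgroups-kube-ping
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jgroups-kube-ping
subjects:
  - kind: ServiceAccount
    name: default              # or the chart's service account, if created
    namespace: dev
```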
If XtremeCloud SSO and XtremeCloud Data Grid-web are in the same namespace, you will see messages like this from the XtremeCloud SSO pods:
Oct 14 07:53:35 sso-dev-xtremecloud-sso-gcp-0 xtremecloud-sso-gcp 12:53:35,521 WARN [org.jgroups.protocols.TCP] (thread-2407,kubernetes,10.52.0.25(site-id=site1, rack-id=null, machine-id=null)) JGRP000012: discarded message from different cluster cluster (our cluster is kubernetes). Sender was 5a06d096-8fdb-2a09-d58f-fc01679b94d9 (received 3 identical messages from 5a06d096-8fdb-2a09-d58f-fc01679b94d9 in the last 82194 ms)
Further, you will see messages like this from the XtremeCloud Data Grid-web pods:
Oct 14 07:53:41 datagrid-dev-xtremecloud-datagrid-gcp-0 xtremecloud-datagrid-gcp 12:53:41,483 WARN [org.jgroups.protocols.TCP] (jgroups-6413,datagrid-dev-xtremecloud-datagrid-gcp-0) JGRP000012: discarded message from different cluster kubernetes (our cluster is cluster). Sender was 84b97c2c-4712-8824-27ca-ad6e2e7f7244 (flags=0), site-id=site1, rack-id=null, machine-id=null) (received 3 identical messages from 84b97c2c-4712-8824-27ca-ad6e2e7f7244 (flags=0), site-id=site1, rack-id=null, machine-id=null) in the last 61759 ms)
Since the prefixes sso-dev and datagrid-dev encode the <product-namespace>, it is clear why each cluster is discarding discovery messages from the cluster it is not managing.
Here’s an excerpt of the JGroups configuration in our XtremeCloud SSO standalone-ha.xml file:

```xml
<subsystem xmlns="urn:jboss:domain:jgroups:6.0">
  <channels default="kubernetes">
    <channel name="kubernetes" stack="tcp"/>
  </channels>
  <stacks default="tcp">
    <stack name="tcp">
      <transport type="TCP" socket-binding="jgroups-tcp" site="site2">
        <property name="external_addr">${env.INTERNAL_POD_IP}</property>
      </transport>
      <protocol type="kubernetes.KUBE_PING">
        <property name="namespace">${env.POD_NAMESPACE}</property>
      </protocol>
```
Recovering from a Split-Brain in the XtremeCloud SSO Cluster
Let’s take a look at a network partition issue, in this case within the virtual subnet that the XtremeCloud SSO pod is running in. Note the ISPN000136 error in the stack trace captured in LogDNA, and that the error originates in our source code at PartitionHandlingInterceptor.java, line 154.
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp 19:29:35,070 ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (default task-17) ISPN000136: Error executing command GetKeyValueCommand, writing keys []: org.infinispan.partitionhandling.AvailabilityException: ISPN000306: Key '26d7999b-a007-49f5-b3f5-423742661b46' is not available. Not all owners are in this partition 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.partitionhandling.impl.PartitionHandlingInterceptor.handleDataReadReturn(PartitionHandlingInterceptor.java:154) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.InvocationFinallyAction.apply(InvocationFinallyAction.java:21) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.impl.SimpleAsyncInvocationStage.addCallback(SimpleAsyncInvocationStage.java:70) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.InvocationStage.andFinally(InvocationStage.java:60) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndFinally(BaseAsyncInterceptor.java:157) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.partitionhandling.impl.PartitionHandlingInterceptor.handleDataReadCommand(PartitionHandlingInterceptor.java:140) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.partitionhandling.impl.PartitionHandlingInterceptor.visitGetKeyValueCommand(PartitionHandlingInterceptor.java:130) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:39) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndHandle(BaseAsyncInterceptor.java:183) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.impl.BaseStateTransferInterceptor.handleReadCommand(BaseStateTransferInterceptor.java:185) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.impl.BaseStateTransferInterceptor.visitGetKeyValueCommand(BaseStateTransferInterceptor.java:168) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:39) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:54) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.DDAsyncInterceptor.handleDefault(DDAsyncInterceptor.java:54) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.DDAsyncInterceptor.visitGetKeyValueCommand(DDAsyncInterceptor.java:106) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:39) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndExceptionally(BaseAsyncInterceptor.java:123) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.impl.InvocationContextInterceptor.visitCommand(InvocationContextInterceptor.java:90) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:56) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.DDAsyncInterceptor.handleDefault(DDAsyncInterceptor.java:54) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.DDAsyncInterceptor.visitGetKeyValueCommand(DDAsyncInterceptor.java:106) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:39) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.DDAsyncInterceptor.visitCommand(DDAsyncInterceptor.java:50) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invoke(AsyncInterceptorChainImpl.java:248) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.cache.impl.CacheImpl.get(CacheImpl.java:479) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.cache.impl.CacheImpl.get(CacheImpl.java:472) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.cache.impl.AbstractDelegatingCache.get(AbstractDelegatingCache.java:348) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.cache.impl.EncoderCache.get(EncoderCache.java:659) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.infinispan.cache.impl.AbstractDelegatingCache.get(AbstractDelegatingCache.java:348) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.keycloak.models.sessions.infinispan.changes.InfinispanChangelogBasedTransaction.get(InfinispanChangelogBasedTransaction.java:120) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.keycloak.models.sessions.infinispan.InfinispanUserSessionProvider.getClientSessionEntity(InfinispanUserSessionProvider.java:289) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.keycloak.models.sessions.infinispan.InfinispanUserSessionProvider.getClientSession(InfinispanUserSessionProvider.java:283) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.keycloak.models.sessions.infinispan.UserSessionAdapter.getAuthenticatedClientSessionByClient(UserSessionAdapter.java:119) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.keycloak.protocol.oidc.TokenManager.attachAuthenticationSession(TokenManager.java:410) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.keycloak.authentication.AuthenticationProcessor.attachSession(AuthenticationProcessor.java:965) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.keycloak.services.managers.AuthenticationManager.finishedRequiredActions(AuthenticationManager.java:870) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.keycloak.authentication.AuthenticationProcessor.authenticationComplete(AuthenticationProcessor.java:1008) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.keycloak.authentication.AuthenticationProcessor.authenticate(AuthenticationProcessor.java:781) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.keycloak.protocol.AuthorizationEndpointBase.handleBrowserAuthenticationRequest(AuthorizationEndpointBase.java:139) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.keycloak.protocol.oidc.endpoints.AuthorizationEndpoint.buildAuthorizationCodeAuthorizationResponse(AuthorizationEndpoint.java:419) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.keycloak.protocol.oidc.endpoints.AuthorizationEndpoint.process(AuthorizationEndpoint.java:152) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.keycloak.protocol.oidc.endpoints.AuthorizationEndpoint.buildGet(AuthorizationEndpoint.java:108) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at java.lang.reflect.Method.invoke(Method.java:498) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:140) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.ResourceMethodInvoker.internalInvokeOnTarget(ResourceMethodInvoker.java:509) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTargetAfterFilter(ResourceMethodInvoker.java:399) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.ResourceMethodInvoker.lambda$invokeOnTarget$0(ResourceMethodInvoker.java:363) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.interception.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:358) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:365) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:337) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:137) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:106) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:132) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:100) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:443) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.SynchronousDispatcher.lambda$invoke$4(SynchronousDispatcher.java:233) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.SynchronousDispatcher.lambda$preprocess$0(SynchronousDispatcher.java:139) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.interception.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:358) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.SynchronousDispatcher.preprocess(SynchronousDispatcher.java:142) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:219) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:227) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at javax.servlet.http.HttpServlet.service(HttpServlet.java:791) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:90) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:132) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.wildfly.extension.undertow.deployment.GlobalRequestControllerHandler.handleRequest(GlobalRequestControllerHandler.java:68) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:292) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:81) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:138) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:135) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1502) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1502) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1502) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1502) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:272) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:104) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.server.Connectors.executeRootHandler(Connectors.java:360) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:830) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1985) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1487) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1378) 10.52.3.3
Oct 9 14:29:35 sso-dev-xtremecloud-sso-gcp-1 xtremecloud-sso-gcp at java.lang.Thread.run(Thread.java:748) 10.52.3.3
Let’s check and see if we are in degraded mode:
[centos@vm-controller bin]$ k logs -f sso-dev-xtremecloud-sso-gcp-0 |grep -i degraded
21:03:56,159 WARN [org.infinispan.CLUSTER] (remote-thread--p12-t5) [Context=loginFailures] ISPN000316: Lost data because of graceful leaver 10.52.3.21(site-id=site1, rack-id=null, machine-id=null), entering degraded mode
21:03:56,160 INFO [org.infinispan.CLUSTER] (remote-thread--p12-t5) [Context=loginFailures] ISPN100011: Entering availability mode DEGRADED_MODE, topology id 6
21:03:56,169 WARN [org.infinispan.CLUSTER] (remote-thread--p12-t7) [Context=offlineClientSessions] ISPN000316: Lost data because of graceful leaver 10.52.3.21(site-id=site1, rack-id=null, machine-id=null), entering degraded mode
21:03:56,169 INFO [org.infinispan.CLUSTER] (remote-thread--p12-t7) [Context=offlineClientSessions] ISPN100011: Entering availability mode DEGRADED_MODE, topology id 6
21:03:56,258 WARN [org.infinispan.CLUSTER] (remote-thread--p12-t8) [Context=clientSessions] ISPN000316: Lost data because of graceful leaver 10.52.3.21(site-id=site1, rack-id=null, machine-id=null), entering degraded mode
21:03:56,258 INFO [org.infinispan.CLUSTER] (remote-thread--p12-t8) [Context=clientSessions] ISPN100011: Entering availability mode DEGRADED_MODE, topology id 6
21:03:56,274 WARN [org.infinispan.CLUSTER] (remote-thread--p12-t9) [Context=offlineSessions] ISPN000316: Lost data because of graceful leaver 10.52.3.21(site-id=site1, rack-id=null, machine-id=null), entering degraded mode
21:03:56,275 INFO [org.infinispan.CLUSTER] (remote-thread--p12-t9) [Context=offlineSessions] ISPN100011: Entering availability mode DEGRADED_MODE, topology id 6
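This check can be scripted to list exactly which caches have entered degraded mode. A minimal sketch, run here against sample lines taken from the output above; against a live pod you would pipe kubectl logs through the same sed filter:

```shell
# List the caches that have entered DEGRADED_MODE, one cache name per line.
# Live pipeline (same filter):
#   kubectl logs sso-dev-xtremecloud-sso-gcp-0 | sed -n 's/.*\[Context=\([^]]*\)\] ISPN100011.*DEGRADED_MODE.*/\1/p' | sort -u
sed -n 's/.*\[Context=\([^]]*\)\] ISPN100011.*DEGRADED_MODE.*/\1/p' <<'EOF' | sort -u
21:03:56,160 INFO  [org.infinispan.CLUSTER] (remote-thread--p12-t5) [Context=loginFailures] ISPN100011: Entering availability mode DEGRADED_MODE, topology id 6
21:03:56,169 INFO  [org.infinispan.CLUSTER] (remote-thread--p12-t7) [Context=offlineClientSessions] ISPN100011: Entering availability mode DEGRADED_MODE, topology id 6
EOF
```

An empty result means no cache has reported ISPN100011 with DEGRADED_MODE, i.e. the grid is fully available.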
Degraded mode has some important implications: while a cache is in DEGRADED_MODE, Infinispan rejects operations on keys whose owners are no longer in the cluster in order to preserve consistency, so entries in the affected caches (sessions, login-failure counts, and so on) may be temporarily unavailable.
While monitoring the logs for the pods of XtremeCloud SSO and XtremeCloud Data Grid-web, respectively, you may see discovery messages being discarded like this:
13:47:18,066 WARN [org.jgroups.protocols.TCP] (thread-881,kubernetes,10.52.0.27(site-id=site1, rack-id=null, machine-id=null)) JGRP000012: discarded message from different cluster cluster (our cluster is kubernetes). Sender was 5a06d096-8fdb-2a09-d58f-fc01679b94d9 (received 3 identical messages from 5a06d096-8fdb-2a09-d58f-fc01679b94d9 in the last 73285 ms)
Note: This is in Site1 (GCP) (site-id=site1)
This message is as expected, since the XtremeCloud SSO cluster is named kubernetes.
13:46:28,906 WARN [org.jgroups.protocols.TCP] (jgroups-7176,datagrid-dev-xtremecloud-datagrid-gcp-0) JGRP000012: discarded message from different cluster kubernetes (our cluster is cluster). Sender was ec59a048-450c-9bea-e28d-02e2a6b15617 (flags=0), site-id=site1, rack-id=null, machine-id=null) (received 3 identical messages from ec59a048-450c-9bea-e28d-02e2a6b15617 (flags=0), site-id=site1, rack-id=null, machine-id=null) in the last 64583 ms)
Note: This is in Site1 (GCP) (site-id=site1)
This message is as expected, since the XtremeCloud Data Grid-web cluster is named cluster.
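You can confirm that the discards are only this expected crosstalk by extracting the cluster name each side reports in the JGRP000012 warnings. A minimal sketch over abridged copies of the two sample warnings above; against live pods, pipe kubectl logs through the same sed filter:

```shell
# Print the "our cluster is X" name from JGroups discard warnings.
# An XtremeCloud SSO pod should report "kubernetes"; an XtremeCloud
# Data Grid-web pod should report "cluster". Any other name would
# indicate unexpected traffic.
sed -n 's/.*JGRP000012.*(our cluster is \([^)]*\)).*/\1/p' <<'EOF' | sort -u
13:47:18,066 WARN [org.jgroups.protocols.TCP] (thread-881,kubernetes,10.52.0.27) JGRP000012: discarded message from different cluster cluster (our cluster is kubernetes). Sender was 5a06d096-8fdb-2a09-d58f-fc01679b94d9
13:46:28,906 WARN [org.jgroups.protocols.TCP] (jgroups-7176,datagrid-dev-xtremecloud-datagrid-gcp-0) JGRP000012: discarded message from different cluster kubernetes (our cluster is cluster). Sender was ec59a048-450c-9bea-e28d-02e2a6b15617
EOF
```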
Exec into an XtremeCloud SSO pod and review some information about the product and its environment:
[centos@vm-controller ~]$ kubectl exec -it sso-dev-xtremecloud-sso-gcp-0 bash
[jboss@sso-dev-xtremecloud-sso-gcp-0 bin]$ ./jboss-cli.sh
You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands.
[disconnected /] connect 10.52.0.27
[standalone@10.52.0.27:9990 /] :product-info()
{
"outcome" => "success",
"result" => [{"summary" => {
"host-name" => "sso-dev-xtremecloud-sso-gcp-0",
"instance-identifier" => "f4f95003-9a61-496e-b894-ef4522f21387",
"product-name" => "Keycloak",
"product-version" => "4.8.3.Final",
"product-community-identifier" => "Product",
"product-home" => "/opt/jboss/keycloak",
"standalone-or-domain-identifier" => "STANDALONE_SERVER",
"host-operating-system" => "CentOS Linux 7 (Core)",
"host-cpu" => {
"host-cpu-arch" => "amd64",
"host-core-count" => 1
},
"jvm" => {
"name" => "OpenJDK 64-Bit Server VM",
"java-version" => "1.8",
"jvm-version" => "1.8.0_201",
"jvm-vendor" => "Oracle Corporation",
"java-home" => "/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.201.b09-2.el7_6.x86_64/jre"
}
}}]
}
Let’s describe one of the XtremeCloud SSO pods using kubectl:
[centos@vm-controller bin]$ kubectl describe pod sso-dev-xtremecloud-sso-gcp-0
Name: sso-dev-xtremecloud-sso-gcp-0
Namespace: dev
Priority: 0
PriorityClassName: <none>
Node: gke-xtremecloud-cluster--default-pool-d03f8f59-pzkc/10.128.0.26
Start Time: Thu, 17 Oct 2019 08:42:57 -0500
Labels: app=xtremecloud-sso-gcp
controller-revision-hash=sso-dev-xtremecloud-sso-gcp-7c4fbd485d
release=sso-dev
statefulset.kubernetes.io/pod-name=sso-dev-xtremecloud-sso-gcp-0
Annotations: <none>
Status: Running
IP: 10.52.0.28
Controlled By: StatefulSet/sso-dev-xtremecloud-sso-gcp
Containers:
xtremecloud-sso-gcp:
Container ID: docker://36a395491b98d04e5b8d85d634b2bd06d81bb7c6995d717d76d721068cd04824
Image: quay.io/eupraxialabs/xtremecloud-sso:3.0.2-gcp
Image ID: docker-pullable://quay.io/eupraxialabs/xtremecloud-sso@sha256:7c7761baedee9721e0f1039bbdb66f7b6edc86b531f8e529dda5f2456f601ee5
Ports: 8080/TCP, 7600/TCP, 9990/TCP, 47600/TCP, 57600/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
Args:
-b $(INTERNAL_POD_IP)
-Djgroups.bind_addr=global
-Djboss.node.name=$(INTERNAL_POD_IP)
-Dinfinispan.stagger.delay=0
State: Running
Started: Thu, 17 Oct 2019 08:42:58 -0500
Ready: True
Restart Count: 0
Limits:
cpu: 300m
memory: 1Gi
Requests:
cpu: 200m
memory: 1Gi
Liveness: http-get http://:8080/auth/ delay=600s timeout=15s period=10s #success=1 #failure=3
Readiness: http-get http://:8080/auth/realms/master delay=60s timeout=5s period=10s #success=1 #failure=3
Environment:
XTREMECLOUD_DG_NAMESPACE: dev
POD_NAMESPACE: dev (v1:metadata.namespace)
INTERNAL_POD_IP: (v1:status.podIP)
KUBERNETES_NAMESPACE: dev (v1:metadata.namespace)
KUBERNETES_PING_LABELS: <set to the key 'kube.ping.labels' of config map 'xcsso-keycloak-config'> Optional: false
KEYCLOAK_LOGLEVEL: INFO
PROXY_ADDRESS_FORWARDING: true
KEYSTORE_USER: admin
KEYSTORE_PASSWORD: <redacted>
OPERATING_MODE: clustered
KEYCLOAK_USER: <set to the key 'user' in secret 'xcsso-keycloak-config'> Optional: false
KEYCLOAK_PASSWORD: <set to the key 'password' in secret 'xcsso-keycloak-config'> Optional: false
POSTGRESQL_DATABASE: <set to the key 'database' in secret 'keycloak-db'> Optional: false
POSTGRESQL_USER: <set to the key 'user' in secret 'keycloak-db'> Optional: false
POSTGRESQL_PASSWORD: <set to the key 'password' in secret 'keycloak-db'> Optional: false
POSTGRESQL_ADMIN_PASSWORD: <set to the key 'password' in secret 'xcsso-keycloak-config'> Optional: false
POSTGRESQL_HOST: <set to the key 'db.host' in secret 'xcsso-keycloak-config'> Optional: false
POSTGRESQL_PORT: <set to the key 'db.port' in secret 'xcsso-keycloak-config'> Optional: false
Mounts:
/opt/jboss/keycloak/standalone/configuration/keycloak.jks from config-volume-keystore (rw)
/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hltdk (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: standalone-ha-xml
Optional: false
config-volume-keystore:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: keycloak-jks
Optional: false
default-token-hltdk:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hltdk
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Some key takeaways are:
- The Mount for the Java Keystore (jks). Note: the Java Keystore is only needed if SSL is configured for the Wildfly application server that XtremeCloud SSO is deployed on, because traffic encryption all the way to the container is required. With Aspen Mesh (Istio) deployed, this is not necessary.
- The Mount for the file standalone-ha.xml. Note: this supports the high-availability mode of the XtremeCloud SSO pods. The mount injects the ConfigMap into a pod as it starts up.
- The operating mode is clustered for HA mode. Note: OPERATING_MODE: clustered is configured in values.yaml in the Helm Chart (operatingMode: "clustered").
- The XtremeCloud SSO pods are behind a reverse proxy, which can be configured to perform SSL passthrough if Aspen Mesh (Istio) mTLS sidecars are not used to encrypt traffic all the way to the pod. Note: see the environment variable PROXY_ADDRESS_FORWARDING: true, which is configured in values.yaml in the Helm Chart (proxyAddressForwarding: "true").
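The two Helm values called out above can be sketched as a values.yaml fragment. The key names operatingMode and proxyAddressForwarding come from the notes above; the comments and surrounding structure are illustrative and may differ in the actual chart:

```yaml
# Hypothetical excerpt of the XtremeCloud SSO Helm Chart values.yaml.
# Only the two keys referenced in the takeaways are taken from the text;
# everything else here is illustrative.
operatingMode: "clustered"        # rendered as OPERATING_MODE=clustered in the pod
proxyAddressForwarding: "true"    # rendered as PROXY_ADDRESS_FORWARDING=true
```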
Let’s take a look at the cause of this error:
Oct 20 14:30:03 sso-dev-xtremecloud-sso-azure-0 xtremecloud-sso-azure 19:30:03,798 ERROR [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0362: Capabilities required by resource '/subsystem=undertow' are not available: