Checking the status of the platform

Once all previous steps have been completed and all components have been installed, verify that every process taking part in the deployment is running. You can list the namespaces that were created by issuing the following command. Please note that the actual output on your system might differ depending on the Quobis wac version that you are deploying:

$ kubectl get namespaces

NAME              STATUS   AGE
cert-manager      Active   90d
default           Active   91d
ingress-nginx     Active   91d
kube-node-lease   Active   91d
kube-public       Active   91d
kube-system       Active   91d
monitoring        Active   91d
velero            Active   34d
wac               Active   89d

The “default” namespace will be named according to the value of the “namespace” variable that you provided in the Ansible main.yml configuration step.

On a Kubernetes-based deployment, these processes correspond to pods running inside the Kubernetes cluster. Pods are the smallest, most basic deployable objects in Kubernetes. A pod represents a single instance of a running process in your cluster. Pods contain one or more containers, such as Docker containers. When a pod runs multiple containers, the containers are managed as a single entity and share the pod’s resources.

To get the list of deployed pods, kubectl must be used. The kubectl command line tool lets you control a Kubernetes cluster. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. From a technical point of view, kubectl is a client for the Kubernetes REST API. In our case, just run the following command in a terminal; it will produce output similar to the one below. Please note that the actual output on your system might be slightly different depending on the Quobis wac version that you are deploying; this specific one corresponds to v5.9.0:

$ kubectl get pods -A

NAMESPACE       NAME                                           READY   STATUS    RESTARTS        AGE
cert-manager    cert-manager-99bb69456-kmdkc                   1/1     Running   0               31d
cert-manager    cert-manager-cainjector-ffb4747bb-rvmpp        1/1     Running   0               31d
cert-manager    cert-manager-webhook-545bd5d7d8-w4qc2          1/1     Running   0               31d
default         dbgate-574bc5d58c-jb8h8                        1/1     Running   0               31d
default         efs-provisioner-5b4fd7786c-d6nt6               1/1     Running   0               33d
default         mailhog-8b5bfd58b-f7fsm                        1/1     Running   0               31d
ingress-nginx   ingress-nginx-controller-6d846bd47b-zft79      1/1     Running   0               36d
kube-system     aws-node-j6w57                                 1/1     Running   0               68d
kube-system     aws-node-qdh59                                 1/1     Running   0               30d
kube-system     coredns-6866f5c8b4-7r4dt                       1/1     Running   0               31d
kube-system     coredns-6866f5c8b4-vgtv6                       1/1     Running   0               31d
kube-system     ebs-csi-controller-97c95489b-8ms2k             6/6     Running   0               31d
kube-system     ebs-csi-controller-97c95489b-k7bl8             6/6     Running   0               31d
kube-system     ebs-csi-node-5b2zf                             3/3     Running   0               68d
kube-system     ebs-csi-node-zsb9p                             3/3     Running   0               30d
kube-system     efs-csi-controller-5b44d5b6b-bthbw             3/3     Running   0               68d
kube-system     efs-csi-controller-5b44d5b6b-kc6r9             3/3     Running   0               31d
kube-system     efs-csi-node-8wsdf                             3/3     Running   0               30d
kube-system     efs-csi-node-zpz4n                             3/3     Running   0               68d
kube-system     kube-proxy-7q4kp                               1/1     Running   0               68d
kube-system     kube-proxy-tnzhn                               1/1     Running   0               30d
monitoring      grafana-54456fb674-tfsmq                       1/1     Running   0               27d
monitoring      kube-state-metrics-5fb588998d-tdmzf            1/1     Running   0               33d
monitoring      loki-5fb6b44b79-wm49v                          1/1     Running   0               32d
monitoring      metrics-server-6cb96b5c97-6t5h8                1/1     Running   0               33d
monitoring      metrics-server-6cb96b5c97-gjkqz                1/1     Running   0               33d
monitoring      node-exporter-px5v9                            1/1     Running   0               30d
monitoring      node-exporter-w55d7                            1/1     Running   0               33d
monitoring      prometheus-765c56fb54-pqtc5                    1/1     Running   1 (21d ago)     21d
monitoring      promtail-ckv2g                                 1/1     Running   0               33d
monitoring      promtail-q9h5q                                 1/1     Running   0               30d
velero          velero-56d599574f-sw6mf                        1/1     Running   0               34d
wac             audiomixer-sfu-1-0                             1/1     Running   1 (21h ago)     32d
wac             database-mongo-0                               2/2     Running   0               28d
wac             database-mongo-1                               2/2     Running   0               28d
wac             database-mongo-2                               2/2     Running   0               28d
wac             erebus-74bbf998c8-cq4vn                        1/1     Running   0               23d
wac             keycloak-64b8b6b5d8-btvmj                      1/1     Running   0               31d
wac             kitter-67d86469f8-djlxh                        1/1     Running   0               23d
wac             manager-6db5cc4c4-p2qvf                        1/1     Running   0               42d
wac             message-broker-7c958b46b4-dwp6v                1/1     Running   1 (5d21h ago)   28d
wac             mongodb-kubernetes-operator-86cf76bb88-2frz6   1/1     Running   0               31d
wac             postgresql-exporter-5f446765f8-ddt66           1/1     Running   0               31d
wac             postgresql-fcdf6cf9-xhdvw                      1/1     Running   0               31d
wac             qss-audiomixerio-sfu-1-5f9b4cc47d-5t5j7        1/1     Running   0               23d
wac             qss-audiostatus-6b847b74dd-9bq29               1/1     Running   0               23d
wac             qss-auth-http-74f455d976-ff54s                 1/1     Running   0               23d
wac             qss-calls-588494879c-cf2gx                     1/1     Running   0               23d
wac             qss-calltransfer-basic-847bf96965-xxz9x        1/1     Running   0               23d
wac             qss-conference-state-f88f7f795-5srdq           1/1     Running   0               23d
wac             qss-invites-rooms-7f5f48865b-fflzr             1/1     Running   0               23d
wac             qss-io-websockets-6cf9c7fc94-w8wr9             1/1     Running   0               23d
wac             qss-log-conference-65964b85b6-bnv4f            1/1     Running   0               23d
wac             qss-meeting-basic-55b88b4469-mqn8z             1/1     Running   0               23d
wac             qss-peer-jt-855bbbbcdd-ms8mm                   1/1     Running   0               23d
wac             qss-quick-conference-6d565f958-zm22k           1/1     Running   0               23d
wac             qss-registry-authenticated-8648f484db-48spr    1/1     Running   1 (22d ago)     23d
wac             qss-resolver-wac-5cc787f85f-lcktg              1/1     Running   0               23d
wac             qss-rooms-basic-685f87f669-94tgd               1/1     Running   0               23d
wac             qss-trunk-58777cf66b-mbpzs                     1/1     Running   0               23d
wac             qss-watchdog-invites-7b96799fb5-b2phq          1/1     Running   0               23d
wac             qss-watchdog-registry-79d8587fd4-tzphv         1/1     Running   0               23d
wac             sfu-1-0                                        1/1     Running   5 (21h ago)     32d
wac             sfu-dispatcher-9fc849b6c-5gfhr                 1/1     Running   0               23d
wac             sfu-wrapper-sfu-1-78689c7bf-k2256              1/1     Running   2 (22h ago)     23d
wac             sippo-storage-54c9fd46d-fln24                  1/1     Running   0               32d
wac             sippo-storage-54c9fd46d-sbsv2                  1/1     Running   0               31d
wac             wac-core-786d7bff99-8ll9j                      1/1     Running   0               12d
wac             wac-core-786d7bff99-g4wkf                      1/1     Running   0               12d
wac             webphone-angular-9f5c6cc97-vt2fk               1/1     Running   0               22h
wac             xmpp-server-cb78989d4-j52vd                    2/2     Running   27 (13d ago)    28d

In this list you should see at least one pod per role configured in the Ansible script, and every pod should show Running in the STATUS column. If some element is missing, you should consider restarting the installation process, making sure that all steps were followed correctly.
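When something looks wrong, a quick first step is to narrow the listing above down to the pods that are not Running. The helper below is an illustrative sketch (the function name and the sample lines piped into it are not part of the product); on a live cluster you would pipe the real `kubectl get pods -A` output into it instead of the heredoc:

```shell
# Print only the pods whose STATUS column (field 4 of `kubectl get pods -A`)
# is not "Running". NR > 1 skips the header line.
filter_not_running() {
  awk 'NR > 1 && $4 != "Running"'
}

# Illustrative sample input; on a live cluster run:
#   kubectl get pods -A | filter_not_running
filter_not_running <<'EOF'
NAMESPACE   NAME                        READY   STATUS             RESTARTS   AGE
wac         wac-core-786d7bff99-8ll9j   1/1     Running            0          12d
wac         erebus-74bbf998c8-cq4vn     0/1     CrashLoopBackOff   7          23d
EOF
```

kubectl also has a native filter, `kubectl get pods -A --field-selector=status.phase!=Running`, but note that it filters on the pod *phase*, which differs from the printed STATUS column (a crash-looping pod typically still has phase Running), so filtering the printed column as above is often more useful here.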

If one or more pods have a status different from Running, it means that those services have encountered an error. To show the details of a specific pod and its related resources, run the following command in a terminal:

$ kubectl describe pod <name-of-the-pod>

The following is an example of the output when describing a wac-core pod:

Name:           wac-core-6b7c6f7684-lsxp5
Namespace:      poc
Priority:       0
Node:           ip-172-32-33-233.eu-west-1.compute.internal/172.32.33.233
Start Time:     Thu, 26 Dec 2019 11:13:21 +0100
Labels:         app=wac-core
                pod-template-hash=2637293240
Annotations:    <none>
Status:         Running
IP:             100.96.4.103
IPs:            <none>
Controlled By:  ReplicaSet/wac-core-6b7c6f7684
Containers:
  wac-core:
    Container ID:   docker://dddc63acfc156b252af7958601f6245bfd27f43f744ab003d792d5794b2ef015
    Image:          registry.quobis.com/quobis/wac-core:19.2.0
    Image ID:       docker-pullable://registry.quobis.com/quobis/wac-core@sha256:d22b6..e664c372f4bd
    Ports:          8000/TCP, 5678/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Thu, 26 Dec 2019 11:14:34 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /home/nfs/filesharing from kube-nfs-pvc-filesharing (rw)
    [...]

Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  wac-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      wac-config
    Optional:  false
[...]

QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
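The Events section at the bottom of the describe output is usually the quickest pointer to why a pod is Pending or crash-looping (in the healthy example above it is simply `<none>`). As a minimal sketch, sed can print just that section; the helper name and the sample snippet below are illustrative, and on a live cluster you would pipe the output of `kubectl describe pod <name-of-the-pod>` in instead:

```shell
# Print everything from the "Events:" line to the end of the input,
# i.e. just the Events section of a `kubectl describe pod` dump.
events_only() {
  sed -n '/^Events:/,$p'
}

# Illustrative sample input; on a live cluster run:
#   kubectl describe pod <name-of-the-pod> | events_only
events_only <<'EOF'
Status:          Running
QoS Class:       BestEffort
Events:
  Type     Reason   Age   From     Message
  Warning  BackOff  2m    kubelet  Back-off restarting failed container
EOF
```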

You can also run the following command to show the logs of the selected pod:

$ kubectl logs <name-of-the-pod> --tail 50 -f

An example of the output for this command is as follows (truncated to fit the page width):

Thu Jan 02 2020 /wac/lib/core/io/wapi/Wapi.js] debug: onWAPIMessage 877551, /sessions/
Thu Jan 02 2020 /wac/lib/core/io/wapi/Wapi.js] debug: onWAPIMessageResponse 877551, /sessions/5e0e625ecc8296cb78101f79, PUT
Thu Jan 02 2020 /wac/lib/core/io/wapi/Wapi.js] debug: client disconnection with session id 5e0e625ecc8296cb78101f79
Fri Jan 03 2020 /wac/lib/core/Sessions.js] debug: Remove session due expiration, [ '5e0e625ecc8296cb78101f79' ]
Fri Jan 03 2020 /wac/lib/services/UserSettings.js] debug: user4@acme.com disconnected
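Log output from a busy pod can be noisy, and a plain grep is often enough to follow a single session. The sketch below is illustrative, using the session id and sample lines from the output above; on a live system you would pipe `kubectl logs <name-of-the-pod> --tail 50` into the grep instead of the heredoc. Also note that after a restart, `kubectl logs --previous` shows the log of the container's previous run, and pods with more than one container (such as database-mongo-0, READY 2/2) need an explicit container via `-c <container-name>`.

```shell
# Session id taken from the sample log lines above; substitute the
# session you are investigating.
SESSION='5e0e625ecc8296cb78101f79'

# Keep only the log lines mentioning that session. On a live cluster:
#   kubectl logs <name-of-the-pod> --tail 50 | grep "$SESSION"
grep "$SESSION" <<'EOF'
Thu Jan 02 2020 /wac/lib/core/io/wapi/Wapi.js] debug: onWAPIMessage 877551, /sessions/
Thu Jan 02 2020 /wac/lib/core/io/wapi/Wapi.js] debug: onWAPIMessageResponse 877551, /sessions/5e0e625ecc8296cb78101f79, PUT
Fri Jan 03 2020 /wac/lib/services/UserSettings.js] debug: user4@acme.com disconnected
EOF
```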