Installation steps

This page explains the prerequisites and installation steps to deploy the Quobis communication platform v4.3 from scratch. The Quobis WAC is a software-only product, so there are no physical items to ship: everything can be downloaded from our servers.

The Quobis Communication Platform can be deployed on-premises or in private and public cloud environments. It is deployed as a cloud-native, containerized application, orchestrated and managed by Kubernetes, and can be delivered on bare metal or virtualized infrastructure.

Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. It groups the containers that make up an application into logical units for easy management and discovery.

The Quobis Communication Platform runs on a Kubernetes cluster with one or more master nodes and more than one worker node. All nodes are virtual machines (VMs) deployed on host hardware using a hypervisor, or hosted in a cloud platform service. The orchestrator cluster (master and worker nodes) is configured first, and then the master node is used to deploy the Quobis communication platform as a containerized application orchestrated by Kubernetes.

The actual installation and deployment is done by leveraging Ansible, an open-source IT automation engine that automates provisioning, configuration management, application deployment, orchestration, and many other IT processes. Ansible works by connecting to the destination nodes and pushing out small programs, called modules, to them. These programs are written to be resource models of the desired state of the system. Ansible then executes these modules and removes them when finished. Ansible is agentless, which means the nodes it manages do not require any software to be installed on them. Ansible uses the SSH protocol to connect to servers and run tasks. Once it has connected, Ansible transfers the modules required by your command or playbook to the remote machine(s) for execution.


The following chapters explain all the required steps to set up the platform.

Planning and pre-requisites

Choosing the deployment environment type

The Quobis communication platform supports on-premises or cloud scenarios, from local and private deployments to global and multi-location deployments, depending on your organization's needs.

  • On-premises: deployments use your own network, hardware and hypervisor. Sensitive data is stored in your private infrastructure with restricted access, providing a higher level of security. On-premises deployments need an appropriate server to host the VMs and the proper network configuration. If you already have a mature IT network with servers in the appropriate locations, this may be a suitable option. To install on-premises, you will need to install a hypervisor on your servers to manage the deployment and configure the VMs to set up the Kubernetes cluster. The Quobis communication platform currently supports the VMware hypervisor only.

  • Cloud deployments: the only supported public cloud provider as of today is Amazon Web Services (AWS). This option does not require hardware infrastructure; you just have to reach an agreement with the cloud service provider, which provides an easy and fast way to set up the environment to deploy the platform.

Kubernetes cluster sizing

A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node, which hosts the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault tolerance and high availability.

The performance of the Quobis communication platform is determined by the number of nodes and the amount of resources (CPU and memory) of each node. Depending on the number of nodes and the resources used by each of them, the number of calls managed at a time by the system may vary significantly. Additionally, Kubernetes gives the cluster the capability to support high availability, which means seamless service when a node breaks down. This scenario needs at least 3 master nodes and 2 worker nodes in order to balance the workload between the active nodes.

You must also take the deployment location into account to improve conference performance. The server should be located as close as possible to your organization's main base in order to minimize latency. For instance, choose an AWS EU region if your organization is based in Europe.

Depending on the type of deployment, the number of concurrent calls and the number of provisioned users, the number of nodes and their resources must be chosen accordingly:

  • For a high-availability deployment, 3 master nodes and at least 2 worker nodes are required

  • For a standalone deployment, 1 master node and at least 2 worker nodes are required

Hardware requirements

There are no specific hardware requirements, as the Quobis Communication Platform runs on top of virtual machines (VMs). Just make sure that the hardware sizing is appropriate for the VMs specified in the next chapter.

Virtual machine requirements

On-premises deployment

  • The selected hypervisor must be VMware

  • Supported OS: Debian 9 or higher, Ubuntu 16 or higher

  • Master nodes: at least 4 vCPU, 8 GB RAM, at least 100GB HDD

  • Worker nodes: at least 8 vCPU, 16 GB RAM, at least 100GB HDD

  • Storage: a Network File System (NFS) server is provided within the product to store all the persistent data used by the cluster.

Cloud deployment

  • The selected hypervisor must be the AWS virtualization platform, using EC2 instances

  • Supported OS: Debian 9 or higher, Ubuntu 16 or higher

  • Master nodes: AWS t3a.medium instances, 2 vCPU, 4 GB RAM, EBS storage, up to 5Gbps bandwidth

  • Worker nodes: AWS c5.2xlarge instances, 8 vCPU, 16 GB RAM, EBS storage, up to 10Gbps bandwidth, up to 4750Mbps EBS throughput

  • Storage: Amazon Elastic File System (EFS) standard is the solution used to provide persistent storage in Amazon Web Services deployments.

Network requirements

Firewall rules and security considerations

Quobis Communication Platform needs communication between cluster nodes (master and workers) and also needs to expose different ports to allow voice and video calls. The following table summarizes the required network ports to configure:

Networking requirements

Req # | Origin         | Destination | Destination IP | Destination ports | Protocol | Service   | Encryption  | Description
1     | WebRTC clients | AS          | AS IP          | 80,443            | TCP      | WSS       | SSL         | Application traffic
2     | WebRTC clients | MS          | IP GW-SFU      | 20000-29999       | UDP      | DTLS-SRTP | SSL         | WebRTC media
3     | WebRTC clients | MS          | IP GW-TURN     | 3478              | UDP      | STUN      | Unencrypted | TURN (NAT discovery)
4     | WebRTC clients | MS          | IP GW-TURN     | 443               | TCP      | COTURN    | TLS         | TURN (media relay)

There are other rules that need to be applied only when there is a SIP trunk connection with an external SIP entity such as a PBX, PSTN via SBC, etc. (rules 5,6,7) or when the deployment includes mobile clients (rules 8, 9 and 10) as per the following table:

Advanced networking requirements

Req # | Origin         | Destination | Destination IP | Destination ports | Protocol | Service  | Encryption  | Description
5     | SIP trunk      | MS          | IP GW-audio    | 5060              | UDP      | SIP      | Unencrypted | SIP signaling
6     | SIP trunk      | MS          | IP GW-audio    | 5061              | TCP      | SIP      | TLS         | SIP signaling
7     | SIP trunk      | MS          | IP GW-audio    | 20000-29999       | UDP      | RTP/RTCP | Unencrypted | SIP audio (optionally encrypted using SRTP)
8     | Quobis AS      | AS          | AS             | 5222              | TCP      | XMPP     | TLS         | Messaging from mobile apps
9     | WebRTC client  | Apple APNS  | 17.0.0.0/8     | 443               | TCP      | HTTP/TLS | SSL         | Mobile push notifications
10    | Quobis AS      | Apple APNS  | 17.0.0.0/8     | 2195,2196         | TCP      | HTTP/TLS | SSL         | Mobile push notifications

Note: rule #5 refers to the port used to exchange traffic between the SIP proxy and the core SIP network elements.

Note

Websocket connection timeouts: we minimize the amount of keepalive messages in order to optimize the connection setup time and the amount of traffic exchanged between our SDKs and the backend. To avoid connectivity problems, we recommend setting high traffic timeouts (3600 s) in any reverse proxies (e.g. F5 solutions) between the clients and the Quobis WAC cluster. This prevents websocket connections (RFC 6455) from being abruptly closed.
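As an illustration, a reverse proxy in front of the cluster could apply those timeouts as follows. This is a minimal NGINX sketch only; the location path and upstream name are assumptions, not part of the product configuration:

```nginx
location /ws {
    proxy_pass http://wac_cluster;           # hypothetical upstream name
    proxy_http_version 1.1;                  # required for the websocket upgrade
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;                # keep idle websockets open for 1 h
    proxy_send_timeout 3600s;
}
```

Equivalent settings exist in other reverse proxies; the key point is the 3600 s idle timeout recommended above.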

Sizing ports for the SIP media connections

As explained in rule #7 above, a range of UDP ports needs to be opened in order to allow media traffic from the SIP trunk connection. The following considerations must be taken into account:

  • Non-ParallelSIP scenarios: the number of ports can be obtained as AverageNumConcurrentConferences * 2 (one port for RTP, one port for RTCP)

  • ParallelSIP scenarios: the number of ports can be obtained as AverageNumConcurrentConferences * 4
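These two rules can be sketched as a small helper (illustrative only; the function name is ours, not part of the product):

```python
def sip_media_ports(avg_concurrent_conferences: int, parallel_sip: bool = False) -> int:
    """UDP ports to open for SIP media from the trunk.

    Each conference uses one RTP and one RTCP port; ParallelSIP
    scenarios double that requirement.
    """
    ports_per_conference = 4 if parallel_sip else 2
    return avg_concurrent_conferences * ports_per_conference
```

For example, 100 average concurrent conferences require 200 ports, or 400 with ParallelSIP.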

Sizing ports for the WebRTC media connections

As explained in rule #2 in the table above, a range of UDP ports needs to be opened in order to allow WebRTC media traffic from/to the WebRTC endpoints. As explained in the media management section, Quobis WAC uses an SFU (Selective Forwarding Unit) to deliver the video flows to all the participants in a video conference. This means that the media servers send an individual flow of the video published by each participant to each and every participant of the conference, simultaneously. Taking this fact into account, the number of used ports depends on two variables:

  • the number of simultaneous users of the platform

  • the average number of participants at each conference

The number of ports used in a single conference can be computed from the following parameters:

  • NumParticipants = Number of participants in the conference.

  • NumPortsPerConference = Total number of used UDP ports used in a conference.

  • NumPortsVideoSubscribers = Number of UDP ports used for video per conference to receive video from the SFU.

  • NumPortsVideoPublishers = Number of UDP ports used for video per conference to send video to the SFU.

  • NumPortsAudio = Number of UDP ports used for audio per conference. For audio the same port is used to publish and subscribe.

and applying this formula:

  • NumPortsVideoSubscribers = (NumParticipants - 1) * NumParticipants

  • NumPortsVideoPublishers = NumParticipants

  • NumPortsAudio = NumParticipants

  • NumPortsPerConference = NumPortsVideoSubscribers + NumPortsVideoPublishers + NumPortsAudio

Please note that this formula only applies if all the participants are publishing video and subscribing to everyone else's video. This represents the worst case in terms of port usage. If a user is neither subscribed nor publishing video, those ports should not be counted in the formula. Also note that this refers to the number of open ports per media server; if your architecture includes several media servers, the ports are distributed amongst them. As a reference, the table below shows the correspondence between the number of participants in a conference and the number of ports used:
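The formula above can be written as a short helper (a sketch; the function name mirrors the definitions in this section):

```python
def ports_per_conference(num_participants: int) -> int:
    """Worst-case UDP ports used by one conference, assuming every
    participant publishes video and subscribes to everyone else."""
    video_subscribers = (num_participants - 1) * num_participants
    video_publishers = num_participants
    audio = num_participants  # one port per participant, publish and subscribe
    return video_subscribers + video_publishers + audio
```

For instance, a 4-party conference uses 20 ports, matching the reference table in this section.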

Number of ports used in a conference room

Participants in the conference | Ports used in the conference
2                              | 6
3                              | 12
4                              | 20
8                              | 72
16                             | 272
32                             | 1056
64                             | 4160

As can be clearly seen, the figures grow quadratically: for conferences above 32 participants, with all of them publishing video at the same time, the number of ports is huge, and so is the bandwidth used. We can estimate the number of open ports needed by taking into account the number of simultaneous users of the service and the average number of participants per conference. This gives a good estimate of the number of ports that need to be configured to avoid any service disruption if the limit is reached. To calculate the right number of UDP ports to open, consider the following definitions:

  • SimultaneousUsers = number of users connected to a conference at the same time (do not confuse with the number of registered users).

  • AverageNumConferenceParticipants = average number of participants in a conference.

  • AverageNumConferences = average number of conferences.

  • TotalPorts = Total number of ports required by the system.

  • NumPortsPerConference = number of ports per conference calculated using the formula previously defined.

and compute the following formula:

  • AverageNumConferences = ceil (SimultaneousUsers / AverageNumConferenceParticipants)

  • TotalPorts = NumPortsPerConference(AverageNumConferenceParticipants) * AverageNumConferences
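Putting both formulas together (an illustrative sketch; AverageNumConferences is rounded up with ceil, as defined above):

```python
import math

def ports_per_conference(num_participants: int) -> int:
    """Worst-case UDP ports used by one conference (all publish video)."""
    return (num_participants - 1) * num_participants + 2 * num_participants

def total_ports(simultaneous_users: int, avg_participants: int) -> int:
    """Estimated total UDP ports required per media server."""
    avg_conferences = math.ceil(simultaneous_users / avg_participants)
    return ports_per_conference(avg_participants) * avg_conferences
```

For example, 500 simultaneous users in conferences of 4 participants on average require 2500 ports.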

As a reference, the table below shows the number of estimated ports required depending on SimultaneousUsers and AverageNumConferenceParticipants (calculations per media server):

Total number of ports required per media server

SimultaneousUsers | AverageNumConferenceParticipants | Total ports used
100               | 4                                | 500
100               | 8                                | 900
500               | 4                                | 2500
500               | 8                                | 4500
1000              | 4                                | 5000
1000              | 8                                | 9000

The figures above should give a conservative value; however, if a shortage of ports is detected under high load, the range must be increased. If using such a wide range of ports is a problem, consider the use of a TURN server, which can multiplex the media coming from and going to different endpoints on a single UDP port. Plain UDP remains the preferred option in terms of simplicity and performance, however.

Sizing of media servers required according to traffic load

In summary, each of the MS worker nodes referred to above must open the indicated number of ports, depending on the audio codec, in order to cover the main available scenarios:

  • 2,000 open ports per SFU + MCU (Opus) -> 200 concurrent calls (cc) per media server

  • 4,000 open ports per SFU + MCU (G711) -> 400 concurrent calls (cc) per media server

Bandwidth requirements

Administrators need to take into account how much bandwidth is going to be required to cope with the application traffic, especially for video applications. This calculation is directly related to the codecs in use and the quality allowed for each video stream. Please check the media quality section to understand the codec choice and how much bandwidth is required in order to provide a good quality of service.
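As a rough illustration of why codec and stream quality matter, the SFU model described earlier implies per-conference server bandwidth like the following. This is a sketch only; the 1 Mbps per video stream is an assumed figure, not a product specification:

```python
def conference_bandwidth_mbps(num_participants: int, stream_mbps: float = 1.0):
    """Approximate SFU bandwidth for one conference where everyone
    publishes video: N inbound streams, N*(N-1) outbound streams."""
    inbound = num_participants * stream_mbps
    outbound = num_participants * (num_participants - 1) * stream_mbps
    return inbound, outbound
```

For a 4-party conference at the assumed 1 Mbps per stream, the SFU receives 4 Mbps and sends 12 Mbps.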

Additional requirements

There are other requirements that need to be fulfilled before starting the installation process:

  • Remote access must be guaranteed to master and worker nodes during the installation process.

  • Internet access is needed on all the nodes during the installation procedure in order to download external dependencies. Once the Quobis Communication Platform is installed and running, the internet connection can be removed for security reasons.

  • Quobis Registry credentials (username and token) are needed to download the Docker images from registry.quobis.com during installation and update procedures

  • Gitlab credentials to download the Ansible repository at https://gitlab.com/quobis/devops/sippo-k8s

  • Domain and valid certificates: a public domain and the SSL certificates have to be available to access the Quobis Communication Platform from a browser.

  • Public IP addresses: Each Media Server (MS) worker needs a public IP address in order to manage video and audio of WebRTC calls. The IP traffic can be forwarded from an external firewall.

Sizing of media server nodes

The Quobis platform deals with video and audio in different ways, as explained in detail in the media management section:

  • The audio is managed in an MCU (Multipoint Control Unit), also referred to as the audiomixer. The audio coming from all the participants is mixed here.

  • The video is managed in an SFU (Selective Forwarding Unit). The video coming from all the participants is relayed to the rest of participants.

There are two pods which consume CPU cycles in the media server nodes:

  • Audiomixer: it consumes CPU mainly in two tasks:

    • Opus transcoding: the media coming from the WebRTC endpoints (browsers and mobile SDKs) needs to be transcoded before being mixed. Opus transcoding is a CPU-intensive task and, proportionally, the one that consumes the most CPU cycles. If PCMU/PCMA (G711) is used instead of Opus, the CPU consumption is much lower.

    • Audiomixing: the mixing of audio, once transcoded, also requires CPU real-time processing.

  • SFU: it consumes CPU to keep the media connections open between browsers/mobile apps and the SFU. These connections are called PeerConnections in WebRTC terminology. The number of open PeerConnections increases linearly with the number of simultaneous media sessions and quadratically with the number of participants in each session.

Taking into account the factors above and an average distribution of users in conferences with different numbers of participants, we can consider the following sizing rules:

  • One node (8 vCPU) for every 200 cc using Opus

  • One node (8 vCPU) for every 400 cc using PCMU/PCMA.
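Those rules translate into a simple capacity estimate (illustrative only; names are ours):

```python
import math

def media_server_nodes(concurrent_calls: int, codec: str = "opus") -> int:
    """8 vCPU media server nodes needed: 200 cc per node with Opus,
    400 cc per node with PCMU/PCMA (G711)."""
    calls_per_node = {"opus": 200, "pcmu": 400, "pcma": 400}[codec.lower()]
    return math.ceil(concurrent_calls / calls_per_node)
```

For example, 500 concurrent Opus calls need 3 nodes, while the same load in PCMU needs 2.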

Sizing of application server nodes

Besides the media processing, the CPU consumption of the application pods must also be considered. The CPU processing will depend on several factors:

  • the number of simultaneous active sessions

  • the number of concurrent calls.

  • the number of calls per second.

  • the use of presence. Presence can be an issue, for example, in large organizations where each user receives an update on every presence change; a huge average number of contacts per user leads to higher CPU utilization.

  • provisioning processes through the REST API may lead to peak CPU usage.

  • Sessions from browser-based apps remain active while the web app is open.

  • Sessions from mobile apps remain active only while the app is in the foreground.

Taking into account the factors above we can consider the following sizing rules:

  • one 8vCPU node per 1,000 simultaneous sessions when presence is enabled.

  • one 8vCPU node per 2,000 simultaneous sessions when presence is not enabled.
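Similarly, the application-node rule can be estimated as follows (a sketch under the sizing rules above):

```python
import math

def app_server_nodes(simultaneous_sessions: int, presence_enabled: bool = True) -> int:
    """8 vCPU application nodes: 1,000 sessions per node with presence
    enabled, 2,000 sessions per node without it."""
    sessions_per_node = 1000 if presence_enabled else 2000
    return math.ceil(simultaneous_sessions / sessions_per_node)
```

For instance, 2,500 simultaneous sessions need 3 nodes with presence enabled, or 2 without it.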

Please consider the rules above as references, since the load may differ depending on the use cases. Active monitoring of the platform is quite useful to adjust the number of nodes.

Installing and configuring Ansible

The Quobis Communication Platform installation requires a workstation from which all Ansible and other script commands are issued. The node where Ansible is installed can be part of the cluster or a separate dedicated machine. The idea behind this is to have a single element on which support access is focused: no direct access to the working cluster is required, just access to this workstation.

Check the following requirements on the machine where you are going to run the Ansible installer:

  • Supported OS: Debian 9 or higher and Ubuntu 16.04 or higher

  • At least 2GB of RAM

  • Ansible version 2.9.6 or higher

  • Ensure that all the target machines have network visibility from the machine running the Ansible installer and are reachable via an SSH connection.

  • sudo is installed and an accessible user is included in sudoers: sudo usermod -aG sudo <host_username>

Installing Ansible on Ubuntu

To configure the PPA on your machine and install Ansible run these commands:

$ sudo apt update
$ sudo apt install software-properties-common
$ sudo apt-add-repository --yes --update ppa:ansible/ansible
$ sudo apt install ansible

Note: On older Ubuntu distributions, "software-properties-common" is called "python-software-properties". You may want to use apt-get instead of apt in older versions. Also, be aware that only newer distributions (in other words, 18.04, 18.10, and so on) have a -u or --update flag, so adjust your script accordingly.

Installing Ansible on Debian

Debian users may leverage the same source as the Ubuntu PPA.

Add the following line to /etc/apt/sources.list:

deb http://ppa.launchpad.net/ansible/ansible/ubuntu trusty main

Then run these commands:

$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367
$ sudo apt update
$ sudo apt install ansible

Note: This method has been verified with the Trusty sources in Debian Jessie and Stretch but may not be supported in earlier versions. You may want to use apt-get instead of apt in older versions. Further documentation about Ansible installation can be found in the Ansible website.

Preparing the environment

1.- Download the repository from Gitlab on the same host where you have installed Ansible. Please select the version that applies; the recommended one is always the latest GA:

git clone --depth 1 --branch 2.22.0 https://gitlab.com/quobis/devops/sippo-k8s.git  # use this branch for v4.4.0

2.- Create a new inventory; you can copy the "default" inventory and use it as a reference.

cp -r sippo-k8s/ansible/inventory/default sippo-k8s/ansible/inventory/<your_company_name>

3.- Remove the default passwords from the vault inventory in order to add the new ones (replace <your_inventory_name> with your actual inventory name):

cat /dev/null > sippo-k8s/ansible/inventory/<your_inventory_name>/group_vars/all/vault.yml

4.- Set up the cluster hosts' IP addresses and credentials to allow Ansible to execute the automated tasks. You will find an inventory file in which all entries enclosed in brackets [] are referred to as hosts. On the Kubernetes deployment you will find these five types of hosts:

  • [master] The machines where kubectl runs. All the management is done from these machines; they are the control plane of the deployment. Three instances are required when deploying on VMware.

  • [node] Worker nodes, the ones that do the job. They handle traffic, run services, etc. A minimum of 3, scaling up with your traffic needs.

  • [nginx] If your system requires an external entry point, this is it (only for on-prem, AWS uses the ELB service)

  • [nfs] Required only on on-premises deployments; it will handle a distributed storage system (AWS uses the EFS service)

  • [turn] Machine where the TURN server is going to be deployed.

For each host you must indicate:

  • The node’s IP address for SSH connection (reachable from Ansible execution machine).

  • ansible_user user name used by Ansible to log into the remote machine.

  • ansible_ssh_pass user pass used by Ansible to log into the remote machine.

  • ansible_sudo_pass sudo pass needed by the ansible_user to log as administrator on the remote machine

Note

Use single quotes to enclose values that include spaces, such as passwords or file locations

This is the content of sippo-k8s/ansible/inventory/<your_company_name>/hosts; edit it with Vim or any other text editor of your choice:

[master]
192.158.1.38 ansible_user=<user> ansible_ssh_pass=<password> ansible_sudo_pass=<password> ansible_ssh_private_key_file=/home/user/.ssh/key.pem ssh_key_file='~/.ssh/id_rsa'

[node]
192.158.1.39 ansible_user=<user> ansible_ssh_pass=<password> ansible_sudo_pass=<password>
192.158.1.40 ansible_user=<user> ansible_ssh_pass=<password> ansible_sudo_pass=<password>

[nginx]
192.158.1.39 ansible_user=<user> ansible_ssh_pass=<password> ansible_sudo_pass=<password>

[nfs]
192.158.1.38 ansible_user=<user> ansible_ssh_pass=<password> ansible_sudo_pass=<password>

[turn]
192.158.1.40 ansible_user=<user> ansible_ssh_pass=<password> ansible_sudo_pass=<password>

5.- Certificates installation

Now you must provide the certificates used to expose your application; prepare them before starting the deployment. We created a folder in the Ansible installer that helps you have everything ready before starting.

Save in sippo-k8s/ansible/ssl/:

  • certName.crt, certName.key files: public and private certificates to be used on the NGINX public interface, as this is the main entry point.

  • APIcertName.crt, APIcertName.key files: public and private certificates to be used on the NGINX internal interface. Any self-signed certificate will do. The sippo-server component uses it on the wiface1 interface.

Then, save the following certificates in kubernetes/prosody/configmaps/conf/certs.d/:

  • apn.cer, apn.crt.pem, apn.key and apn.key.pem files: public valid certificate for Apple push services.

  • voip_services.cer, voip_services.cer.pem files: public valid certificate for Apple VoIP services.

6.- Set up the Quobis Communication Platform variables file in order to configure the deployment according to your needs. You can use Vim or any other text editor of your choice (replace <your_company_name> with the actual name that you previously set up in step #2):

Password management

Passwords are stored encrypted in the group_vars/all/vault.yml file. Some of them must be entered in this file manually. The remaining passwords can be generated with the following command, which outputs further commands that paste the generated keys into the vault:

ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "passwords-deploy"

Which should output something similar to:

ansible-vault encrypt_string --vault-id <inventory name>@prompt 'HMdM2wUb2otnyuE5XagpStLgoTxJCMXgdZs25Tu4S8xW0IKIkps9oRhdNmlZbxkx' --name 'grafanaPassword' >> <inventory name>/group_vars/all/vault.yml

The vault password requested on the console will be used to encrypt all passwords. Paste all the generated commands into the terminal to copy the passwords into your vault. This is an example of the password format in the vault.yml file:

grafanaPassword: !vault |
$ANSIBLE_VAULT;1.1;AES256383463323231333965636435346436646537316261636462333231313865663037303166346439356266326230386662393938346664313763396366313734640a393532613962363965646362653733323463346465366637636238393430643635343230313035383362633661653363336238313333373438663762326532610a3831353035343139373230613731333339653464303233333334623836333935

Additionally, do not forget to include the dockerPassword with the registry credential provided by Quobis:

ansible-vault encrypt_string --vault-id <inventory name>@prompt '<registry-token>' --name 'dockerPassword' >> <inventory name>/group_vars/all/vault.yml

Ansible inventory configuration parameters

Now it's time to configure the Ansible playbook. Playbooks are the files where Ansible code is written, in YAML format ("YAML Ain't Markup Language"). Playbooks are one of the core features of Ansible and tell Ansible what steps the user wants to execute on a particular machine in a repeatable and reusable way, well suited to deploying complex applications. They are like a to-do list for Ansible that contains a list of tasks.

As an example, these are the first lines of the default variables file main.yml. You need to change the value of some parameters according to your type of deployment. The following tables explain which of these values must be changed and how:

vim sippo-k8s/ansible/inventory/<your_company_name>/group_vars/all/main.yml

#
# Deployment variables
#
registryServer: registry.quobis.com
dockerUsername: username
dockerEmail: username@quobis.com


# Type of infrastructure in deployment. You can choose between AWS or onprem
# You can select if your infrastructure will have persistent data (databases, upload files, etc) or not

infrastructure: 'onprem'
persistent: true
orchestrator: "k8s"
maxStorageQuota: '60Gi' # Select the maximum disk space to keep databases, upload files, list of domains, etc
syncDate: true          # Select if containers synchronization will be done from host machines
timeZone: /usr/share/zoneinfo/Europe/Madrid # If syncDate is true, the timezone from the host machines to be synchronized

... more parameters below ...

Registry credentials

Docker credentials

Parameter      | Value               | Comment
registryServer | registry.quobis.com | Do not change
dockerUsername | (blank)             | Chosen username
dockerEmail    | (blank)             | Chosen email

Deployment options

Configuration parameters

Parameter       | Value                              | Comment
infrastructure  | 'onprem'                           | Type of infrastructure in the deployment. Options: 'amazon' or 'onprem'
persistent      | true                               | Select whether your infrastructure will have persistent data (databases, uploaded files, etc.)
orchestrator    | "k8s"                              | Type of orchestrator to deploy the cluster. Options: k8s (production) and k3s (lab only)
maxStorageQuota | '60Gi'                             | Maximum disk space to keep databases, shared files, list of domains in the messaging server and backups
syncDate        | true                               | True if the containers' synchronization will be done from the host machines
timeZone        | /usr/share/zoneinfo/Europe/Madrid  | If syncDate is true, the timezone from the host machines to be synchronized

Container versions

This is the list of the versions of each container for version 4.4.0:

Component versions for v4.4

Element            | Version
teal               | 1.2.0
sippo server       | 22.0.3
qss                | 4.18.0
oauth2-proxy       | 1.16.3
xmpp-server        | 2.4.2
janus-dispatcher   | 1.5.3
erebus             | 1.6.2
janus-wrapper      | 1.18.3
recording-watchdog | 3.5.1
sippoSDK-JS        | 29.2.0
sippoSDK-Android   | 0.14.0
sippoSDK-iOS       | 3.0.0
sippoSDK-cpp       | 0.1.0
webphone           | 5.27.2
c2c                | 1.1.0
sippo-manager      | stable-2.2.2
sippo-maintainer   | 1.2.7-kubernetes1.15.3
sippo-exporter     | 1.6.0
sippo-k8s          | 2.22.0
kapi               | 1.5.2

And this is the list of the 3rd party components:

Third-party component versions for v4.4

3rd party element                | Version
NodeJS runtime                   | 8.0
message-broker (RabbitMQ)        | 3.7
database (MongoDB)               | 4.2
reverse-proxy (Nginx)            | 1.18.0
cluster-ingress (Nginx-ingress)  | 0.33.0
chat-database (PostgreSQL)       | 12-alpine
audiomixer (Asterisk)            | 18.2.0
sfu (Janus)                      | 0.9.5
sip-proxy (Kamailio)             | 5.4.4
sip-proxy-db (MySQL)             | 5.7
turn-server (CoTURN)             | 4.5.1.1
monitoring.ui (Grafana)          | 7.3.7
log-database (Loki)              | 1.4.0
monitoring-database (Prometheus) | v2.22.2

Please note that the format changes where repeated values are possible, such as with the parameter "taggedMonitoringMachines". In this case, each value name starts with a hyphen (-):

tagged: false

taggedSippoMachines:
    - name: kubernetes-node1
    - name: kubernetes-node2

taggedMonitoringMachines:
    - name: kubernetes-node1

Nginx

This configuration is only required for on-premises deployments.

Parameter            | Value                       | Comment
serverName           | <server_name>               |
nginx_key            | <SSL_certificate_name>.key  |
nginx_crt            | <SSL_certificate_name>.crt  |
nginx_sapi_key       | <SSL_certificate_name>.key  | New in version v4.3
nginx_sapi_crt       | <SSL_certificate_name>.crt  | New in version v4.3
ingressListeningPort | 32639                       | Port where http_proxy is listening (internally). No need to expose it externally

WAC

Parameter              | Value                 | Comment
defaultSippoServerAuth | true                  | If true, enable default WAC OAuth authentication
dev                    | false                 | If true, include a static DNS server name entry in the WAC container
mailHost               | 192.158.1.39          | IP of the email server
mailPort               | 25                    | Port of the email server
mailFrom               | change@me.com         | Address that Quobis WAC uses to send emails
meetingLinkHost        | https://<server_name> |
meetingPath            | m                     | Meetings URL path, defaults to https://servername/m/meeting-id
pstnExtension          |                       | PSTN-reachable dial-in number to join a Quobis WAC meeting in E.164 format
pushActivated          | false                 | If true, push notifications in mobile apps are activated
startUpTimeout         | 15                    | WAC start-up timeout in seconds

Quobis collaborator

The following parameters are only required when installing Quobis collaborator. If you are deploying the system and don't plan to deploy Quobis collaborator, you can leave them blank.

Parameter

Value

Comment

customWebphoneStaticsSounds

false

Custom Sounds

customWebphoneStaticsI18n

false

Custom icons

customWebphoneStaticsFonts

false

Custom fonts

customWebphoneStaticsImages

false

Custom image

showContentInfo

true

Show instructions to the end user about how to use Quobis collaborator

webphoneDocumentTitle

<name of your system>

Select the HTML title of the document

webphoneDefaultDomain

quobis

Enforces a domain in the login process. It cannot be left blank, as it is also used for anonymous users

webphoneOauthRedirectUri

“{{ serverName }}/o/adfs/callback”

Redirect URL for the OAuth login on the OAuth server when using oauth2proxy

QSS

The following table lists the configuration parameters of the signaling server (QSS):

Parameter

Value

Comment

activateParallelSip

false

When true, parallel SIP routing is activated

voiceMailExtension

999

VoiceMail extension number

O2P

Parameter

Value

Comment

clientID

(blank)

External authentication client ID

XMPP server

The XMPP server can use either internal storage or an external database (recommended). If xmppDatabase is set to true, PostgreSQL will be used and a valid version must also be entered.

Parameter

Value

Comment

maxHistoryMessages

50

xmppDatabaseUsername

prosody_user

Username of the XMPP database. Does not need to be changed unless changes are made in the XMPP server

xmppDatabaseName

prosody

Name of the XMPP database. Does not need to be changed unless changes are made in the XMPP server

xmppTLS

false

Enable XMPP messaging over TLS

postgreSqlVersion

“12.1-alpine”

Version of PostgreSQL that the XMPP server is using. Does not need to be changed unless changes are made in the XMPP server
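
For example, a hedged sketch of an XMPP server backed by an external PostgreSQL database, using the defaults above (the flat-key layout is an assumption):

```yaml
# Hypothetical excerpt: XMPP server backed by an external PostgreSQL database.
xmppDatabase: true
xmppDatabaseUsername: prosody_user
xmppDatabaseName: prosody
xmppTLS: false
postgreSqlVersion: "12.1-alpine"
maxHistoryMessages: 50
```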

Ingress

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. It includes two network-protection features: anti-DoS protection and CORS.

Parameter

Value

Comment

assigToNode

true

Deploy nginx-ingress in nodeNginxIngress

nodeNginxIngress

ops-ci-node2

commonName

‘*.quobis.com’

Must be enclosed in quotes

wildCard

true

If true, commonName must have format of wildcard certificate

ddosProtection

false

When true, DDoS protection is activated

rps

5

If DDoS protection is activated, maximum requests per second allowed to all endpoints

rpsHigh

20

If DDoS protection is activated, maximum requests per second allowed to Quobis collaborator endpoints

corsDomains

“domain1.com|domain2.net”

List of domains allowed with CORS, separated by a pipe

enableTeal

false

Teal deployment blue/green

cookieName

canary
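
As an illustration, a hedged sketch of an ingress block with DDoS protection and CORS enabled, built from the values above (the flat-key layout is an assumption):

```yaml
# Hypothetical excerpt: ingress with DDoS protection and CORS enabled.
commonName: '*.quobis.com'    # must be quoted
wildCard: true
ddosProtection: true
rps: 5                        # max requests/second to all endpoints
rpsHigh: 20                   # max requests/second to Quobis collaborator endpoints
corsDomains: "domain1.com|domain2.net"   # pipe-separated list of allowed domains
```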

Certificates

These parameters are required only when using Let’s Encrypt (https://www.letsencrypt.org/). Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. It is a service provided by the Internet Security Research Group (ISRG).

Parameter

Value

Comment

letsEncript

false

awsRegionLetsEcript

<amazon_region>

provider

(blank)

acmeDomains

(blank)

letsEncriptEmail

(blank)

Email to be used in the LetsEncrypt account

Media server

Parameter

Value

Comment

codec

“vp8”

sfuWrapper: labelWrapper

sfu1

Label for the wrapper, at least one is needed. Defaults to “sfu1”

sfuWrapper: ipJanus

10.1.21.88

IP of the Janus SFU

sfuWrapper: name

audiomixer

sfuWrapper: node

ms1

asteriskVersion

18.2.0

Version of the Asterisk service used as audiomixer.

confbridgeTimeout

28000

janusVersion

0.9.5

mediaServersInK8s

false

Deploy Asterisk, Janus and Kamailio in K8s cluster

webRtcIp

20000

Exposed Janus public IP to media stream

webRtcStartPort

20499

Exposed Janus start port to media stream

webRtcEndPort

21000

Exposed Janus end port to media stream

taggedMediaMachines.name

kubernetes-janus

taggedMediaMachines.tag

ms1
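
The “sfuWrapper:” prefixed parameters suggest a list of wrapper entries; a hedged sketch follows (the nesting is an assumption inferred from the parameter names above, not a confirmed layout):

```yaml
# Hypothetical layout of the sfuWrapper entries; the nesting is assumed
# from the "sfuWrapper:" prefixed parameter names above.
sfuWrapper:
  - labelWrapper: sfu1        # at least one wrapper is needed, defaults to "sfu1"
    ipJanus: 10.1.21.88       # IP of the Janus SFU
    name: audiomixer
    node: ms1
taggedMediaMachines:
  - name: kubernetes-janus
    tag: ms1
```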

SIP integration

Parameter

Value

Comment

announceOnlyUser

false

Play an announcement if the user is alone in the conference room

rtpIp

10.1.21.93

Audiomixer external media IP

rtpStartPort

10000

Audiomixer start port for RTP port range

rtpEndPort

10249

Audiomixer end port for RTP port range

kamailioVersion

5.4.4

Kamailio and registry image version

sip-proxy.name

sip-proxy-1

Kamailio deployment pod name

sip-proxy.node

ms1

Kamailio deployment node

sip-proxy.sipIp

10.1.21.93

Kamailio deployment IP

sip-proxy.sipIpPrivate

10.1.21.93

Kamailio deployment private SIP IP

sipIp

10.1.21.93

Kamailio external SIP IP

sipPort

5060

Kamailio external SIP port

pbxIp

10.1.3.2

External endpoint SIP IP

pbxPort

5060

Customer endpoint SIP port

pbxDomain

R-URI, From and To domain used in requests

sipRegistration

false

Send REGISTER and authenticated REGISTER requests from Kamailio to PBX

sipAuthentication

false

Authenticate INVITE and REFER from external PBX to Kamailio

MWI

false

Message Waiting Indicator

multiLocation

false

location.url:

(blank)

location.group

(blank)

An example of a High-Availability SIP proxy configuration follows:

sipProxy:
  - name: sip-proxy-1
    node: ms1
    sipIp: 10.1.21.93
    sipIpPrivate: 10.1.21.93
  - name: sip-proxy-2
    node: ms1
    sipIp: 10.1.21.96
    sipIpPrivate: 10.1.21.96

TURN server

The TURN server is installed as a service, not in the Kubernetes cluster.

Parameter

Value

Comment

turnServerEnabled

True

When true, the TURN server is installed

turnDomain

(blank)

TURN server public domain

turnInternalIP

(blank)

Internal media server IP

turnPublicIP

(blank)

TURN server media server IP

turnPort

443

Port to be used by the TURN server

Recording

enableRecording

False

enableEncryption

False

File publickey.acs must be included in kubernetes/recording

encryptionMail

(blank)

recordingVersion

3.4.0

recordingType

none

Must be one of [none, video, audio, all] as explained in the recording section

Kubernetes API

Parameter

Value

Comment

kapiVersion

1.5.2

enableKapiui

True

Teal service

Please note that the password for this configuration must be defined in the vault, not here.

Parameter

Value

Comment

tealVersion

1.2.0

tealDatabaseName

teal_prod

tealDatabaseUsername

teal

Storage

The first two parameters apply to on-prem installations only; the following ones apply to AWS configurations only.

Parameter

Value

Comment

nfsServer.ip

10.1.21.27

nfsPath

“/home/nfs”

efsServer.efsFileSystemId

fs-XXXXXXXXXXXXXXX

efsServer.awsRegion

eu-west-3

efsProvisionerVersion

v2.4.0

efsPath

“/home/nfs”
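
As an illustration, a hedged sketch of the two storage variants using the values above (the nested layout is an assumption inferred from the dotted parameter names):

```yaml
# Hypothetical excerpt: the first block applies on-prem, the second on AWS.
nfsServer:
  ip: 10.1.21.27
nfsPath: "/home/nfs"

efsServer:
  efsFileSystemId: fs-XXXXXXXXXXXXXXX   # placeholder EFS filesystem id
  awsRegion: eu-west-3
efsProvisionerVersion: v2.4.0
efsPath: "/home/nfs"
```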

Database

Parameter

Value

Comment

mongoAtlas

False

If true, the MongoDB Atlas service is used instead of a local one (https://www.mongodb.com/es/cloud/atlas)

enableMongoOperator

true

If true, the database is deployed inside the cluster

noDnsResolution

false

Set to true when the database is hosted in external servers whose hostnames do not resolve via DNS, so that the entries are added to each container

replica

True

For multiple instances of the database deployment, set replica: true and use a comma-separated array in the databaseUrl parameter

databaseVersion

4.2.6

mongoAtlasUrl

“mongodb+srv://user:{{ mongoAtlasPassword }}@qa0-lo8k4.mongodb.net”

databaseUrl

“quobis:{{ mongoPassword }}@database-mongo-0.database-mongo-svc.{{ namespace }}.svc.cluster.local”

MongoDB connection string. Use a comma-separated array of hosts when replica is true

databases.hostname

(blank)

Hostname of the Mongo database (eg: mongo1.internal.quobis.com)

databases.ip

(blank)

IP address of the Mongo database
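
For example, with replica enabled the databaseUrl parameter takes a comma-separated list of hosts; a hedged sketch follows (the second hostname is an illustrative assumption):

```yaml
# Hypothetical replica configuration; database-mongo-1 is an assumed second host.
replica: true
databaseUrl: "quobis:{{ mongoPassword }}@database-mongo-0.database-mongo-svc.{{ namespace }}.svc.cluster.local,database-mongo-1.database-mongo-svc.{{ namespace }}.svc.cluster.local"
```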

Cluster maintainer

Parameter

Value

Comment

maintainerVersion

1.2.6-rc1-kubernetes1.15.3

deleteConferences

True

Delete conferences with state finished and older than CALL_HISTORY_DAYS days

callHistoryDays

1

Used to delete old conferences (see line above)

timeToPurgeChats

6

Purge Prosody chats older than {{ timeToPurgeChats }} months

timeToPurgeAnonymousUser

8

Purge anonymous users older than {{ timeToPurgeAnonymousUser }} hours

scheduleCron

0 * * * *

Must be enclosed in quotes

enableBackup

True

enableDisasterRecovery

False

backupFolder

(blank)

Name of backupFolder available to be used in the NFS or EFS storage
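
As an illustration, a hedged sketch of a cluster-maintainer block using the values above (the flat-key layout is an assumption):

```yaml
# Hypothetical excerpt: purge finished conferences older than 1 day, chats older
# than 6 months and anonymous users older than 8 hours, checking every hour.
deleteConferences: true
callHistoryDays: 1
timeToPurgeChats: 6
timeToPurgeAnonymousUser: 8
scheduleCron: "0 * * * *"     # must be quoted
```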

Quobis manager

Parameter

Value

Comment

grafanaHost

(blank)

grafanaUser

(blank)

grafanaVersion

7.3.7

lokiVersion

1.4.0

prometheusVersion

v2.22.2

sippoManagerVersion

stable-2.1.2

smanUsername

sman@quobis

It’s advisable to create a different user (admin role) for Quobis Manager to avoid sharing sippo-server admin credentials

sippoExporter

1.6.0

support_sql_email

(blank)

DBDROPDAYS

7

Homer database rotate period

hepServerIp

(blank)

HEP host IP

hepNode

(blank)

HEP host name

Launching the installation process

Once all the required configuration variables are set in the playbook, it’s time to let Ansible install the software into the target machines according to our configuration. This is achieved by using the ansible-playbook command, which runs the Ansible playbooks, executing the defined tasks on the targeted hosts according to a set of tags.

A Kubernetes-based deployment involves the use of tags in the ansible-playbook command. A tag is an attribute that you can set on an Ansible structure (plays, roles, tasks); when you run a playbook, you can then use --tags or --skip-tags to execute a subset of tasks. In other words, tags are a way of telling Ansible which tasks it should perform and which it should not. In our case, each tag name explains what it does, so it’s quite straightforward to understand what you are doing.

Please note that you don’t need to install all the tags listed below, as your deployment might not require all of them. For example, if you don’t plan to deploy a SIP integration, you can skip it.
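
As a sketch of how these invocations are composed, the helper below only prints the resulting command rather than running Ansible; the function name and the inventory name “myinv” are hypothetical:

```shell
# Hypothetical helper that composes the ansible-playbook invocation used in
# the installation steps. "myinv" is a placeholder inventory name; nothing
# is executed, the command line is only printed.
run_tag() {
  inventory="$1"; tag="$2"
  echo ansible-playbook -i "inventory/${inventory}" --vault-id "${inventory}@prompt" main.yml --tags "\"${tag}\""
}

run_tag myinv pre-installation
# prints: ansible-playbook -i inventory/myinv --vault-id myinv@prompt main.yml --tags "pre-installation"
```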

Install dependencies in the cluster nodes:

$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "pre-installation"

Install the Kubernetes layer and launch the cluster (only needed in onPremise deployments):

$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "deploy-cluster"

Upload the kubernetes folder to the remote master node:

$ ansible-playbook -i inventory/<inventory name> main.yml --tags "upload-files"

For on-prem deployments, install NFS service:

$ ansible-playbook -i inventory/<inventory name> main.yml --tags "nfs-install"
$ ansible-playbook -i inventory/<inventory name> main.yml --tags "nfs-deploy"

For AWS deployments, install the EFS services instead:

$ ansible-playbook -i inventory/<inventory name> main.yml --tags "efs-deploy"

For on-prem deployments, install Nginx service:

$ ansible-playbook -i inventory/<inventory name> main.yml --tags "nginx-install"
$ ansible-playbook -i inventory/<inventory name> main.yml --tags "nginx-deploy"

Create the PV and PVC needed by the cluster:

$ ansible-playbook -i inventory/<inventory name> main.yml --tags "persistent-deploy"

If the database is included in the Kubernetes cluster, you need to install the following tag as well:

$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "database-cluster-operator-deploy"

Deploy the core services required in any deployment:

$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "message-broker-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "erebus-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "storage-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "qss-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "sfu-dispatcher-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "sfu-wrapper-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "postgresql-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "xmpp-server-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "sippo-server-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "sfu-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "audiomixer-deploy"

Deploy Quobis collaborator, if required:

$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "webphone-deploy"

Deploy Nginx-ingress to have access to the cluster:

$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "nginx-ingress-deploy"

Deploy the cluster maintainer:

$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "sippo-maintainer-persistent-deploy"

Only in those scenarios where SIP integration is needed, deploy the SIP proxy:

$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "sip-proxy-persistent-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "sip-proxy-deploy"

Recording: depending on the type of storage, you will need to deploy the EFS or NFS persistent volumes to persist the recordings.

$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "recording-nfs-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "recording-efs-deploy"

Recording deployment

$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "recording-deploy"

Kubernetes API deployment

$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "kapi-deploy"

Server monitoring deployment

$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "monitoring-namespace-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "monitoring-persistent-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "prometheus-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "loki-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "promtail-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "sippo-exporter-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "node-exporter-deploy"
$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "grafana-deploy"

Database monitoring deployment

$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "support-sql-deploy"

SIP monitoring deployment

$ ansible-playbook -i inventory/<inventory name> --vault-id <inventory name>@prompt main.yml --tags "homer-deploy"

Checking the status of the platform

Once all previous steps have been completed and all components have been installed, it is necessary to make sure that each individual process taking part in the deployment is running.

In a Kubernetes-based deployment, these processes correspond to pods running inside the Kubernetes cluster. Pods are the smallest, most basic deployable objects in Kubernetes. A pod represents a single instance of a running process in your cluster. Pods contain one or more containers, such as Docker containers. When a pod runs multiple containers, the containers are managed as a single entity and share the pod’s resources.

To get the list of deployed pods, kubectl must be used. The kubectl command-line tool lets you control Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. From a technical point of view, kubectl is a client for the Kubernetes REST API. In our case, just run the following command in a terminal:

$ kubectl get pods

This should output something similar to this:

NAME                                                           READY   STATUS    RESTARTS   AGE
database-node-1-rc-8zcxp                                       1/1     Running   3          45d
database-node-2-rc-62ncz                                       1/1     Running   5          17d
database-node-3-rc-qmzvj                                       1/1     Running   2          45d
efs-provisioner-database-node-1-poc-688bb4786d-p6s2g           1/1     Running   9          5d
efs-provisioner-database-node-2-poc-6bd58d655-wmllz            1/1     Running   3          45d
efs-provisioner-database-node-3-poc-879d459db-sw25h            1/1     Running   18         12d
efs-provisioner-filesharing-poc-7bcfd6d848-bzwms               1/1     Running   1          45d
efs-provisioner-filestorage-poc-568899d484-bt2tz               1/1     Running   11         5d
efs-provisioner-filestorage-xmpp-server-poc-6fb5b75b9f-pnmkg   1/1     Running   0          8h
efs-provisioner-hosts-xmpp-server-poc-cfb496747-ddspb          1/1     Running   11         6d
message-broker-79944c564-6wkcr                                 1/1     Running   0          8h
qss-auth-http-5d697d678f-n67tz                                 1/1     Running   0          8h
qss-calltransfer-basic-865c4bc966-6kjd4                        1/1     Running   2          5d
qss-conference-state-95f4d9c78-mwlqx                           1/1     Running   0          8h
qss-invites-rooms-c68576695-f4hxf                              1/1     Running   1          5d
qss-io-websockets-547777bcfc-8lcsd                             1/1     Running   0          8h
qss-log-conference-76d74c9f75-5f2q5                            1/1     Running   1          5d
qss-meeting-basic-8569bc6f8-9flw7                              1/1     Running   0          8h
qss-peer-jt-597fdf9849-zd4hg                                   1/1     Running   1          5d
qss-registry-authenticated-56b6c68fc9-c84mp                    1/1     Running   0          8h
qss-resolver-wac-6f74dd4864-zsb8w                              1/1     Running   3          5d
qss-rooms-basic-d68994894-qgszk                                1/1     Running   0          8h
qss-trunk-asterisk-746bf4f6c7-brbwx                            1/1     Running   1          5d
qss-watchdog-invites-6c7784478-8bx26                           1/1     Running   0          8h
qss-watchdog-registry-7b8bcdb8f7-xnf5d                         1/1     Running   1          5d
sfu-dispatcher-69875b884f-5rkkd                                1/1     Running   0          12d
sfu-wrapper-sfu1-d95b8b8f4-4djv8                               1/1     Running   0          8h
sippo-server-6b7c6f7684-lsxp5                                  1/1     Running   3          5d
sippo-storage-65b9fb755b-8jjsl                                 1/1     Running   0          18d
sippo-storage-65b9fb755b-gs2xb                                 1/1     Running   0          8h
webphone-angular-587d965c9c-mv5p9                              1/1     Running   0          8h
xmpp-server-797f47dc67-cv5qf                                   1/1     Running   0          6d

In this list you should see at least one pod per role configured in the Ansible script, and all pods should have the STATUS column set to Running. If some element is missing, consider restarting the installation process, making sure all steps have been followed correctly.

If one or more pods have a status different from Running, those services have encountered an error. To show details of a specific pod and its related resources, run the following command in a terminal:

$ kubectl describe pod <name-of-the-pod>

The following is an example output for a describe of a sippo-server pod:

Name:           sippo-server-6b7c6f7684-lsxp5
Namespace:      poc
Priority:       0
Node:           ip-172-32-33-233.eu-west-1.compute.internal/172.32.33.233
Start Time:     Thu, 26 Dec 2019 11:13:21 +0100
Labels:         app=sippo-server
                pod-template-hash=2637293240
Annotations:    <none>
Status:         Running
IP:             100.96.4.103
IPs:            <none>
Controlled By:  ReplicaSet/sippo-server-6b7c6f7684
Containers:
sippo-server:
    Container ID:   docker://dddc63acfc156b252af7958601f6245bfd27f43f744ab003d792d5794b2ef015
    Image:          registry.quobis.com/quobis/sippo-server:19.2.0
    Image ID:       docker-pullable://registry.quobis.com/quobis/sippo-server@sha256:db316db5c79033fb7709f022b6c0d3dbe3c1ae6e938861c5c3cfe664c372f4bd
    Ports:          8000/TCP, 5678/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
    Started:      Thu, 26 Dec 2019 11:14:34 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
    /home/nfs/filesharing from kube-nfs-pvc-filesharing (rw)
    [...]

Conditions:
Type              Status
Initialized       True
Ready             True
ContainersReady   True
PodScheduled      True
Volumes:
wac-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      wac-config
    Optional:  false
[...]

QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

The following command can also be executed to show the logs of the selected pod:

$ kubectl logs <name-of-the-pod> --tail 50 -f

An example of the output for this command is as follows:

Thu Jan 02 2020 23:57:02 GMT+0000 (UTC) [761cf75f-4ddc-40dc-beea-ffc8bc23c08e~hr4VcVoYQIEToJSBAAIf~5e0e625ecc8296cb78101f79~5dc5775c5b8a2934b2e39704#node@sippo-server-6b7c6f7684-xxqwh<1>:/wac/lib/core/io/wapi/Wapi.js] debug: onWAPIMessage 877551, /sessions/5e0e625ecc8296cb78101f79, PUT
Thu Jan 02 2020 23:57:02 GMT+0000 (UTC) [node@sippo-server-6b7c6f7684-lsxp5<1>:/wac/lib/core/Sessions.js] debug: 100.96.4.22 5dc5775c5b8a2934b2e39704,  update session 5e0e625ecc8296cb78101f79
Thu Jan 02 2020 23:57:02 GMT+0000 (UTC) [761cf75f-4ddc-40dc-beea-ffc8bc23c08e~hr4VcVoYQIEToJSBAAIf~5e0e625ecc8296cb78101f79~5dc5775c5b8a2934b2e39704#node@sippo-server-6b7c6f7684-xxqwh<1>:/wac/lib/core/io/wapi/Wapi.js] debug: onWAPIMessageResponse 877551, /sessions/5e0e625ecc8296cb78101f79, PUT
Thu Jan 02 2020 23:57:13 GMT+0000 (UTC) [node@sippo-server-6b7c6f7684-lsxp5<1>:/wac/lib/core/io/wapi/Wapi.js] debug: client disconnection with session id 5e0e625ecc8296cb78101f79
Fri Jan 03 2020 00:00:49 GMT+0000 (UTC) [node@sippo-server-6b7c6f7684-lsxp5<1>:/wac/lib/core/Sessions.js] debug: Remove session due expiration, [ '5e0e625ecc8296cb78101f79' ]
Fri Jan 03 2020 00:00:49 GMT+0000 (UTC) [node@sippo-server-6b7c6f7684-lsxp5<1>:/wac/lib/services/UserSettings.js] debug: user4@acme.com disconnected