Ansible configuration

Once the Ansible package is installed on the control node (the machine used to run the installation), the next step is to download the Quobis wac installer.

Getting the Quobis wac installer

The Quobis wac installer can be obtained in two different ways:

  • If you have access to a Quobis Gitlab account, you can clone the repository with the following command, which creates a directory named “k8s-installer” containing the installer files:

    $ git clone --depth 1 --branch 2.35.0  https://gitlab.com/quobis/devops/k8s-installer.git  # use this branch for v5.0.0
    
  • Otherwise, please contact your support team and you’ll get a ZIP file containing the same source code and version.

Once you have the files on your computer, this is the content you’ll see inside the “k8s-installer” directory. The “ansible” folder contains the hosts inventory file, the playbooks and the vault file. The “kubernetes” folder contains the Kubernetes manifests used during the installation.

$ ls -l
-rw-r--r--   1 owner  staff  156583 23 dic 17:40 CHANGELOG.md
-rw-r--r--   1 owner  staff     483 23 dic 17:40 README.md
drwxr-xr-x  14 owner  staff     448 23 dic 17:40 ansible
drwxr-xr-x  61 owner  staff    1952 23 dic 17:40 kubernetes

Setting up the inventory

Now it’s time to set up the inventory that tells Ansible where the managed nodes are located. First, copy the “default” inventory to use as a reference:

$ cp -r k8s-installer/ansible/inventory/default k8s-installer/ansible/inventory/<your_company_name>

Now, empty the vault file to remove the default passwords before adding the new ones (replace <your_company_name> with your inventory name):

$ cat /dev/null > k8s-installer/ansible/inventory/<your_company_name>/group_vars/all/vault.yml

After that, you need to set up the cluster hosts’ IP addresses and credentials so that Ansible can execute the automated tasks. The hosts file is an INI-style inventory where group names are enclosed in brackets []; these groups are referred to as “hosts”. A Kubernetes deployment uses these five types of hosts:

  • [master] The machines where kubectl runs. All cluster management is done from these machines, as they form the control plane of the deployment. Three instances are required when deploying on VMware.

  • [node] Worker nodes, the ones that do the actual work: they handle traffic, run services, etc. At least three worker nodes are required.

  • [nginx] The external entry point, if your system requires one (on-prem only; AWS uses the ELB service).

  • [nfs] Only for on-premises deployments; it handles the distributed storage system (AWS uses the EFS service).

  • [turn] Server where the TURN services will be deployed.

For each host you must indicate:

  • The node’s IP address for the SSH connection (reachable from the Ansible control node)

  • ansible_user: username used by Ansible to log into the remote machine.

  • ansible_ssh_pass: user password used by Ansible to log into the remote machine.

  • ansible_sudo_pass: sudo password needed by the ansible_user to run tasks as administrator on the remote machine.

Note

Use single quotes to enclose values that include spaces, such as passwords or file locations

The hosts file can be edited with any available text editor, such as “vi”:

$ vi k8s-installer/ansible/inventory/<your_company_name>/hosts

This is the default content of the k8s-installer/ansible/inventory/<your_company_name>/hosts file. You need to fill in the appropriate IP address for each virtual machine:

[master]
put_here_master_IP ansible_user=<user> ansible_ssh_pass=<password> ansible_sudo_pass=<password> ansible_ssh_private_key_file=/home/user/.ssh/key.pem ssh_key_file='~/.ssh/id_rsa'

[node]
put_here_node_1_IP ansible_user=<user> ansible_ssh_pass=<password> ansible_sudo_pass=<password>
put_here_node_2_IP ansible_user=<user> ansible_ssh_pass=<password> ansible_sudo_pass=<password>

[nginx]
put_here_nginx_IP ansible_user=<user> ansible_ssh_pass=<password> ansible_sudo_pass=<password>

[nfs]
put_here_nfs_IP ansible_user=<user> ansible_ssh_pass=<password> ansible_sudo_pass=<password>

Installing the certificates

Now, you need to provide the digital certificates for the following services:

Certificates for the ingress controller

We created a folder in the Ansible installer to help you have everything ready before starting. Save the following files in the k8s-installer/ansible/ssl/ folder:

  • certName.crt, certName.key files: the public certificate and private key used on the Nginx public interface, as this is the main entry point.

  • APIcertName.crt, APIcertName.key files: the public certificate and private key used on the Nginx internal interface. Any self-signed certificate will do. The sippo-server service uses it on the wiface1 interface.

Certificates for push notification services

Save the following certificates in the kubernetes/prosody/configmaps/conf/certs.d/ path:

  • apn.cer, apn.crt.pem, apn.key and apn.key.pem files: valid public certificates for the Apple push services.

  • voip_services.cer, voip_services.cer.pem files: valid public certificates for the Apple VoIP services.

Vault management

Passwords are stored encrypted in the group_vars/all/vault.yml vault file. Some of them must be entered in this file manually. The remaining passwords can be added to this file following these steps:

First, create a master password for your environment. We recommend storing that password in a password manager like LastPass or Bitwarden.

Next, create a new file named vault-key in inventory/<inventory name>/. This file must contain your master password.
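The two steps above can be sketched as follows; “acme” is a placeholder inventory name and the password shown is an example only:

```shell
# Sketch only: "acme" and the password below are placeholders.
# Use your own inventory name and the master password from your password manager.
mkdir -p inventory/acme
printf '%s\n' 'example-master-password' > inventory/acme/vault-key
chmod 600 inventory/acme/vault-key   # keep the key file readable only by its owner
```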

Once done, you can create passwords for each service with the next command:

$ ansible-playbook -i inventory/<inventory name> main.yml --tags "passwords-deploy"

Once the passwords-deploy task has finished and your vault file is populated with new random passwords, you can either keep or delete the vault-key file.

Additionally, you need to include the dockerPassword entry with the registry credential provided by Quobis:

$ ansible-vault encrypt_string --vault-id <inventory name>@prompt '<registry-token>' --name 'dockerPassword' >> <inventory name>/group_vars/all/vault.yml
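If the command succeeds, vault.yml ends up with an encrypted entry shaped roughly like the sketch below (the ciphertext shown is a truncated placeholder, not real output):

```yaml
# Hypothetical shape of the appended entry; real ciphertext is much longer.
dockerPassword: !vault |
          $ANSIBLE_VAULT;1.2;AES256;acme
          3038613539653434...
```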

Playbook configuration

Now it’s time to configure the Ansible playbooks. Playbooks are the files where Ansible code is written.

The Quobis wac keeps all the available configuration files in the k8s-installer/ansible/inventory/<your_company_name>/group_vars/all folder.

You need to change the value of some parameters according to your deployment needs. The following tables explain which of these values must be changed and how.

Registry credentials

Docker credentials

Parameter | Value | Comment
registryServer | registry.gitlab.com | Use the GitLab registry
registryNamespace | quobis | Do not change
dockerUsername | (blank) | Chosen username
dockerEmail | (blank) | Chosen email

Deployment options

Configuration parameters

Parameter | Value | Comment
infrastructure | 'onprem' | Type of infrastructure to deploy. Options: 'amazon' or 'onprem'
persistent | true | Select whether your infrastructure will keep persistent data (databases, uploaded files, etc.)
orchestrator | "k8s" | Type of orchestrator used to deploy the cluster. Options: k8s (production) and k3s (lab only)
maxStorageQuota | '60Gi' | Maximum disk space for databases, shared files, the messaging server's list of domains and backups
syncDate | true | True if the containers' time synchronization is done from the host machines
timeZone | /usr/share/zoneinfo/Europe/Madrid | If syncDate is true, the timezone taken from the host machines
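These parameters are plain variables in the group_vars YAML files; a minimal on-prem sketch using the defaults from the table above (values are examples only) could look like:

```yaml
# Example values only; adjust to your deployment.
infrastructure: 'onprem'
persistent: true
orchestrator: "k8s"   # "k3s" is for lab environments only
maxStorageQuota: '60Gi'
syncDate: true
timeZone: /usr/share/zoneinfo/Europe/Madrid
```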

Container versions

This is the list of the version of each container for version 5.0.0:

Please note that the format changes where repeated values are allowed, as with the parameter “taggedMonitoringMachines”: each value name starts with a hyphen (-):

tagged: false

taggedSippoMachines:
    - name: kubernetes-node1
    - name: kubernetes-node2

taggedMonitoringMachines:
    - name: kubernetes-node1

Nginx

This configuration is only required for on-premises deployments:

Parameter | Value | Comment
loadbalancer | true || false | If true, nginx-ingress is deployed as a LoadBalancer
serverName | <server_name> |
nginx_key | <SSL_certificate_name>.key |
nginx_crt | <SSL_certificate_name>.crt |
nginx_sapi_key | <SSL_certificate_name>.key |
nginx_sapi_crt | <SSL_certificate_name>.crt |
ingressListeningPort | 32639 | Port where http_proxy listens internally. No need to expose it externally
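As a sketch, an on-prem ingress configuration could look like the fragment below; wac.example.com is a placeholder, and the certificate names must match the files saved in k8s-installer/ansible/ssl/:

```yaml
# Hypothetical example; file names must match those in ansible/ssl/.
loadbalancer: false
serverName: wac.example.com
nginx_key: certName.key
nginx_crt: certName.crt
nginx_sapi_key: APIcertName.key
nginx_sapi_crt: APIcertName.crt
ingressListeningPort: 32639
```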

WAC

Parameter | Value | Comment
defaultSippoServerAuth | true | If true, the default WAC OAuth authentication is enabled
dev | false | If true, a static DNS server name entry is included in the WAC container
mailHost | 192.158.1.39 | IP address of the email server
mailPort | 25 | Port of the email server
mailFrom | change@me.com | Address that Quobis WAC uses to send emails
meetingLinkHost | https://<server_name> |
meetingPath | m | Meetings URL path; defaults to https://servername/m/meeting-id
pstnExtension | (blank) | PSTN-reachable dial-in number to join a Quobis WAC meeting, in E.164 format
pushActivated | false | If true, push notifications in the mobile apps are activated
startUpTimeout | 15 | WAC startup timeout in seconds
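A sketch of these WAC values, assuming a mail relay on the local network (all addresses are placeholders):

```yaml
# Example values only.
defaultSippoServerAuth: true
dev: false
mailHost: 192.158.1.39    # placeholder mail server IP
mailPort: 25
mailFrom: change@me.com
meetingPath: m            # meetings served at https://<server_name>/m/<meeting-id>
pushActivated: false
startUpTimeout: 15
```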

Quobis collaborator

The following parameters are only required when installing Quobis collaborator. If you deploy the system without Quobis collaborator, you can leave them blank.

Parameter | Value | Comment
customWebphoneStaticsSounds | false | Custom sounds
customWebphoneStaticsI18n | false | Custom icons
customWebphoneStaticsFonts | false | Custom fonts
customWebphoneStaticsImages | false | Custom images
showContentInfo | true | Show instructions to the end user on how to use Quobis collaborator
webphoneDefaultSources | wac | Select the default contact sources
webphoneDocumentTitle | <name of your system> | Select the HTML title of the document
webphoneDefaultDomain | quobis | Enforces a domain in the login process. Cannot be left blank, as it is also used for anonymous users
webphoneOauthRedirectUri | "{{ serverName }}/o/adfs/callback" | URL to which the OAuth login is redirected when using oauth2proxy

QSS

The following table lists the configuration parameters of the signaling server (QSS):

Parameter | Value | Comment
resolveGroups | false |
legacyUserGroupResolution | false |
onlyOneDeviceWithInvitations | false |
activateParallelSip | false | When true, parallel SIP routing is activated
voiceMailExtension | 999 | Voicemail extension number
setOwner | false |
closeRoomWhenOwnerLeaves | false |
updateRoomOnwerWhenLeaves | false |
notInACall | false |
userIdAsRoomId | false |
allowInvitesWhileInCall | true | When true, incoming calls can be retrieved while on a call
participantLimit | 10 | Maximum number of participants in a single call
logChannelRepository | memory |

Keycloak

Parameter | Value | Comment
keycloakDatabaseUsername | keycloack | Set the database user
keycloakDatabaseName | keycloak | Set the database name

Kitter

Parameter | Value | Comment
realm | quobis |
welcomeEmail | true | When true, an email is sent to the newly created user. Set to false to avoid this behaviour
updateUserDomain | true |
phoneNumberRequired | true |

XMPP server

The XMPP server can either use internal storage or an external database (recommended). If xmppDatabase is set to true, PostgreSQL is used and a valid version must also be entered.

Parameter | Value | Comment
maxHistoryMessages | 50 |
mucLogExpiresAfter | never |
xmppDatabaseVersion | 12.1-alpine | PostgreSQL database version. Does not need to be changed
xmppDatabase | true |
xmppDatabaseUsername | prosody_user | Username of the XMPP database. Does not need to be changed unless changes are made in the XMPP server
xmppDatabaseName | prosody | Name of the XMPP database. Does not need to be changed unless changes are made in the XMPP server
xmppTLS | false | Enable XMPP messaging over TLS
xmppPort | 5222 |
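Using the defaults above, an external-database XMPP configuration can be sketched as:

```yaml
# Defaults from the table above, shown together for reference.
xmppDatabase: true              # use PostgreSQL instead of internal storage
xmppDatabaseVersion: 12.1-alpine
xmppDatabaseUsername: prosody_user
xmppDatabaseName: prosody
xmppTLS: false
xmppPort: 5222
maxHistoryMessages: 50
mucLogExpiresAfter: never
```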

Ingress

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. It includes two network-protection features: anti-DoS protection and CORS.

Parameter | Value | Comment
exposeNginxIngress | false |
assigToNode | true | Deploy nginx-ingress in nodeNginxIngress
nodeNginxIngress | ops-ci-node2 |
commonName | '*.quobis.com' | Must be enclosed in quotes
wildCard | true | If true, commonName must have the format of a wildcard certificate
ddosProtection | false | When true, DDoS protection is activated
rps | 5 | If DDoS protection is activated, maximum requests per second allowed to all endpoints
rpsHigh | 20 | If DDoS protection is activated, maximum requests per second allowed to Quobis collaborator endpoints
enableTeal | false | Teal blue/green deployment
cookieName | canary |
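For example, enabling rate limiting with a wildcard certificate could look like this sketch:

```yaml
# Example only; note the quotes around the wildcard commonName.
commonName: '*.quobis.com'
wildCard: true
ddosProtection: true
rps: 5        # requests/second allowed to all endpoints
rpsHigh: 20   # requests/second allowed to Quobis collaborator endpoints
```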

Certificates

These parameters are required only when using Let’s Encrypt (https://www.letsencrypt.org/). Let’s Encrypt is a free, automated, and open certificate authority (CA) run for the public’s benefit by the Internet Security Research Group (ISRG).

Parameter | Value | Comment
letsEncript | false |
awsRegionLetsEcript | <amazon_region> |
provider | (blank) |
acmeDomains | (blank) |
letsEncriptEmail | (blank) | Email to be used for the Let’s Encrypt account

Media server

Parameter | Value | Comment
codec | "vp8" |
sfuWrapper: labelWrapper | sfu1 | Label for the wrapper; at least one is needed. Defaults to "sfu1"
sfuWrapper: ipJanus | 10.1.21.88 | IP of the Janus SFU
sfuWrapper: name | audiomixer |
sfuWrapper: node | ms1 |
asteriskVersion | 18.2.0 | Version of the Asterisk service used as audiomixer
confbridgeTimeout | 28000 |
janusVersion | 0.9.5 |
mediaServersInK8s | false | Deploy Asterisk, Janus and Kamailio in the K8s cluster
webRtcIp | 20000 | Janus public IP exposed for the media stream
webRtcStartPort | 20499 | Janus start port exposed for the media stream
webRtcEndPort | 21000 | Janus end port exposed for the media stream
taggedMediaMachines.name | kubernetes-janus |
taggedMediaMachines.tag | ms1 |
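The sfuWrapper parameters above form a list with one entry per wrapper, in the same style as the sipProxy example in the SIP integration section; assuming that structure, a single-wrapper sketch would be:

```yaml
# Assumed structure; at least one wrapper entry is needed.
sfuWrapper:
  - labelWrapper: sfu1
    ipJanus: 10.1.21.88
    name: audiomixer
    node: ms1
```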

SIP integration

Parameter | Value | Comment
announceOnlyUser | false | Play an announcement if the user is alone in the conference room
rtpIp | 10.1.21.93 | Audiomixer external media IP
rtpStartPort | 10000 | Audiomixer start port of the RTP port range
rtpEndPort | 10249 | Audiomixer end port of the RTP port range
kamailioVersion | 5.4.4 | Kamailio and registry image version
sip-proxy.name | sip-proxy-1 | Kamailio deployment pod name
sip-proxy.node | ms1 | Kamailio deployment node
sip-proxy.sipIp | 10.1.21.93 | Kamailio deployment IP
sip-proxy.sipIpPrivate | 10.1.21.93 | Kamailio deployment private SIP IP
sipIp | 10.1.21.93 | Kamailio external SIP IP
sipPort | 5060 | Kamailio external SIP port
pbxIp | 10.1.3.2 | External endpoint SIP IP
pbxPort | 10.1.3.2 | Customer endpoint SIP port
pbxDomain | (blank) | Domain used in the R-URI, From and To headers of requests
sipRegistration | false | Send REGISTER and authenticated REGISTER requests from Kamailio to the PBX
sipAuthentication | false | Authenticate INVITE and REFER from the external PBX to Kamailio
MWI | false | Message Waiting Indicator
multiLocation | false |
location.url | (blank) |
location.group | (blank) |

An example of a High-Availability SIP proxy configuration follows:

sipProxy:
  - name: sip-proxy-1
    node: ms1
    sipIp: 10.1.21.93
    sipIpPrivate: 10.1.21.93
  - name: sip-proxy-2
    node: ms1
    sipIp: 10.1.21.96
    sipIpPrivate: 10.1.21.96

TURN server

The TURN server is installed as a service, not in the Kubernetes cluster.

Parameter | Value | Comment
turnServerEnabled | true | When true, the TURN server is installed
turnDomain | (blank) | TURN server public domain
turnInternalIP | (blank) | Internal media server IP
turnPublicIP | (blank) | TURN server public media IP
turnPort | 443 | Port used by the TURN server
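A sketch with placeholder addresses (turn.example.com and the IPs are examples, not defaults):

```yaml
# Placeholder values; the TURN server runs as a host service, outside the cluster.
turnServerEnabled: true
turnDomain: turn.example.com
turnInternalIP: 10.1.21.50
turnPublicIP: 203.0.113.10
turnPort: 443
```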

Recording

Parameter | Value | Comment
enableRecording | false |
enableEncryption | false | The publickey.acs file must be included in kubernetes/recording
encryptionMail | (blank) |
recordingVersion | 3.5.2 |
recordingType | none | Must be one of [none, video, audio, all], as explained in the recording section

Kubernetes API

Parameter | Value | Comment
kapiVersion | 1.6.2 |
enableKapiui | true |
extended_onhold | false |
seconds_to_bye | 3 |

Teal service

Please note that the password for this configuration must be defined in the vault, not here.

Parameter | Value | Comment
tealVersion | 1.2.0 |
tealDatabaseName | teal_prod |
tealDatabaseUsername | teal |

Storage

The first two parameters apply to on-prem installations only; the remaining ones apply to the AWS configuration only.

Parameter | Value | Comment
nfsServer.ip | 10.1.21.27 |
nfsPath | "/home/nfs" |
efsServer.efsFileSystemId | fs-XXXXXXXXXXXXXXX |
efsServer.awsRegion | eu-west-3 |
efsProvisionerVersion | v2.4.0 |
efsPath | "/home/nfs" |

Database

Parameter | Value | Comment
mongoAtlas | false | If true, the MongoDB Atlas service (https://www.mongodb.com/es/cloud/atlas) is used instead of a local one
enableMongoOperator | true | If true, the database is deployed inside the cluster
replica | true | For multiple instances of the database deployment, set replica: true and use a comma-separated array in the databaseUrl parameter
databaseVersion | 5.0.0 | Mongo database version (does not apply to Mongo Atlas)
wacDatabaseName | wacDev | Set the WAC database name
qssDatabaseName | qss | Set the QSS database name
dispatcherDatabaseName | dispatcher | Set the Dispatcher database name
recordingDatabaseName | recording | Set the Recording database name
irsDatabaseName | irs | Set the IRS database name
mongoAtlasUser | <your_Mongo_Atlas_username> | Set it when using a Mongo Atlas database
mongoAtlasDomain | <your_Mongo_Atlas_domain> | Set it when using a Mongo Atlas database
mongoOpsUser | <your_Mongo_operator_username> | Set it when using a Mongo operator
mongoOpsDomain | <your_Mongo_operator_domain> | Set it when using a Mongo operator
noDnsResolution | false | Set to true when the database is hosted on external servers whose hostnames do not resolve via DNS; the entries are then added to each container
databaseUrl | "quobis:{{ mongoPassword }}@database-mongo-0.database-mongo-svc.{{ namespace }}.svc.cluster.local" |
databases.hostname | (blank) | Hostname of the Mongo database (e.g. mongo1.internal.quobis.com)
databases.ip | (blank) | IP address of the Mongo database
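When noDnsResolution is set to true, the databases entries supply the hostname-to-IP mappings added to each container; assuming that structure, a sketch with placeholder values:

```yaml
# Hypothetical values for a database hosted on external servers.
noDnsResolution: true
databases:
  - hostname: mongo1.internal.quobis.com
    ip: 10.0.0.5
```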

Cluster maintainer

Parameter | Value | Comment
maintainerVersion | 1.2.9-kubernetes1.15.3 |
deleteConferences | true | Delete conferences with state "finished" that are older than callHistoryDays days
callHistoryDays | 1 | Used to delete old conferences (see row above)
timeToPurgeChats | 6 | Purge Prosody chats older than {{ timeToPurgeChats }} months
timeToPurgeAnonymousUser | 8 | Purge anonymous users older than {{ timeToPurgeAnonymousUser }} hours
timeToKeepChannels | 8 | Delete channels older than X hours
timeToRemoveInvites | 8 | Delete invites older than X hours
timeToRemoveConferneceState | 8 | Delete conferenceStates older than X hours
deleteResolveSubscription | true | Delete the resolveSubscription collection
deletePresenceSubscription | true | Delete the presenceSubscription collection
deletePushSession | false | Delete push sessions
timeToKeepPushSession | 8 | Delete push sessions older than X days
scheduleCron | 0 * * * * | Must be enclosed in quotes
enableBackup | true |
enableDisasterRecovery | false |
backupFolder | (blank) | Name of the backup folder available in the NFS or EFS storage
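For instance, to run the maintainer hourly and purge finished conferences after one day (a sketch of the defaults above; note the quoted cron expression):

```yaml
scheduleCron: "0 * * * *"   # hourly; must be enclosed in quotes
deleteConferences: true
callHistoryDays: 1
```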

Quobis manager

Parameter | Value | Comment
grafanaHost | (blank) |
grafanaUser | (blank) |
grafanaVersion | 8.2.7 |
lokiVersion | 1.4.0 |
prometheusVersion | v2.22.2 |
managerVersion | 1.6.1 |
managerUsername | qa@quobis | It is advisable to create a separate user (admin role) for Quobis Manager to avoid sharing the sippo-server admin credentials
sippoExporter | 1.6.0 |
support_sql_email | (blank) |
DBDROPDAYS | 7 | Homer database rotation period
hepServerIp | (blank) | HEP host IP
hepNode | (blank) | HEP host name