Building the SDK

This document describes how the C++ SDK build system works and what you need to do in order to build the C++ SDK. The currently supported compilers are GCC 4.9 and Clang 7 (which falls back to GCC for linking); newer versions of these compilers should also work fine. The supported targets are x86_64-pc-linux-gnu and arm-linux-gnueabihf-gnu.

Getting the SDK source

The latest version of the SDK source code can be downloaded here

Why use CMake?

CMake is the “de facto” standard build system for C++ projects. Originally this project was part of a Buildroot build and just used plain Makefiles. The current build system implementation is completely agnostic of any external build system. We decided to migrate to CMake for the following reasons: support for adding tests, support for adding third-party libraries to the build, integration with other CMake projects, etc. If you have some experience with modern CMake you will easily understand and handle this project; there is nothing special about it apart from an extensive use of ExternalProject.

The C++ SDK does a self-contained build using CMake; the only dependency required to build the system is a complete compiler toolchain. All the dependencies required to build the project, such as WebRTC, are gathered and compiled using CMake’s ExternalProject. The project can be built for ARM and x86_64 using GCC or Clang. The compiler used must support C++11.

At the time of writing, GCC 4.9 is used for ARM compilation, which is quite dated (first released in April 2014). Although updating the toolchain can be quite tempting, be sure to keep compatible standard C and C++ library implementations: the currently used ARM toolchain is the same one used to build the client’s rootfs, and just swapping it for a newer one can break the system.
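
If you do consider an update, a quick sanity check of what the cross toolchain ships can help. The arm-linux-gnueabihf- binary prefix below is the usual one and may differ in your toolchain:

arm-linux-gnueabihf-gcc --version                      # toolchain GCC version
arm-linux-gnueabihf-gcc -print-file-name=libstdc++.so  # C++ standard library it links against
arm-linux-gnueabihf-gcc -print-file-name=libc.so       # C library it links against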

Docker containers are also available to provide a controlled environment for building the system, and their use is highly recommended. More information on how to use them, as well as detailed instructions for every command needed to build the system, can be found in the Sippo C++ repository documentation inside the “docs” folder.

Using Docker

Docker images and scripts are provided within this project to help developers build the system. This gives us a fixed build environment (where we have absolute control over system dependencies) and a common environment for all developers.

Building the container

Before compiling anything, the container that will take care of the compilation must be built. To do that, use the helper script docker/build.sh, giving it an image name:

./docker/build.sh gcc49-builder # GCC 4.9 builder
./docker/build.sh linaro-builder # GCC 4.9 ARM builder
./docker/build.sh clang-builder # Clang builder ARM and x86_64

Building clang-builder also builds the other two, as it is a superset of linaro-builder, which in turn is a superset of gcc49-builder.

Getting into the build environment

To start building, we must get a shell inside the image we just created in the step above; the docker/connect.sh script helps with that. To enter an environment, just call the script with the image name:

./docker/connect.sh gcc49-builder # GCC 4.9 builder
./docker/connect.sh linaro-builder # GCC 4.9 ARM builder
./docker/connect.sh clang-builder # Clang builder ARM and x86_64

After it finishes, you should see a prompt like this:

<your_username>@machine:

The build environment mimics your user directory structure: the project path inside the Docker container is the same as on the host machine. This way the host sees the build files as if they had been built on the host by the current user.
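
For example, printing the working directory on the host and inside the container should give the same path (the path below is only illustrative):

pwd   # e.g. /home/<your_username>/sippo-cpp-sdk, both on the host and inside the container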

Build preparation

You need to create a build folder and get into it:

mkdir -p ./build
cd build

Now, from the build folder, we must tell CMake to configure our project. It is important to provide a toolchain file when building for ARM; without one, the build always targets AMD64.

# AMD64 with GCC 4.9 when using gcc49-builder or Clang when clang-builder
cmake ../

# ARM build using GCC 4.9
cmake ../ -DWITH_TOOLCHAIN_FILE=../cmake/cmake-arm-toolchain

# ARM build using Clang and GCC 4.9
cmake ../ -DWITH_TOOLCHAIN_FILE=../cmake/cmake-arm-clang-toolchain

If you reconfigure the project, do not pass the toolchain file option again: the toolchain information is already cached, and passing it again will make your build misbehave.
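
If you do need to switch targets, the safest approach is to start from a clean build folder, for example:

# Back in the source tree root, wipe the cached configuration and start over
cd .. && rm -rf build
mkdir -p build && cd build
cmake ../ -DWITH_TOOLCHAIN_FILE=../cmake/cmake-arm-toolchain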

Building

To build, it is enough to run make:

make # VERBOSE=1 for verbosity

Any change to the project will need a rebuild; for that, just run make again.

Installing

Installing is done by executing make install:

make install

# To install in a different folder use the DESTDIR flag
make DESTDIR=YOUR_INSTALL_FOLDER_PATH install

Linking

This section describes the current linking process. The gist of this approach is to reduce symbol visibility and rely on heavily versioned DSOs.

Changes

  • Uses mostly DSOs (dynamic shared objects) instead of static linking

  • Versions the symbols in the DSO, making sure every project uses the right symbols

  • Limits symbol visibility, reducing noise and making ABI compatibility easier (the typical compiler flags are sketched below)
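
The visibility part of this approach is usually driven by a couple of compiler flags. The line below is only an illustration of those flags, not the project’s exact configuration:

# Hide every symbol by default; only explicitly exported symbols remain visible
cmake ../ -DCMAKE_CXX_FLAGS="-fvisibility=hidden -fvisibility-inlines-hidden"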

Advantages

  • No more symbol duplication, because of symbol versioning

  • Less space usage (no multiple libraries containing the same symbol definitions)

Disadvantages

  • Adds more complexity (although doing this without DSOs, e.g. via symbol prefixing, could be even more complex)

  • Requires installation of more files

  • Requires manual compilation of projects that do not offer compilation as DSOs

Issues

  • WebRTC is still a static binary

Behaviour example

We use “->” to represent dependency relationships: “a -> b” means that a has undefined symbols which are found in b, so a depends on b. “a -> b (visibility symbol)” means that a has a symbol named “symbol” with visibility “visibility”, which can be U (undefined, resolved at dynamic link time) or T (defined in the text section). The following shows a complex linking case (a way to inspect these relationships yourself is sketched after the list):

  • curl -> libssl (U SSL_CTX_new@@openssl.version)

  • sioclient -> libssl (U SSL_CTX_new@@openssl.version)

  • sippo -> sioclient (U SSL_CTX_new@@openssl.version)

  • sippo -> webrtc (T SSL_CTX_new)

  • sippo_e2e_tests -> curl (U SSL_CTX_new@@openssl.version)

  • sippo_e2e_test -> sippo (U SSL_CTX_new@@openssl.version)

  • sippo_e2e_test -> webrtc (T SSL_CTX_new)
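
These relationships can be checked with standard binutils tools; for example (the library and binary names are taken from the list above, and the paths are illustrative):

nm -D --undefined-only sippo_e2e_tests | grep SSL_CTX_new   # shows the U (undefined) side
objdump -T libssl.so | grep SSL_CTX_new                     # shows the exported, versioned symbol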

Symbol versioning

Something that could easily go unnoticed is the symbol versioning that takes place when building this project. Please do not remove it, or it will break the project in unexpected ways.

Every exported symbol in the project’s dependencies is versioned in order to avoid symbol conflicts with system libraries and between dependencies. This can easily happen with the SSL libraries libcrypto and libssl: libwebrtc uses its own SSL implementation called BoringSSL, some libraries in the project depend on an SSL implementation (either LibreSSL or OpenSSL), and we also have SSL libraries on the host system. These libraries share the same symbol names, so it is easy to end up resolving one symbol from one implementation and another from an incompatible one; this does not cause a compilation error but fails at runtime instead. Symbol versioning lets us ensure that every library is linked against exactly what it should be.
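
For reference, this is roughly how symbol versioning is done with the GNU linker. It is a generic sketch, not the project’s actual version script; the openssl.version node name is borrowed from the example above, and ssl.c is a placeholder source file:

# Create a minimal version script that puts every exported symbol under "openssl.version"
cat > openssl.map <<'EOF'
openssl.version {
  global: *;
};
EOF
# Link the DSO with the version script; its exports then appear as SSL_CTX_new@@openssl.version
gcc -shared -fPIC -o libssl.so ssl.c -Wl,--version-script=openssl.map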

You must also be careful when running any executable; most of the time you will need to set a runtime search path or something similar.
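
For example, LD_LIBRARY_PATH can be used to point the dynamic linker at the freshly installed libraries for a single run (the path is illustrative):

LD_LIBRARY_PATH=/path/to/installed/libs ./sippo_e2e_tests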

Handling runtime errors

First of all, try to build the project for your current architecture (usually x86_64) and run it in a controlled environment so you can easily attach your preferred debugger and handle it using your tools. This project also has a rootfs and a virtual machine image ready for testing the application in an environment similar to the one in production. Inside this rootfs a debugger could be attached remotely.
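
A remote-debugging session typically looks roughly like this; the port, target IP and binary path are hypothetical and depend on how the rootfs or VM is set up:

# On the target (inside the rootfs/VM): start the application under gdbserver
gdbserver :2345 /usr/bin/sippo
# On the host: connect with a multi-arch gdb (gdb-multiarch on Debian-based systems)
gdb-multiarch -ex 'target remote <target_ip>:2345' ./sippo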

The SDK has support for building with the Clang sanitizers via the options ASAN_BUILD, TSAN_BUILD, MSAN_BUILD, and UBSAN_BUILD. These have proven very useful when dealing with race conditions and memory management issues. Valgrind with Callgrind has also proved useful when dealing with performance issues.
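
For example, a sanitizer build followed by a Callgrind run could look like this (the option names come from the list above, but treating them as plain CMake booleans is an assumption):

cmake ../ -DASAN_BUILD=ON                      # AddressSanitizer build (boolean value assumed)
make
valgrind --tool=callgrind ./sippo_e2e_tests    # profile a run
callgrind_annotate callgrind.out.*             # inspect the collected profile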