
Docker is cool

Writing the C++ code for the EVE launcher is but the first step in getting the launcher to run on EVE players' machines. The code is compiled, and the resulting executable and all its dependencies are staged in a folder. This folder is then used to create an installer, and is also packaged for download by the launcher's auto-update mechanism. We use TeamCity to run these builds of the launcher from source, triggered by check-ins.

Build agents

The machine used to build the launcher has to have the correct software set up - the right version of Qt, the compiler and the installer framework. For Windows, this meant manually setting up a virtual machine by running the Visual Studio installer, followed by the Qt installer and so forth. The Mac setup was similar, except it wasn't a VM but a Mac Mini sitting on my desk.

The EVE launcher is a small project and a full build takes about 6 minutes (including unit tests, compilation of the launcher and updater, creating the installer and uploading to Amazon), so there really is no need for multiple build agents. It is still important to be able to recreate a build agent easily - computers can fail - so I made sure to carefully document all the steps required to set up a build agent on our internal wiki. For larger projects, where you might have many programmers checking code in to source control and builds might take a lot longer, you would likely want multiple build agents to keep up with the check-ins. In that case, a manual setup quickly becomes a big issue.

Automating the setup

With this in mind I wanted to try using Docker when setting up the build environment for a Linux version of the launcher. While CCP doesn't officially support Linux as a platform for EVE, we know that there are players out there running under Wine, and we do now offer Wine support through the launcher on the Mac. Getting the same setup to work on Linux didn't require that much effort, and I've been poking at that on the side.

A while ago I got the launcher running on my Ubuntu box, and lately I have been setting up the automated builds. With Docker, I could set up the build agent with a script - the Dockerfile. This file specifies the base image to use - essentially, which Linux distribution to build on - and then lists commands that are run to set up the image. In my case I have two RUN commands: one with a long list of packages to install via apt-get install, followed by a curl command to fetch the AWS command line tools.
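
A minimal sketch of what such a Dockerfile might look like - the base image, the package list and the install paths here are illustrative stand-ins rather than our actual setup, and the curl command fetches the AWS CLI bundled installer:

FROM ubuntu:16.04

# Build tools plus libraries the build needs - an illustrative list, not the real one
RUN apt-get update && apt-get install -y \
    build-essential \
    git \
    curl \
    unzip \
    python \
    libgl1-mesa-dev

# Fetch and install the AWS command line tools for uploading build artifacts
RUN curl -o /tmp/awscli-bundle.zip https://s3.amazonaws.com/aws-cli/awscli-bundle.zip \
    && unzip /tmp/awscli-bundle.zip -d /tmp \
    && /tmp/awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws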

Not quite

The only caveat is that the Qt installer does not seem to be able to do an unattended installation without showing any UI. It is possible to pass a script to the installer that automates the installation, but it still insists on opening its UI, and this fails when building the Docker image. My workaround was to install Qt manually on my machine, make a tarball of the Qt folder, and use the ADD command to inject that into the image.
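
In the Dockerfile that boils down to a single line - the tarball name and destination path here are just placeholders, not the actual ones; conveniently, ADD unpacks a local tar archive into the image automatically:

# Qt installed manually on a workstation, then tarred up from the install folder;
# ADD extracts a local archive into the destination directory on its own
ADD qt-5.9-linux.tar.gz /opt/qt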

I set up a second Dockerfile for the environment to build Wine. In that case I could fully automate the setup, as all the dependencies for Wine are available as packages that can be installed via apt-get install.
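
That Dockerfile can be quite short - a sketch, assuming an Ubuntu base image where the deb-src entries have to be enabled before apt-get build-dep can pull in Wine's build dependencies:

FROM ubuntu:16.04

# build-dep needs source package entries, which the stock image ships commented out
RUN sed -i 's/^# deb-src/deb-src/' /etc/apt/sources.list \
    && apt-get update \
    && apt-get build-dep -y wine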

Versioning

Dockerfiles are small text files, so they can easily be versioned in the same source control system as the build scripts. This makes it easier to manage changing dependencies, and if you ever need to rebuild an older version you also get a definition of the build environment for that version.

The perfect pair

Using Docker with TeamCity also means that you can have generic build agents even if you are running all sorts of different jobs with different requirements. The build agent itself doesn't need to have exactly the right software installed, with the right versions of everything - it just needs to be able to run Docker commands. The build job then uses docker run with the appropriate Docker image - an image that has been built with the dependencies that particular job needs. This also means that it is easy to have many agents running, even if the jobs have very specific requirements.
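
A build step can then be as simple as something like this - the image and script names are hypothetical, but mounting the agent's checkout directory into the container is the typical pattern:

# Mount the checkout into the container and run the build script inside it
docker run --rm \
    -v "$PWD":/build \
    -w /build \
    eve-launcher-build:latest \
    ./build_launcher.sh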

I feel I've just scratched the surface here - I've got a fairly simple setup that does the job I need it to, but I'm sure I'll find an excuse to play around with this some more.
