
Docker is cool

Writing the C++ code for the EVE launcher is but the first step in getting the launcher to run on players' machines. The code is compiled, and the resulting executable and all its dependencies are staged in a folder. That folder is then used to create an installer, and is also packaged for download by the launcher's auto-update mechanism. We use TeamCity to run these builds from source, triggered by check-ins.

Build agents

The machine used to build the launcher has to have the correct software set up - the correct version of Qt, the compiler and the installer framework. For Windows, this meant manually setting up a virtual machine by running the Visual Studio installer, followed by the Qt installer and so forth. The Mac was similar, except it wasn't a VM but a Mac Mini sitting on my desk.

The EVE launcher is a small project and the build process takes about 6 minutes (including unit tests, compilation of the launcher and updater, creating the installer and uploading to Amazon), so there really is no need for multiple build agents. It is still important to be able to recreate a build agent easily - computers can fail - so I made sure to carefully document all the steps required to set up a build agent on our internal wiki. For larger projects, where you might have many programmers checking code in to source control and builds might take a lot longer, you would likely want multiple build agents to keep up with the check-ins. In that case, a manual setup quickly becomes a big issue.

Automating the setup

With this in mind I wanted to try using Docker when setting up the build environment for a Linux version of the launcher. While CCP doesn't officially support Linux as a platform for EVE, we know that there are players out there running under Wine, and we do now offer Wine support through the launcher on the Mac. Getting the same setup to work on Linux didn't require that much effort, and I've been poking at that on the side.

A while ago I got the launcher running on my Ubuntu box, and lately I have been setting up the automated builds. With Docker, I could set up the build agent with a script - the Dockerfile. This file specifies the base image to use - essentially which Linux distribution to build on - and then lists commands that are run to set up the image. In my case I have two RUN commands: one with a long list of packages to install via apt-get install, followed by a curl command to fetch the AWS command line tools.
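A minimal sketch of what such a Dockerfile might look like - the base image, package list and AWS CLI install path here are illustrative, not the exact ones I use:

```dockerfile
# Base image: which Linux distribution the build agent runs on
FROM ubuntu:16.04

# First RUN: install the build toolchain and launcher dependencies
# (the package list stands in for the real, much longer one)
RUN apt-get update && apt-get install -y \
    build-essential \
    git \
    curl \
    unzip

# Second RUN: fetch and install the AWS command line tools,
# used to upload build artifacts to Amazon
RUN curl -o /tmp/awscli-bundle.zip \
        https://s3.amazonaws.com/aws-cli/awscli-bundle.zip && \
    unzip /tmp/awscli-bundle.zip -d /tmp && \
    /tmp/awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
```

Building the image from this file (docker build) then gives a reproducible build environment that any machine with Docker can run.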

Not quite

The only caveat is that the Qt installer does not seem to be able to do an unattended installation without showing any UI. It is possible to pass a script to the installer that automates the installation, but it still insists on opening the UI, and this fails when building the Docker image. My workaround was to manually install Qt on my machine, make a tarball from the Qt folder and use the ADD command to inject that into the image.
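That workaround is a single line in the Dockerfile - the tarball name and target path here are hypothetical:

```dockerfile
# Qt installed by hand on a workstation, then tarred up; ADD with a
# local tar archive automatically extracts it into the image
ADD qt-5.9-linux.tar.gz /opt/qt
```

One convenient detail is that ADD (unlike COPY) auto-extracts local tar archives, so no separate extraction step is needed in the image.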

I set up a second Dockerfile for the environment to build Wine. In that case I could fully automate the setup as all the dependencies for Wine are available as packages that can be installed via apt-get install.
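The Wine Dockerfile is simpler since there is no installer to fight with - roughly along these lines, with an illustrative subset of the package list:

```dockerfile
FROM ubuntu:16.04

# Wine builds both 32-bit and 64-bit, so enable the i386 architecture
# before pulling in the build dependencies (names are illustrative)
RUN dpkg --add-architecture i386 && \
    apt-get update && \
    apt-get install -y build-essential flex bison \
        libx11-dev libfreetype6-dev
```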

Versioning

Dockerfiles are small text files so they can easily be versioned in the same source control system as the build scripts. This makes it easier to manage changing dependencies, and if you ever need to rebuild older versions you get a definition of the build environment for that older version.

The perfect pair

Using Docker with TeamCity also means that you can have generic build agents even if you are running all sorts of different jobs with different requirements. The build agent itself doesn't need to have exactly the right software installed, with the right versions of everything - the build agent just needs to be able to run docker commands. The build job then uses docker run with the appropriate docker image - the image has been built with the appropriate dependencies for the job. This also implies that it is easy to have many agents running, even if you have very specific requirements for running the jobs.
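Concretely, a TeamCity build step on such a generic agent can be little more than a docker run invocation - the image name, mount point and build script below are hypothetical:

```shell
# Run the build inside the container; mount the checkout directory so
# the container reads sources from, and writes artifacts back to, the host
docker run --rm \
    -v "$(pwd)":/build \
    -w /build \
    launcher-build-env:latest \
    ./build_launcher.sh
```

The agent only needs Docker installed; everything job-specific lives in the image.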

I feel I've just scratched the surface here - I've got a fairly simple setup that does the job I need it to, but I'm sure I'll find an excuse to play around with this some more.
