JumperBot

In a previous blog I described a simple echo bot that echoes back anything you say to it. This time I will talk about a bot that generates traffic for the chat server, which can be used for load-testing both the server and any chat clients connected to it.
I've dubbed it JumperBot - it jumps between chat rooms, saying a few random phrases in each room before moving on to the next one. This bot builds on the same framework as the EchoBot - refer to the previous blog if you are interested in the details. The source lives on GitHub: https://github.com/snorristurluson/xmpp-chatbot

Configure the server

In an earlier blog I described setting up Prosody as the chat server to run against. Before we can connect bots to the server, we have to make sure they can log in, either by creating accounts for them:
prosodyctl register jumperbot_0 localhost jumperbot
prosodyctl register jumperbot_1 localhost jumperbot
...
or by setting the authentication up so that anyone can log in.
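Registering the accounts one by one gets tedious when running many bots, so a small helper script can do it in a loop. This is just a convenience sketch, assuming prosodyctl is on the path and that all bots share the jumperbot password:

import subprocess

def register_bots(num_bots, host="localhost", password="jumperbot"):
    # Register jumperbot_0 .. jumperbot_<num_bots-1> by shelling out to prosodyctl
    for i in range(num_bots):
        botname = "jumperbot_{0}".format(i)
        subprocess.run(
            ["prosodyctl", "register", botname, host, password],
            check=True
        )

if __name__ == "__main__":
    register_bots(10)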
We also need to enable multi-user chat (MUC) - do this by adding the following to the Prosody configuration file (near the bottom of the file, in the Components section):
Component "conference.localhost" "muc"

JumperBot

The JumperBot is similar to the EchoBot, but rather than handling incoming messages it sets up an asyncio task - this bot is proactive where the echo bot was reactive. The task is created when the connection is made:
    self.task = asyncio.get_event_loop().create_task(self.run())
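For context, this line lives in the protocol's connection_made callback - roughly like this, with the XMPP stream setup and authentication handled by the shared framework and elided here:

    def connection_made(self, transport):
        # asyncio calls this once the TCP connection to the server is up
        self.transport = transport
        # ... the framework opens the XMPP stream and authenticates here ...
        self.task = asyncio.get_event_loop().create_task(self.run())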
The run method looks like this:
    async def run(self):
        while True:
            self.join_random_room()
            n = random.randint(5, 10)
            for i in range(n):
                if self.transport.is_closing():
                    return
                self.say_random_phrase()
                try:
                    await asyncio.sleep(random.random() * 10.0 + 5.0)
                except asyncio.CancelledError:
                    return
This implements the bot behavior described above - join a random room, say a few random phrases, then repeat the process in the next room. The asyncio.sleep call is very important - it yields control so that other tasks can run concurrently with this loop.
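The join_random_room and say_random_phrase helpers aren't shown here - see the GitHub source for the real implementation. Conceptually they just pick a random target and send the appropriate XMPP stanza. A minimal sketch, where the send helper, the PHRASES list and the attribute names are assumptions:

    def join_random_room(self):
        # Pick one of the bot_room_N rooms and join it by sending a MUC presence
        self.room = "bot_room_{0}".format(random.randrange(self.num_rooms))
        presence = '<presence to="{0}@conference.{1}/{2}"/>'.format(
            self.room, self.host_name, self.username)
        self.send(presence)

    def say_random_phrase(self):
        # Send a random phrase to the current room as a groupchat message
        phrase = random.choice(PHRASES)
        message = '<message to="{0}@conference.{1}" type="groupchat"><body>{2}</body></message>'.format(
            self.room, self.host_name, phrase)
        self.send(message)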

BotManager

Running a single JumperBot doesn't generate much traffic, and rather than running multiple processes, let's make use of asyncio and create multiple bots as tasks within a single process. For that, we set up a BotManager to create and monitor the bots:
    manager = BotManager()

    manager.create_bots(args)
    loop = asyncio.get_event_loop()

    loop.create_task(manager.monitor_status(args.monitor))
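For completeness, here is a sketch of the surrounding entry point. The flag names and defaults are assumptions, based on how args is used in the snippets in this post:

import argparse
import asyncio

def main():
    parser = argparse.ArgumentParser(description="Run a swarm of JumperBots")
    parser.add_argument("--num-bots", type=int, default=10)
    parser.add_argument("--num-rooms", type=int, default=10)
    parser.add_argument("--host-name", default="localhost")
    parser.add_argument("--server-name", default="localhost")
    parser.add_argument("--monitor", action="store_true")
    parser.add_argument("--listener", action="store_true")
    parser.add_argument("--verbose", action="store_true")
    args = parser.parse_args()

    manager = BotManager()
    manager.create_bots(args)

    loop = asyncio.get_event_loop()
    loop.create_task(manager.monitor_status(args.monitor))
    # monitor_status stops the loop once all bots are gone
    loop.run_forever()

if __name__ == "__main__":
    main()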
The bot manager looks like this:
class BotManager(object):
    def __init__(self):
        self.bots_running = {}
        self.bots_logged_in = {}
        self.args = None

    def create_bot(self, botname, args):
        bot = JumperBot(self, args.host_name, botname, "jumperbot",
                        args.num_rooms)
        if args.listener:
            bot.listener = True

        self.connect_bot(bot, args)

        return bot

    def connect_bot(self, bot, args):
        loop = asyncio.get_event_loop()
        handler = loop.create_connection(lambda: bot, args.server_name, 5222)
        loop.create_task(handler)

    def create_bots(self, args):
        self.args = args
        for i in range(args.num_bots):
            botname = "jumperbot_{0}".format(i)
            bot = self.create_bot(botname, args)
            self.bots_running[bot.username] = bot
Each bot is a protocol instance associated with its connection, and gets a data_received callback whenever something is received from the server. The bot also runs its own task for initiating its chattiness, as described above.
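How a bot ends up in bots_logged_in isn't shown here, but the idea is that the bot reports back to its manager once authentication has succeeded. A rough sketch - the stanza parsing and the on_logged_in hook are assumptions, the real code is in the GitHub repository:

    def data_received(self, data):
        # asyncio calls this whenever bytes arrive on the connection;
        # the shared framework parses the XMPP stanzas and calls back
        # into the bot, for example when authentication completes
        self.parse_incoming(data)

    def on_logged_in(self):
        # Tell the manager this bot is now logged in
        self.manager.bots_logged_in[self.username] = self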
There is one more task - the monitor:
    async def monitor_status(self, display_stats):
        blinkers = [" ", ".", ":", "."]
        blinker_index = 0
        template = "{2} bots running, {3} logged in {4}"
        while True:
            await asyncio.sleep(1)
            if display_stats:
                print(template.format(
                    len(self.bots_running),
                    len(self.bots_logged_in),
                    blinkers[blinker_index]),
                    end="\r"
                )
            blinker_index += 1
            blinker_index %= len(blinkers)

            if len(self.bots_running) == 0:
                asyncio.get_event_loop().stop()
                return
If no bots are running, the event loop is stopped.
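For that stop condition to ever trigger, something has to remove bots from bots_running. That happens when a bot's connection goes away - a minimal sketch, assuming the bot keeps a reference to its manager:

    def connection_lost(self, exc):
        # asyncio calls this when the connection is closed or drops
        if self.task:
            self.task.cancel()
        self.manager.bots_running.pop(self.username, None)
        self.manager.bots_logged_in.pop(self.username, None)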

Trying it out

The bots do their chatter in rooms named bot_room_0 through bot_room_9. Connect to the server with Swift (or your favorite chat client) and join one or more of those rooms to listen in. You can also run the jumperbot with a --verbose flag to see all the XMPP traffic in the log.
