Advanced Tox usage

This article explains the Tox configuration used by several of my Python projects, which both tests the module on multiple versions of Python and runs testing-related tools for style guides, static code checking and coverage reports.

The tox.ini used here is taken from Supermann, and the source can be found here.

Running pytest to find and run tests

py.test is used to discover, write and run tests. It uses tox.ini as a configuration file by default, and the two tools work very well in combination. The first two sections tell Tox to use the default Python 2.6 and 2.7 environments, to install pytest and mock as test dependencies, and to use py.test to run the module's tests.


    [tox]
    envlist=py26,py27

    [testenv]
    deps=
        pytest
        mock
    commands=py.test supermann

    [pytest]
    addopts=-qq --strict --tb=short

Running flake8 to check code

flake8 wraps several Python tools for static checking of code - pep8 for style, and pyflakes for static error checking. This configures it to use tox.ini for its configuration, reducing the number of configuration files in the repo.

    [testenv:flake8]
    deps=flake8
    commands=flake8 --config=tox.ini supermann


Running coverage to show untouched code

Coverage is by far the most verbose tool configured in my tox.ini, but the HTML reports it generates are very useful for working out which parts of my code aren't being run by my tests. This does several new things with tox: it runs two commands (one to generate the data, and another to render an HTML report), and uses the dependencies from the existing testenv configuration.

Coverage itself is told to use tox.ini for its configuration, and is configured to ignore various parts of the codebase (the tests folder, and lines of code that should never run). The coverage files are placed in .tox to keep the repository tidy while working.

    [testenv:coverage]
    deps=
        {[testenv]deps}
        coverage
    commands=
        coverage run --rcfile tox.ini --source supermann -m py.test
        coverage html --rcfile tox.ini


    [report]
    exclude_lines=
        def __repr__
        raise NotImplementedError
        class NullHandler
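For context, those patterns match lines like the following (a hypothetical module, shown only to illustrate which lines coverage will then leave out of the report):

```python
import logging


class NullHandler(logging.Handler):  # matches 'class NullHandler'
    """Fallback no-op logging handler for Python < 2.7."""
    def emit(self, record):
        pass


class Frobnicator(object):
    def __repr__(self):  # matches 'def __repr__'
        return '<Frobnicator>'

    def frobnicate(self):
        # Subclasses must override this; the line below matches
        # 'raise NotImplementedError' and so is never counted as missed.
        raise NotImplementedError
```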

    [html]
    title=Supermann coverage report

Tox and Travis CI

Until recently, my approach to building Python projects that used Tox on Travis CI was simply to have two separate configuration files, each duplicating the other's settings.

This would normally work okay, with both Tox and Travis defining a set of Python runtimes to use and a command to run on them. However, when it came to the more complex tox.ini file I'm using for python-dice, recreating this in the Travis configuration would have been difficult to do, and would have required a lot of maintenance when the Tox settings changed.

The solution? Using Travis to run Tox, using its matrix feature and the TOXENV environment variable to create a different build for each of several Tox environments, like so:

    language: python
    env:
      - TOXENV=py26
      - TOXENV=py27
      - TOXENV=py32
      - TOXENV=py33
      - TOXENV=pypy
    install:
      - pip install tox
    script:
      - tox

Playing with the Github Timeline data

I came across ‘The Open Source Report Card’ today, which included some interesting results on my report. It claimed I’d contributed to repositories using VimL and Go, neither of which I could remember doing anything in relation to.

Looking into how it worked led to the Github Archive, which is available as a dataset on Google BigQuery. Playing with this eventually led to finding the problem - the 'contributions' graph included all of a user's events. This included 'star' (previously 'follow') events, so it was counting repositories I'd starred as repositories that I had contributed to.

A more accurate result was achieved with the following query:

    /* Count the number of my events by language excluding stars/follows */

    SELECT repository_language, count(repository_language) as n,
    FROM [githubarchive:github.timeline]
    WHERE actor="borntyping" AND type != "WatchEvent"
    GROUP BY repository_language
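The logic of that query can be sketched in plain Python. The field names below follow the timeline schema, but the event records themselves are made up purely to illustrate the WHERE/GROUP BY filtering:

```python
from collections import Counter

# Hypothetical sample of timeline events (made-up data).
events = [
    {'actor': 'borntyping', 'type': 'PushEvent', 'repository_language': 'Python'},
    {'actor': 'borntyping', 'type': 'WatchEvent', 'repository_language': 'Go'},
    {'actor': 'borntyping', 'type': 'PushEvent', 'repository_language': 'Python'},
    {'actor': 'someone-else', 'type': 'PushEvent', 'repository_language': 'Ruby'},
]

# WHERE actor="borntyping" AND type != "WatchEvent",
# then GROUP BY repository_language with a count per group.
counts = Counter(
    event['repository_language']
    for event in events
    if event['actor'] == 'borntyping' and event['type'] != 'WatchEvent'
)
print(counts)  # Counter({'Python': 2})
```

The starred Go repository is filtered out, just as excluding WatchEvent rows removed the phantom 'contributions' from the report card.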

The Github Archive documentation seemed to be devoid of simpler example queries, so this is how I reached the above one, starting from a query adapted from the primary example on the site:

    /* List all of my repositories, counting the number of events */

    SELECT repository_name, repository_owner, count(repository_name) as events,
    FROM [githubarchive:github.timeline]
    WHERE repository_owner="borntyping"
    GROUP BY repository_name, repository_owner
    ORDER BY events DESC

    /* List all of my repositories, counting the number of events, and sorted by language */

    SELECT repository_name, repository_owner, repository_language, count(repository_name) as events,
    FROM [githubarchive:github.timeline]
    WHERE actor="borntyping"
    GROUP BY repository_name, repository_owner, repository_language
    ORDER BY repository_language, events DESC

    /* Count the number of my events by language */

    SELECT repository_language, count(repository_language) as repository_language_count,
    FROM [githubarchive:github.timeline]
    WHERE actor="borntyping"
    GROUP BY repository_language
    ORDER BY repository_language_count DESC

Spotter at Show and Tell

My second talk at a BCS Mid Wales Show and Tell event, this time on Spotter, a command line tool I wrote. The slides are available here.

ZSH startup time

I’ve had issues with zsh starting up and running very slowly for a while. I’d assumed the problem simply lay with the number of plugins and other scripts run by oh-my-zsh, but a bit of research has found two solutions that have worked very well for me (your mileage may vary):

  • Removing the github plugin. I don't think I was using it much, if at all, and removing it stopped the noticeable delay each time the prompt was shown. (Source)
  • Adding skip_global_compinit=1 to .zshenv. This dropped the startup time of zsh from ~0.85 seconds to ~0.15 seconds. This was benchmarked using /usr/bin/time zsh -i -c exit. (Source)
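The /usr/bin/time benchmark can also be scripted to average several runs, which smooths out noise between measurements. A minimal sketch (the helper name is my own, and it assumes zsh is on your PATH when you actually benchmark it):

```python
import subprocess
import time


def average_startup_time(command, runs=5):
    """Run `command` several times and return the mean wall-clock duration."""
    total = 0.0
    for _ in range(runs):
        start = time.time()
        subprocess.call(command)
        total += time.time() - start
    return total / runs

# Benchmark interactive zsh startup, as with /usr/bin/time above:
# average_startup_time(['zsh', '-i', '-c', 'exit'])
```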