This is the BuildBot manual for Buildbot version 0.8.2.
Copyright (C) 2005, 2006, 2009, 2010 Brian Warner
Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved.
The BuildBot is a system to automate the compile/test cycle required by most software projects to validate code changes. By automatically rebuilding and testing the tree each time something has changed, build problems are pinpointed quickly, before other developers are inconvenienced by the failure. The guilty developer can be identified and harassed without human intervention. By running the builds on a variety of platforms, developers who do not have the facilities to test their changes everywhere before checkin will at least know shortly afterwards whether they have broken the build or not. Warning counts, lint checks, image size, compile time, and other build parameters can be tracked over time, are more visible, and are therefore easier to improve.
The overall goal is to reduce tree breakage and provide a platform to run tests or code-quality checks that are too annoying or pedantic for any human to waste their time with. Developers get immediate (and potentially public) feedback about their changes, encouraging them to be more careful about testing before checkin.
The Buildbot was inspired by a similar project built for a development
team writing a cross-platform embedded system. The various components
of the project were supposed to compile and run on several flavors of
unix (linux, solaris, BSD), but individual developers had their own
preferences and tended to stick to a single platform. From time to
time, incompatibilities would sneak in (some unix platforms want to use string.h, some prefer strings.h), and then the tree would compile for some developers but not others. The buildbot was
written to automate the human process of walking into the office,
updating a tree, compiling (and discovering the breakage), finding the
developer at fault, and complaining to them about the problem they had
introduced. With multiple platforms it was difficult for developers to
do the right thing (compile their potential change on all platforms);
the buildbot offered a way to help.
Another problem was when programmers would change the behavior of a library without warning its users, or change internal aspects that other code was (unfortunately) depending upon. Adding unit tests to the codebase helps here: if an application's unit tests pass despite changes in the libraries it uses, you can have more confidence that the library changes haven't broken anything. Many developers complained that the unit tests were inconvenient or took too long to run: having the buildbot run them reduces the developer's workload to a minimum.
In general, having more visibility into the project is always good, and automation makes it easier for developers to do the right thing. When everyone can see the status of the project, developers are encouraged to keep the tree in good working order. Unit tests that aren't run on a regular basis tend to suffer from bitrot just like code does: exercising them on a regular basis helps to keep them functioning and useful.
The current version of the Buildbot is additionally targeted at distributed free-software projects, where resources and platforms are only available when provided by interested volunteers. The buildslaves are designed to require an absolute minimum of configuration, reducing the effort a potential volunteer needs to expend to be able to contribute a new test environment to the project. The goal is that anyone who wishes a given project to run on their favorite platform should be able to offer that project a buildslave, running on that platform, where they can verify that their portability code works, and keeps working.
The Buildbot consists of a single buildmaster and one or more buildslaves, connected in a star topology. The buildmaster makes all decisions about what, when, and how to build. It sends commands to be run on the build slaves, which simply execute the commands and return the results. (Certain steps involve more local decision making, where the overhead of sending a lot of commands back and forth would be inappropriate, but in general the buildmaster is responsible for everything.)
The buildmaster is usually fed Changes by some sort of version control system (see Change Sources), which may cause builds to be run. As the builds are performed, various status messages are produced, which are then sent to any registered Status Targets (see Status Targets).
The buildmaster is configured and maintained by the “buildmaster admin”, who is generally the project team member responsible for build process issues. Each buildslave is maintained by a “buildslave admin”, who does not need to be quite as involved. Generally slaves are run by anyone who has an interest in seeing the project work well on their favorite platform.
The buildslaves are typically run on a variety of separate machines, at least one per platform of interest. These machines connect to the buildmaster over a TCP connection to a publicly-visible port. As a result, the buildslaves can live behind a NAT box or similar firewalls, as long as they can get to the buildmaster. The TCP connections are initiated by the buildslave and accepted by the buildmaster, but commands and results travel both ways within this connection. The buildmaster is always in charge, so all commands travel exclusively from the buildmaster to the buildslave.
To perform builds, the buildslaves must typically obtain source code from a CVS/SVN/etc repository. Therefore they must also be able to reach the repository. The buildmaster provides instructions for performing builds, but does not provide the source code itself.
The Buildmaster consists of several pieces: Change Sources, which create a Change object each time the source code is modified; Schedulers, which decide when builds should be run; Builders, which control how each build is performed; and Status plugins, which deliver information about the build results.
Each Builder is configured with a list of BuildSlaves that it will use for its builds. These buildslaves are expected to behave identically: the only reason to use multiple BuildSlaves for a single Builder is to provide a measure of load-balancing.
Within a single BuildSlave, each Builder creates its own SlaveBuilder instance. These SlaveBuilders operate independently from each other. Each gets its own base directory to work in. It is quite common to have many Builders sharing the same buildslave. For example, there might be two buildslaves: one for i386, and a second for PowerPC. There may then be a pair of Builders that do a full compile/test run, one for each architecture, and a lone Builder that creates snapshot source tarballs if the full builders complete successfully. The full builders would each run on a single buildslave, whereas the tarball creation step might run on either buildslave (since the platform doesn't matter when creating source tarballs). In this case, the mapping would look like:
Builder(full-i386)      -> BuildSlaves(slave-i386)
Builder(full-ppc)       -> BuildSlaves(slave-ppc)
Builder(source-tarball) -> BuildSlaves(slave-i386, slave-ppc)
and each BuildSlave would have two SlaveBuilders inside it, one for a full builder, and a second for the source-tarball builder.
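In master.cfg terms, this mapping might be written roughly as follows. This is only a sketch: the f_full and f_tarball BuildFactory objects are hypothetical, assumed to be defined elsewhere in the configuration file.

from buildbot.config import BuilderConfig

c['builders'] = [
    # each full builder is tied to a single slave
    BuilderConfig(name="full-i386", slavenames=["slave-i386"], factory=f_full),
    BuilderConfig(name="full-ppc", slavenames=["slave-ppc"], factory=f_full),
    # the tarball builder may run on either slave
    BuilderConfig(name="source-tarball",
                  slavenames=["slave-i386", "slave-ppc"],
                  factory=f_tarball),
]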
Once a SlaveBuilder is available, the Builder pulls one or more BuildRequests off its incoming queue. (It may pull more than one if it determines that it can merge the requests together; for example, there may be multiple requests to build the current HEAD revision). These requests are merged into a single Build instance, which includes the SourceStamp that describes what exact version of the source code should be used for the build. The Build is then randomly assigned to a free SlaveBuilder and the build begins.
The behaviour when BuildRequests are merged can be customized; see Merging BuildRequests.
The buildmaster maintains a central Status object, to which various status plugins are connected. Through this Status object, a full hierarchy of build status objects can be obtained.
The configuration file controls which status plugins are active. Each status plugin gets a reference to the top-level Status object. From there they can request information on each Builder, Build, Step, and LogFile. This query-on-demand interface is used by the html.Waterfall plugin to create the main status page each time a web browser hits the main URL.
The status plugins can also subscribe to hear about new Builds as they occur: this is used by the MailNotifier to create new email messages for each recently-completed Build.
The Status object records the status of old builds on disk in the buildmaster's base directory. This allows it to return information about historical builds.
There are also status objects that correspond to Schedulers and BuildSlaves. These allow status plugins to report information about upcoming builds, and the online/offline status of each buildslave.
A day in the life of the buildbot: a developer commits a change to the repository; the change source notices it and delivers a Change to the buildmaster; a Scheduler waits for the tree to become stable and then queues a Build; the Builders run the build on their buildslaves; and the status plugins report progress and results (web page, email, IRC).
Buildbot is shipped in two components: the buildmaster (called buildbot
for legacy reasons) and the buildslave. The buildslave component has far fewer
requirements, and is more broadly compatible than the buildmaster. You will
need to carefully pick the environment in which to run your buildmaster, but
the buildslave should be able to run just about anywhere.
It is possible to install the buildmaster and buildslave on the same system, although for anything but the smallest installation this arrangement will not be very efficient.
At a bare minimum, you'll need the following for both the buildmaster and a buildslave:
Buildbot requires python-2.4 or later.
Both the buildmaster and the buildslaves require Twisted-8.0.x or later. As always, the most recent version is recommended.
Twisted is delivered as a collection of subpackages. You'll need at least "Twisted" (the core package), and you'll also want TwistedMail, TwistedWeb, and TwistedWords (for sending email, serving a web status page, and delivering build status via IRC, respectively). You might also want TwistedConch (for the encrypted Manhole debug port). Note that Twisted requires ZopeInterface to be installed as well.
Of course, your project's build process will impose additional requirements on the buildslaves. These hosts must have all the tools necessary to compile and test your project's source code.
Buildbot - both master and slave - runs well natively on Windows. The slave runs well on Cygwin, but because of problems with SQLite on Cygwin, the master does not.
Buildbot's windows testing is limited to the most recent Twisted and Python versions. For best results, use the most recent available versions of these libraries on Windows.
The sqlite3 package is required for python-2.5 and earlier (it is already included in python-2.5 and later, but the version shipped with python-2.5 has nasty bugs, so the standalone package is still needed there).
The simplejson package is required for python-2.5 and earlier (it is already included as json in python-2.6 and later)
Buildbot requires Jinja version 2.1 or higher. Jinja2 is a general-purpose templating language and is used by Buildbot to generate the HTML output.
Buildbot and Buildslave are installed using the standard python distutils process. For either component, after unpacking the tarball, the process is:

python setup.py build
python setup.py install
where the install step may need to be done as root. This will put the bulk of the code somewhere like /usr/lib/python2.3/site-packages/buildbot. It will also install the buildbot command-line tool in /usr/bin/buildbot.
If the environment variable $NO_INSTALL_REQS is set to '1', then setup.py will not try to install Buildbot's requirements. This is usually only useful when building a Buildbot package.
To test this, shift to a different directory (like /tmp), and run:

buildbot --version
# or
buildslave --version
If it shows you the versions of Buildbot and Twisted, the install went ok. If it says no such command or it gets an ImportError when it tries to load the libraries, then something went wrong. pydoc buildbot is another useful diagnostic tool.
Windows users will find these files in other places. You will need to make sure that python can find the libraries, and will probably find it convenient to have buildbot on your PATH.
If you wish, you can run the buildbot unit test suite like this:
PYTHONPATH=. trial buildbot.test
# or
PYTHONPATH=. trial buildslave.test
Nothing should fail, though a few might be skipped. If any of the tests fail, you should stop and investigate the cause before continuing the installation process, as it will probably be easier to track down the bug early.
If you cannot or do not wish to install the buildbot into a site-wide location like /usr or /usr/local, you can also install it into the account's home directory or any other location using a tool like virtualenv.
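For example, a home-directory installation using virtualenv might look something like this (the directory name is illustrative):

virtualenv ~/buildbot-sandbox
cd ~/buildbot-sandbox && source bin/activate
# then, from the unpacked Buildbot tarball:
python setup.py install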
As you learned earlier (see System Architecture), the buildmaster runs on a central host (usually one that is publicly visible, so everybody can check on the status of the project), and controls all aspects of the buildbot system. Let us call this host buildbot.example.org.
You may wish to create a separate user account for the buildmaster, perhaps named buildmaster. This can help keep your personal configuration distinct from that of the buildmaster and is useful if you have to use a mail-based notification system (see Change Sources). However, the Buildbot will work just fine with your regular user account.
You need to choose a directory for the buildmaster, called the basedir. This directory will be owned by the buildmaster, which will use configuration files therein, and create status files as it runs. ~/Buildbot is a likely value. If you run multiple buildmasters in the same account, or if you run both masters and slaves, you may want a more distinctive name like ~/Buildbot/master/gnomovision or ~/Buildmasters/fooproject. If you are using a separate user account, this might just be ~buildmaster/masters/fooproject.
Once you've picked a directory, use the buildbot create-master command to create the directory and populate it with startup files:
buildbot create-master -r basedir
You will need to create a configuration file (see Configuration) before starting the buildmaster. Most of the rest of this manual is dedicated to explaining how to do this. A sample configuration file is placed in the working directory, named master.cfg.sample, which can be copied to master.cfg and edited to suit your purposes.
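To give a flavor of what master.cfg contains, here is a heavily trimmed sketch of a one-builder configuration. All names, ports, and the SVN URL are illustrative; treat master.cfg.sample as the authoritative starting point.

c = BuildmasterConfig = {}

from buildbot.buildslave import BuildSlave
c['slaves'] = [BuildSlave("example-slave", "pass")]  # slave name and password
c['slavePortnum'] = 9989                             # buildslaves connect here

from buildbot.schedulers.basic import Scheduler
c['schedulers'] = [Scheduler(name="all", branch=None,
                             treeStableTimer=2*60,
                             builderNames=["runtests"])]

from buildbot.process.factory import BuildFactory
from buildbot.steps.source import SVN
from buildbot.steps.shell import ShellCommand
f = BuildFactory()
f.addStep(SVN(svnurl="http://svn.example.org/trunk"))
f.addStep(ShellCommand(command=["make", "test"]))

from buildbot.config import BuilderConfig
c['builders'] = [BuilderConfig(name="runtests",
                               slavenames=["example-slave"],
                               factory=f)]

from buildbot.status import html
c['status'] = [html.WebStatus(http_port=8010)]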
(Internal details: This command creates a file named buildbot.tac that contains all the state necessary to create the buildmaster. Twisted has a tool called twistd which can use this .tac file to create and launch a buildmaster instance. twistd takes care of logging and daemonization (running the program in the background). /usr/bin/buildbot is a front end which runs twistd for you.)
In addition to buildbot.tac, a small Makefile.sample is installed. This can be used as the basis for customized daemon startup; see Launching the daemons.
If you want to use MySQL as the database backend for your Buildbot, add the --db option to the create-master invocation to specify the connection string for the MySQL database (see Database Specification).
This section lists options to the create-master command. You can also type buildbot create-master --help for an up-to-the-moment summary.
--force
--no-logrotate
--relocatable
--config
--log-size
--log-count
This is the number of twistd.log files to keep around; it can be set to None to keep all twistd.log files around. The default is 10.
--db
If you have just installed a new version of the Buildbot code, and you have buildmasters that were created using an older version, you'll need to upgrade these buildmasters before you can use them. The upgrade process adds and modifies files in the buildmaster's base directory to make it compatible with the new code.
buildbot upgrade-master basedir
This command will also scan your master.cfg file for incompatibilities (by loading it and printing any errors or deprecation warnings that occur). Each buildbot release tries to be compatible with configurations that worked cleanly (i.e. without deprecation warnings) on the previous release: any functions or classes that are to be removed will first be deprecated in a release, to give you a chance to start using the replacement.
The upgrade-master command is idempotent. It is safe to run it multiple times. After each upgrade of the buildbot code, you should use upgrade-master on all your buildmasters.
In general, Buildbot slaves and masters can be upgraded independently, although some new features will not be available, depending on the master and slave versions.
The 0.7.6 release introduced the public_html/ directory, which contains index.html and other files served by the WebStatus and Waterfall status displays. The upgrade-master command will create these files if they do not already exist. It will not modify existing copies, but it will write a new copy in e.g. index.html.new if the new version differs from the version that already exists.
Buildbot-0.8.0 introduces a database backend, which is SQLite by default. The upgrade-master command will automatically create and populate this database with the changes the buildmaster has seen. Note that, as of this release, build history is not contained in the database, and is thus not migrated.
The upgrade process renames the Changes pickle ($basedir/changes.pck) to changes.pck.old once the upgrade is complete. To reverse the upgrade, simply downgrade Buildbot and move this file back to its original name. You may also wish to delete the state database (state.sqlite).
The upgrade process assumes that strings in your Changes pickle are encoded in UTF-8 (or plain ASCII). If this is not the case, and if there are non-UTF-8 characters in the pickle, the upgrade will fail with a suitable error message. If this occurs, you have two options. If the change history is not important to your purpose, you can simply delete changes.pck.
If you would like to keep the change history, then you will need to figure out which encoding is in use, and use contrib/fix_changes_pickle_encoding.py (see Contrib Scripts) to rewrite the changes pickle into Unicode before upgrading the master. A typical invocation (with Mac-Roman encoding) might look like:
$ python $buildbot/contrib/fix_changes_pickle_encoding.py changes.pck macroman
decoding bytestrings in changes.pck using macroman
converted 11392 strings
backing up changes.pck to changes.pck.old
If your Changes pickle uses multiple encodings, you're on your own, but the script in contrib may provide a good starting point for the fix.
Typically, you will be adding a buildslave to an existing buildmaster, to provide additional architecture coverage. The buildbot administrator will give you several pieces of information necessary to connect to the buildmaster. You should also be somewhat familiar with the project being tested, so you can troubleshoot build problems locally.
The buildbot exists to make sure that the project's stated “how to build it” process actually works. To this end, the buildslave should run in an environment just like that of your regular developers. Typically the project build process is documented somewhere (README, INSTALL, etc), in a document that should mention all library dependencies and contain a basic set of build instructions. This document will be useful as you configure the host and account in which the buildslave runs.
Here's a good checklist for setting up a buildslave:
It is recommended (although not mandatory) to set up a separate user account for the buildslave. This account is frequently named buildbot or buildslave. This serves to isolate your personal working environment from that of the slave, and helps to minimize the security threat posed by letting possibly-unknown contributors run arbitrary code on your system. The account should have a minimum of fancy init scripts.
Follow the instructions given earlier (see Installing the code).
If you use a separate buildslave account, and you didn't install the buildbot code to a shared location, then you will need to install it with --home=~ for each account that needs it.
Make sure the host can actually reach the buildmaster. Usually the buildmaster is running a status webserver on the same machine, so simply point your web browser at it and see if you can get there. Install whatever additional packages or libraries the project's INSTALL document advises. (Or not: if your buildslave is supposed to make sure that building without optional libraries still works, then don't install those libraries.)
Again, these libraries don't necessarily have to be installed to a site-wide shared location, but they must be available to your build process. Accomplishing this is usually very specific to the build process, so installing them to /usr or /usr/local is usually the best approach.
Follow the instructions in the INSTALL document, in the buildslave's account. Perform a full CVS (or whatever) checkout, configure, make, run tests, etc. Confirm that the build works without manual fussing. If it doesn't work when you do it by hand, it will be unlikely to work when the buildbot attempts to do it in an automated fashion.
This should be somewhere in the buildslave's account, typically named after the project which is being tested. The buildslave will not touch any file outside of this directory. Something like ~/Buildbot or ~/Buildslaves/fooproject is appropriate.
When the buildbot admin configures the buildmaster to accept and use your buildslave, they will provide you with several pieces of information: the hostname and port number of the buildmaster's slave port, your buildslave's name, and a password.
Now run the 'buildslave' command as follows:
buildslave create-slave BASEDIR MASTERHOST:PORT SLAVENAME PASSWORD
This will create the base directory and a collection of files inside, including the buildbot.tac file that contains all the information you passed to the buildslave command.
When it first connects, the buildslave will send a few files up to the buildmaster which describe the host that it is running on. These files are presented on the web status display so that developers have more information to reproduce any test failures that are witnessed by the buildbot. There are sample files in the info subdirectory of the buildbot's base directory. You should edit these to correctly describe you and your host.
BASEDIR/info/admin should contain your name and email address. This is the “buildslave admin address”, and will be visible from the build status page (so you may wish to munge it a bit if address-harvesting spambots are a concern).
BASEDIR/info/host should be filled with a brief description of the host: OS, version, memory size, CPU speed, versions of relevant libraries installed, and finally the version of the buildbot code which is running the buildslave.
The optional BASEDIR/info/access_uri can specify a URI which will connect a user to the machine. Many systems accept ssh://hostname URIs for this purpose.
If you run many buildslaves, you may want to create a single ~buildslave/info file and share it among all the buildslaves with symlinks.
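For example, the two files might be filled in like this (the contents are, of course, illustrative):

cat > BASEDIR/info/admin <<EOF
Your Name <yourname@example.org>
EOF

cat > BASEDIR/info/host <<EOF
Fedora 12, 2GB RAM, 2.4GHz x86_64, gcc-4.4, buildbot-slave-0.8.2
EOF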
There are a handful of options you might want to use when creating the buildslave with the buildslave create-slave <options> DIR <params> command. You can type buildslave create-slave --help for a summary. To use these, just include them on the buildslave create-slave command line, like this:
buildslave create-slave --umask=022 ~/buildslave buildmaster.example.org:42012 myslavename mypasswd
--no-logrotate
--usepty
--umask
Use --umask=022 to tell the buildslave to fix the umask after twistd clobbers it. If you want build products to be writable by other accounts too, use --umask=000, but this is likely to be a security problem.
--keepalive
For example, --keepalive=120.
If the buildslave is behind a NAT box or stateful firewall, these messages may help to keep the connection alive: some NAT boxes tend to forget about a connection if it has not been used in a while. When this happens, the buildmaster will think that the buildslave has disappeared, and builds will time out. Meanwhile the buildslave will not realize that anything is wrong.
--maxdelay
--log-size
--log-count
This is the number of twistd.log files to keep around; it can be set to None to keep all twistd.log files around. The default is 10.
unicode_encoding
The default value is what python's sys.getfilesystemencoding() returns, which on Windows is 'mbcs', on Mac OSX is 'utf-8', and on Unix depends on your locale settings.
If you need a different encoding, this can be changed in your build slave's buildbot.tac file by adding a unicode_encoding argument to BuildSlave:
s = BuildSlave(buildmaster_host, port, slavename, passwd, basedir,
               keepalive, usepty, umask=umask, maxdelay=maxdelay,
               unicode_encoding='utf-8')
If you have just installed a new version of Buildbot-slave, you may need to take some steps to upgrade it. If you are upgrading to version 0.8.2 or later, you can run
buildslave upgrade-slave /path/to/buildslave/dir
Before Buildbot version 0.8.1, the Buildbot master and slave were part of the same distribution. As of version 0.8.1, the buildslave is a separate distribution.
As of this release, you will need to install buildbot-slave to run a slave.
Any automatic startup scripts that had run buildbot start for previous versions should be changed to run buildslave start instead.
If you are running a version later than 0.8.1, then you can skip the remainder of this section: the upgrade-slave command will take care of this. If you are upgrading directly to 0.8.1, read on.
The existing buildbot.tac for any buildslaves running older versions will need to be edited or replaced. If the loss of cached buildslave state (e.g., for Source steps in copy mode) is not problematic, the easiest solution is to simply delete the slave directory and re-run buildslave create-slave.
If deleting the slave directory is problematic, the change to buildbot.tac is simple. On line 3, replace:

from buildbot.slave.bot import BuildSlave

with:

from buildslave.bot import BuildSlave

After this change, the buildslave should start as usual.
Both the buildmaster and the buildslave run as daemon programs. To launch them, pass the working directory to the buildbot and buildslave commands, as appropriate:

# start a master
buildbot start BASEDIR

# start a slave
buildslave start SLAVE_BASEDIR
The BASEDIR is optional and can be omitted if the current directory contains the buildbot configuration (the buildbot.tac file).
buildbot start
This command will start the daemon and then return, so normally it will not produce any output. To verify that the programs are indeed running, look for a pair of files named twistd.log and twistd.pid that should be created in the working directory. twistd.pid contains the process ID of the newly-spawned daemon.
When the buildslave connects to the buildmaster, new directories will start appearing in its base directory. The buildmaster tells the slave to create a directory for each Builder which will be using that slave. All build operations are performed within these directories: CVS checkouts, compiles, and tests.
Once you get everything running, you will want to arrange for the buildbot daemons to be started at boot time. One way is to use cron, by putting them in a @reboot crontab entry:
@reboot buildbot start BASEDIR
When you run crontab to set this up, remember to do it as the buildmaster or buildslave account! If you add this to your crontab when running as your regular account (or worse yet, root), then the daemon will run as the wrong user, quite possibly as one with more authority than you intended to provide.
It is important to remember that the environment provided to cron jobs and init scripts can be quite different from your normal runtime. There may be fewer environment variables specified, and the PATH may be shorter than usual. It is a good idea to test out this method of launching the buildslave by using a cron job with a time in the near future, with the same command, and then check twistd.log to make sure the slave actually started correctly. Common problems here are for /usr/local or ~/bin to not be on your PATH, or for PYTHONPATH to not be set correctly. Sometimes HOME is messed up too.
Some distributions may include conveniences to make starting buildbot at boot time easy. For instance, with the default buildbot package in Debian-based distributions, you may only need to modify /etc/default/buildbot (see also /etc/init.d/buildbot, which reads the configuration in /etc/default/buildbot).
Buildbot also comes with its own init scripts that provide support for controlling multi-slave and multi-master setups (mostly because they are based on the init script from the debian package). With a little modification these scripts can be used both on debian and rhel based distributions and may thus prove helpful to package maintainers who are working on buildbot (or those that haven't yet split buildbot into master and slave packages).
# install as /etc/default|sysconfig/buildslave
master/contrib/init-scripts/buildslave.default

# install as /etc/default|sysconfig/buildmaster
master/contrib/init-scripts/buildmaster.default

# install as /etc/init.d/buildslave
slave/contrib/init-scripts/buildslave.init.sh

# install as /etc/init.d/buildmaster
slave/contrib/init-scripts/buildmaster.init.sh

# ... and tell sysvinit about them
chkconfig buildmaster reset
# ... or
update-rc.d buildmaster defaults
While a buildbot daemon runs, it emits text to a logfile, named twistd.log. A command like tail -f twistd.log is useful to watch the command output as it runs.
The buildmaster will announce any errors with its configuration file in the logfile, so it is a good idea to look at the log at startup time to check for any problems. Most buildmaster activities will cause lines to be added to the log.
To stop a buildmaster or buildslave manually, use:
buildbot stop BASEDIR
# or
buildslave stop SLAVE_BASEDIR
This simply looks for the twistd.pid file and kills whatever process is identified within.
At system shutdown, all processes are sent a SIGTERM (followed by a SIGKILL if they fail to exit). The buildmaster and buildslave will respond to the SIGTERM by shutting down normally.
The buildmaster will respond to a SIGHUP by re-reading its config file. Of course, this only works on unix-like systems with signal support, and won't work on Windows. The following shortcut is available:
buildbot reconfig BASEDIR
When you update the Buildbot code to a new release, you will need to restart the buildmaster and/or buildslave before it can take advantage of the new code. You can do a buildbot stop BASEDIR and buildbot start BASEDIR in quick succession, or you can use the restart shortcut, which does both steps for you:
buildbot restart BASEDIR
Buildslaves can similarly be restarted with:
buildslave restart BASEDIR
There are certain configuration changes that are not handled cleanly by buildbot reconfig. If this occurs, buildbot restart is a more robust tool to fully switch over to the new configuration.
buildbot restart may also be used to start a stopped Buildbot instance. This behaviour is useful when writing scripts that stop, start and restart Buildbot.
A buildslave may also be gracefully shut down from the WebStatus status plugin (see WebStatus). This is useful to shut down a buildslave without interrupting any current builds. The buildmaster will wait until the buildslave has finished all its current builds, and will then tell the buildslave to shut down.
The buildmaster can be configured to send out email notifications when a slave has been offline for a while. Be sure to configure the buildmaster with a contact email address for each slave so these notifications are sent to someone who can bring it back online.
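A sketch of such a configuration, using the notify_on_missing and missing_timeout arguments to BuildSlave (the address and timeout are illustrative):

from buildbot.buildslave import BuildSlave
c['slaves'] = [
    BuildSlave("slave-i386", "s3kr1t",
               notify_on_missing="admin@example.org",  # contact for this slave
               missing_timeout=3600),  # seconds offline before mail is sent
]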
If you find you can no longer provide a buildslave to the project, please let the project admins know, so they can put out a call for a replacement.
The Buildbot records status and logs output continually, each time a build is performed. The status tends to be small, but the build logs can become quite large. Each build and log are recorded in a separate file, arranged hierarchically under the buildmaster's base directory. To prevent these files from growing without bound, you should periodically delete old build logs. A simple cron job to delete anything older than, say, two weeks should do the job. The only trick is to leave the buildbot.tac and other support files alone, for which find's -mindepth argument helps skip everything in the top directory. You can use something like the following:
@weekly cd BASEDIR && find . -mindepth 2 -ipath './public_html/*' -prune -o -type f -mtime +14 -exec rm {} \;
@weekly cd BASEDIR && find twistd.log* -mtime +14 -exec rm {} \;
Alternatively, you can configure a maximum number of old logs to be kept using the --log-count command line option when running buildslave create-slave or buildbot create-master.
Here are a few hints on diagnosing common problems.
Cron jobs are typically run with a minimal shell (/bin/sh, not /bin/bash), and tilde expansion is not always performed in such commands. You may want to use explicit paths, because the PATH is usually quite short and doesn't include anything set by your shell's startup scripts (.profile, .bashrc, etc). If you've installed buildbot (or other python libraries) to an unusual location, you may need to add a PYTHONPATH specification (note that python will do tilde-expansion on PYTHONPATH elements by itself). Sometimes it is safer to fully-specify everything:
@reboot PYTHONPATH=~/lib/python /usr/local/bin/buildbot start /usr/home/buildbot/basedir
Take the time to get the @reboot job set up. Otherwise, things will work fine for a while, but the first power outage or system reboot you have will stop the buildslave with nothing but the cries of sorrowful developers to remind you that it has gone away.
If the buildslave cannot connect to the buildmaster, the reason should be described in the twistd.log logfile. Some common problems are an incorrect master hostname or port number, or a mistyped bot name or password. If the buildslave loses the connection to the master, it is supposed to attempt to reconnect with an exponentially-increasing backoff. Each attempt (and the time of the next attempt) will be logged. If you get impatient, just manually stop and re-start the buildslave.
When the buildmaster is restarted, all slaves will be disconnected, and will attempt to reconnect as usual. The reconnect time will depend upon how long the buildmaster is offline (i.e. how far up the exponential backoff curve the slaves have travelled). Again, buildslave restart BASEDIR will speed up the process.
While some features of Buildbot are included in the distribution, others are only available in contrib/ in the source directory. The latest versions of such scripts are available at http://github.com/buildbot/buildbot/tree/master/master/contrib.
This chapter defines some of the basic concepts that the Buildbot uses. You'll need to understand how the Buildbot sees the world to configure it properly.
These source trees come from a Version Control System of some kind. CVS and Subversion are two popular ones, but the Buildbot supports others. All VC systems have some notion of an upstream repository which acts as a server, from which clients can obtain source trees according to various parameters. The VC repository provides source trees of various projects, for different branches, and from various points in time. The first thing we have to do is to specify which source tree we want to get.
For the purposes of the Buildbot, we will try to generalize all VC systems as having repositories that each provide sources for a variety of projects. Each project is defined as a directory tree with source files. The individual files may each have revisions, but we ignore that and treat the project as a whole as having a set of revisions (CVS is really the only VC system still in widespread use that has per-file revisions; everything modern has moved to atomic tree-wide changesets). Each time someone commits a change to the project, a new revision becomes available. These revisions can be described by a tuple with two items: the first is a branch tag, and the second is some kind of revision stamp or timestamp. Complex projects may have multiple branch tags, but there is always a default branch. The timestamp may be an actual timestamp (such as the -D option to CVS), or it may be a monotonically-increasing transaction number (such as the change number used by SVN and P4, or the revision number used by Bazaar, or a labeled tag used in CVS). The SHA1 revision ID used by Mercurial and Git is also a kind of revision stamp, in that it specifies a unique copy of the source tree, as does a Darcs “context” file.
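To illustrate, such (branch, revision-stamp) tuples might look like the following; all of the values are invented:

("trunk", "2010-05-05 12:00:00")  # CVS: branch tag plus timestamp
("branches/1.2", 4524)            # SVN: branch plus revision number
(None, "f6839e61b25c")            # Mercurial: default branch plus hash ID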
When we aren't intending to make any changes to the sources we check out (at least not any that need to be committed back upstream), there are two basic ways to use a VC system:
Build personnel or CM staff typically use the first approach: the build that results is (ideally) completely specified by the two parameters given to the VC system: repository and revision tag. This gives QA and end-users something concrete to point at when reporting bugs. Release engineers are also reportedly fond of shipping code that can be traced back to a concise revision tag of some sort.
Developers are more likely to use the second approach: each morning the developer does an update to pull in the changes committed by the team over the last day. These builds are not easy to fully specify: it depends upon exactly when you did a checkout, and upon what local changes the developer has in their tree. Developers do not normally tag each build they produce, because there is usually significant overhead involved in creating these tags. Recreating the trees used by one of these builds can be a challenge. Some VC systems may provide implicit tags (like a revision number), while others may allow the use of timestamps to mean “the state of the tree at time X” as opposed to a tree-state that has been explicitly marked.
The Buildbot is designed to help developers, so it usually works in terms of the latest sources as opposed to specific tagged revisions. However, it would really prefer to build from reproducible source trees, so implicit revisions are used whenever possible.
So for the Buildbot's purposes we treat each VC system as a server which can take a list of specifications as input and produce a source tree as output. Some of these specifications are static: they are attributes of the builder and do not change over time. Others are more variable: each build will have a different value. The repository is changed over time by a sequence of Changes, each of which represents a single developer making changes to some set of files. These Changes are cumulative.
For normal builds, the Buildbot wants to get well-defined source trees that contain specific Changes, and exclude other Changes that may have occurred after the desired ones. We assume that the Changes arrive at the buildbot (through one of the mechanisms described in see Change Sources) in the same order in which they are committed to the repository. The Buildbot waits for the tree to become “stable” before initiating a build, for two reasons. The first is that developers frequently make multiple related commits in quick succession, even when the VC system provides ways to make atomic transactions involving multiple files at the same time. Running a build in the middle of these sets of changes would use an inconsistent set of source files, and is likely to fail (and is certain to be less useful than a build which uses the full set of changes). The tree-stable-timer is intended to avoid these useless builds that include some of the developer's changes but not all. The second reason is that some VC systems (i.e. CVS) do not provide repository-wide transaction numbers, so that timestamps are the only way to refer to a specific repository state. These timestamps may be somewhat ambiguous, due to processing and notification delays. By waiting until the tree has been stable for, say, 10 minutes, we can choose a timestamp from the middle of that period to use for our source checkout, and then be reasonably sure that any clock-skew errors will not cause the build to be performed on an inconsistent set of source files.
The Schedulers always use the tree-stable-timer, with a timeout that is configured to reflect a reasonable tradeoff between build latency and change frequency. When the VC system provides coherent repository-wide revision markers (such as Subversion's revision numbers, or in fact anything other than CVS's timestamps), the resulting Build is simply performed against a source tree defined by that revision marker. When the VC system does not provide this, a timestamp from the middle of the tree-stable period is used to generate the source tree.
For CVS, the static specifications are repository and module. In addition to those, each build uses a timestamp (or omits the timestamp to mean the latest) and branch tag (which defaults to HEAD). These parameters collectively specify a set of sources from which a build may be performed.
Subversion combines the repository, module, and branch into a single Subversion URL parameter. Within that scope, source checkouts can be specified by a numeric revision number (a repository-wide monotonically-increasing marker, such that each transaction that changes the repository is indexed by a different revision number), or a revision timestamp. When branches are used, the repository and module form a static baseURL, while each build has a revision number and a branch (which defaults to a statically-specified defaultBranch). The baseURL and branch are simply concatenated together to derive the svnurl to use for the checkout.
Perforce is similar. The server is specified through a P4PORT parameter. Module and branch are specified in a single depot path, and revisions are depot-wide. When branches are used, the p4base and defaultBranch are concatenated together to produce the depot path.
Bzr (which is a descendant of Arch/Bazaar, and is frequently referred to as “Bazaar”) has the same sort of repository-vs-workspace model as Arch, but the repository data can either be stored inside the working directory or kept elsewhere (either on the same machine or on an entirely different machine). For the purposes of Buildbot (which never commits changes), the repository is specified with a URL and a revision number.
The most common way to obtain read-only access to a bzr tree is via HTTP, simply by making the repository visible through a web server like Apache. Bzr can also use FTP and SFTP servers, if the buildslave process has sufficient privileges to access them. Higher performance can be obtained by running a special Bazaar-specific server. None of these matter to the buildbot: the repository URL just has to match the kind of server being used. The repoURL argument provides the location of the repository.
Branches are expressed as subdirectories of the main central repository, which means that if branches are being used, the BZR step is given a baseURL and defaultBranch instead of getting the repoURL argument.
Darcs doesn't really have the notion of a single master repository. Nor does it really have branches. In Darcs, each working directory is also a repository, and there are operations to push and pull patches from one of these repositories to another. For the Buildbot's purposes, all you need to do is specify the URL of a repository that you want to build from. The build slave will then pull the latest patches from that repository and build them. Multiple branches are implemented by using multiple repositories (possibly living on the same server).
Builders which use Darcs therefore have a static repourl which specifies the location of the repository. If branches are being used, the source Step is instead configured with a baseURL and a defaultBranch, and the two strings are simply concatenated together to obtain the repository's URL. Each build then has a specific branch which replaces defaultBranch, or just uses the default one. Instead of a revision number, each build can have a “context”, which is a string that records all the patches that are present in a given tree (this is the output of darcs changes --context, and is considerably less concise than, e.g. Subversion's revision number, but the patch-reordering flexibility of Darcs makes it impossible to provide a shorter useful specification).
Mercurial is like Darcs, in that each branch is stored in a separate repository. The repourl, baseURL, and defaultBranch arguments are all handled the same way as with Darcs. The “revision”, however, is the hash identifier returned by hg identify.
Git also follows a decentralized model, and each repository can have several branches and tags. The source Step is configured with a static repourl which specifies the location of the repository. In addition, an optional branch parameter can be specified to check out code from a specific branch instead of the default “master” branch. The “revision” is specified as a SHA1 hash as returned by e.g. git rev-parse. No attempt is made to ensure that the specified revision is actually a subset of the specified branch.
Each Change has a who attribute, which specifies which developer is responsible for the change. This is a string which comes from a namespace controlled by the VC repository. Frequently this means it is a username on the host which runs the repository, but not all VC systems require this. Each StatusNotifier will map the who attribute into something appropriate for their particular means of communication: an email address, an IRC handle, etc.
It also has a list of files, which are just the tree-relative filenames of any files that were added, deleted, or modified for this Change. These filenames are used by the fileIsImportant function (in the Scheduler) to decide whether it is worth triggering a new build or not; e.g. the Scheduler could use the following function to only run a build if a C file were checked in:
def has_C_files(change):
    for name in change.files:
        if name.endswith(".c"):
            return True
    return False
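Such a function is handed to a Scheduler through its fileIsImportant argument; a sketch, with an illustrative builder name:

from buildbot.schedulers.basic import Scheduler
c['schedulers'] = [Scheduler(name="c-only", branch=None,
                             treeStableTimer=5*60,
                             builderNames=["full-linux"],
                             fileIsImportant=has_C_files)]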
Certain BuildSteps can also use the list of changed files to run a more targeted series of tests, e.g. the python_twisted.Trial step can run just the unit tests that provide coverage for the modified .py files instead of running the full test suite.
The Change also has a comments attribute, which is a string containing any checkin comments.
A change's project, by default the empty string, describes the source code that changed. It is a free-form string which the buildbot administrator can use to flexibly discriminate among changes.
Generally, a project is an independently-buildable unit of source. This field can be used to apply different build steps to different projects. For example, an open-source application might build its Windows client from a separate codebase than its POSIX server. In this case, the change sources should be configured to attach an appropriate project string (say, "win-client" and "server") to changes from each codebase. Schedulers would then examine these strings and trigger the appropriate builders for each project.
A change occurs within the context of a specific repository. This is generally specified with a string, and for most version-control systems, this string takes the form of a URL.
Changes can be filtered on repository, but more often this field is used as a hint for the build steps to figure out which code to check out.
Each Change can have a revision attribute, which describes how to get a tree with a specific state: a tree which includes this Change (and all that came before it) but none that come after it. If this information is unavailable, the .revision attribute will be None. These revisions are provided by the ChangeSource, and consumed by the computeSourceRevision method in the appropriate source.Source class.
For CVS, revision is an int, seconds since the epoch.
For SVN, revision is an int, the changeset number (r%d).
For Darcs, revision is a large string, the output of darcs changes --context.
For Mercurial, revision is a short string (a hash ID), the output of hg identify.
For P4, revision is an int, the transaction number.
For Git, revision is a short string (a SHA1 hash), the output of e.g. git rev-parse.
The Change might also have a branch attribute. This indicates that all of the Change's files are in the same named branch. The Schedulers get to decide whether the branch should be built or not.
For VC systems like CVS and Git, the branch name is unrelated to the filename (that is, the branch name and the filename inhabit unrelated namespaces). For SVN, branches are expressed as subdirectories of the repository, so the file's “svnurl” is a combination of some base URL, the branch name, and the filename within the branch (in a sense, the branch name and the filename inhabit the same namespace). Darcs branches are subdirectories of a base URL just like SVN. Mercurial branches are the same as Darcs.
A Change may have one or more properties attached to it, usually specified through the Force Build form or the sendchange command (see sendchange). Properties are discussed in detail in the Build Properties section (see Build Properties).
Finally, the Change might have a links list, which is intended to provide a list of URLs to a viewcvs-style web page that provides more detail for this Change, perhaps including the full file diffs.
Each Buildmaster has a set of Scheduler objects, each of which gets a copy of every incoming Change. The Schedulers are responsible for deciding when Builds should be run. Some Buildbot installations might have a single Scheduler, while others may have several, each for a different purpose.
For example, a “quick” scheduler might exist to give immediate feedback to developers, hoping to catch obvious problems in the code that can be detected quickly. These typically do not run the full test suite, nor do they run on a wide variety of platforms. They also usually do a VC update rather than performing a brand-new checkout each time. You could have a “quick” scheduler which uses a 30 second timeout and feeds a single “quick” Builder that uses a VC mode='update' setting.
A separate “full” scheduler would run more comprehensive tests a little while later, to catch more subtle problems. This scheduler would have a longer tree-stable-timer, maybe 30 minutes, and would feed multiple Builders (with a mode of 'copy', 'clobber', or 'export').
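A sketch of such a pair of schedulers; the builder names are illustrative, and the builders themselves would be configured with the corresponding VC modes:

from buildbot.schedulers.basic import Scheduler
quick = Scheduler(name="quick", branch=None, treeStableTimer=30,
                  builderNames=["quick-linux"])
full = Scheduler(name="full", branch=None, treeStableTimer=30*60,
                 builderNames=["full-linux", "full-osx"])
c['schedulers'] = [quick, full]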
The tree-stable-timer and fileIsImportant decisions are made by the Scheduler. Dependencies are also implemented here.
Periodic builds (those which are run every N seconds rather than after new Changes arrive) are triggered by a special Periodic Scheduler subclass. The default Scheduler class can also be told to watch for specific branches, ignoring Changes on other branches. This may be useful if you have a trunk and a few release branches which should be tracked, but don't want to have the Buildbot pay attention to several dozen private user branches.
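For instance, a nightly-style Periodic scheduler might be set up like this (a sketch; the builder name is illustrative):

from buildbot.schedulers.timed import Periodic
nightly = Periodic(name="daily", builderNames=["full-linux"],
                   periodicBuildTimer=24*60*60)  # seconds between builds
c['schedulers'].append(nightly)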
When the setup has multiple sources of Changes, the category can be used by Scheduler objects to filter out a subset of the Changes. Note that not all change sources can attach a category.
Some Schedulers may trigger builds for other reasons, other than recent Changes. For example, a Scheduler subclass could connect to a remote buildmaster and watch for builds of a library to succeed before triggering a local build that uses that library.
Each Scheduler creates and submits BuildSet objects to the BuildMaster, which is then responsible for making sure the individual BuildRequests are delivered to the target Builders.
Scheduler instances are activated by placing them in the c['schedulers'] list in the buildmaster config file. Each Scheduler has a unique name.
A BuildSet is the name given to a set of Builds that all compile/test the same version of the tree on multiple Builders. In general, all these component Builds will perform the same sequence of Steps, using the same source code, but on different platforms or against a different set of libraries.
The BuildSet is tracked as a single unit, which fails if any of the component Builds have failed, and therefore can succeed only if all of the component Builds have succeeded. There are two kinds of status notification messages that can be emitted for a BuildSet: the firstFailure type (which fires as soon as we know the BuildSet will fail), and the Finished type (which fires once the BuildSet has completely finished, regardless of whether the overall set passed or failed).
A BuildSet is created with a source stamp tuple of (branch, revision, changes, patch), some of which may be None, and a list of Builders on which it is to be run. They are then given to the BuildMaster, which is responsible for creating a separate BuildRequest for each Builder.
There are a couple of different likely values for the SourceStamp:
(revision=None, changes=[CHANGES], patch=None)
This is a SourceStamp used when a series of Changes have triggered a build. The VC step will attempt to check out a tree that contains CHANGES (and any changes that occurred before CHANGES, but not any that occurred after them).

(revision=None, changes=None, patch=None)
This is a SourceStamp that would be used on a Build that was triggered by a user request, or a Periodic scheduler. It is also possible to configure the VC Source Step to always check out the latest sources rather than paying attention to the Changes in the SourceStamp, which will result in the same behavior as this.

(branch=BRANCH, revision=None, changes=None, patch=None)
This is a SourceStamp used to build the latest sources of the given branch BRANCH rather than the default one.

(revision=REV, changes=None, patch=(LEVEL, DIFF, SUBDIR_ROOT))
This checks out the sources at the given revision REV, then applies the patch (using patch -pLEVEL <DIFF) from inside the relative directory SUBDIR_ROOT. Item SUBDIR_ROOT is optional and defaults to the builder working directory. The try feature uses this kind of SourceStamp. If patch is None, the patching step is bypassed.
The buildmaster is responsible for turning the BuildSet into a set of BuildRequest objects and queueing them on the appropriate Builders.
A BuildRequest is a request to build a specific set of sources on a single specific Builder. Each Builder runs the BuildRequest as soon as it can (i.e. when an associated buildslave becomes free). BuildRequests are prioritized from oldest to newest, so when a buildslave becomes free, the Builder with the oldest BuildRequest is run.
The BuildRequest contains the SourceStamp specification. The actual process of running the build (the series of Steps that will be executed) is implemented by the Build object. In the future this might be changed, to have the Build define what gets built, and a separate BuildProcess (provided by the Builder) to define how it gets built.
The BuildRequest may be mergeable with other compatible BuildRequests. Builds that are triggered by incoming Changes will generally be mergeable. Builds that are triggered by user requests are generally not, unless they are multiple requests to build the latest sources of the same branch.
The Buildmaster runs a collection of Builders, each of which handles a single type of build (e.g. full versus quick), on one or more build slaves. Builders serve as a kind of queue for a particular type of build. Each Builder gets a separate column in the waterfall display. In general, each Builder runs independently (although various kinds of interlocks can cause one Builder to have an effect on another).
Each builder is a long-lived object which controls a sequence of Builds. Each Builder is created when the config file is first parsed, and lives forever (or rather until it is removed from the config file). It mediates the connections to the buildslaves that do all the work, and is responsible for creating the Build objects - see Build.
Each builder gets a unique name, and the path name of a directory where it gets to do all its work (there is a buildmaster-side directory for keeping status information, as well as a buildslave-side directory where the actual checkout/compile/test commands are executed).
A builder also has a BuildFactory, which is responsible for creating new Build instances: because the Build instance is what actually performs each build, choosing the BuildFactory is the way to specify what happens each time a build is done (see Build).
Each builder is associated with one or more BuildSlaves. A builder which is used to perform Mac OS X builds (as opposed to Linux or Solaris builds) should naturally be associated with a Mac buildslave.
If multiple buildslaves are available for any given builder, you will have some measure of redundancy: in case one slave goes offline, the others can still keep the Builder working. In addition, multiple buildslaves will allow multiple simultaneous builds for the same Builder, which might be useful if you have a lot of forced or “try” builds taking place.
If you use this feature, it is important to make sure that the buildslaves are all, in fact, capable of running the given build. The slave hosts should be configured similarly, otherwise you will spend a lot of time trying (unsuccessfully) to reproduce a failure that only occurs on some of the buildslaves and not the others. Different platforms, operating systems, versions of major programs or libraries, all these things mean you should use separate Builders.
A build is a single compile or test run of a particular version of the source code, and is comprised of a series of steps. It is ultimately up to you what constitutes a build, but for compiled software it is generally the checkout, configure, make, and make check sequence. For interpreted projects like Python modules, a build is generally a checkout followed by an invocation of the bundled test suite.
The builder which creates a build specifies its steps via a BuildFactory; the build then executes the steps and records their results.
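For a typical compiled project, such a factory might look like the following sketch (the repository URL is a placeholder; the steps simply mirror the checkout, configure, make, and make check sequence described above):
from buildbot.process.factory import BuildFactory
from buildbot.steps.source import SVN
from buildbot.steps.shell import ShellCommand

f = BuildFactory()
f.addStep(SVN(svnurl="svn://svn.example.org/myproject/trunk"))  # checkout
f.addStep(ShellCommand(command=["./configure"]))                # configure
f.addStep(ShellCommand(command=["make"]))                       # compile
f.addStep(ShellCommand(command=["make", "check"]))              # test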
Buildbot has a somewhat limited awareness of users. It assumes the world consists of a set of developers, each of whom can be described by a couple of simple attributes. These developers make changes to the source code, causing builds which may succeed or fail.
Each developer is primarily known through the source control system. Each Change object that arrives is tagged with a who field that typically gives the account name (on the repository machine) of the user responsible for that change. This string is the primary key by which the User is known, and is displayed on the HTML status pages and in each Build's “blamelist”.
To do more with the User than just refer to them, this username needs to be mapped into an address of some sort. The responsibility for this mapping is left up to the status module which needs the address. The core code knows nothing about email addresses or IRC nicknames, just user names.
Each change has a single user who is responsible for it. Most builds have a set of changes: the build generally represents the first time these changes have been built and tested by the Buildbot. The build has a “blamelist” that is the union of the users responsible for all the build's changes.
The build provides a list of users who are interested in the build – the “interested users”. Usually this is equal to the blamelist, but may also be expanded, e.g., to include the current build sheriff or a module's maintainer.
If desired, the buildbot can notify the interested users until the problem is resolved.
The buildbot.status.mail.MailNotifier class (see MailNotifier) provides a status target which can send email about the results of each build. It accepts a static list of email addresses to which each message should be delivered, but it can also be configured to send mail to the Build's Interested Users. To do this, it needs a way to convert User names into email addresses.
For many VC systems, the User Name is actually an account name on the system which hosts the repository. As such, turning the name into an email address is a simple matter of appending “@repositoryhost.com”. Some projects use other kinds of mappings (for example the preferred email address may be at “project.org” despite the repository host being named “cvs.project.org”), and some VC systems have full separation between the concept of a user and that of an account on the repository host (like Perforce). Some systems (like Git) put a full contact email address in every change.
To convert these names to addresses, the MailNotifier uses an EmailLookup object. This provides a .getAddress method which accepts a name and (eventually) returns an address. The default MailNotifier module provides an EmailLookup which simply appends a static string, configurable when the notifier is created. To create more complex behaviors (perhaps using an LDAP lookup, or using “finger” on a central host to determine a preferred address for the developer), provide a different object as the lookup argument.
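As a sketch of such a lookup object (the class name and the address table are hypothetical; only the getAddress contract comes from the text above), a static-map lookup might look like:
from zope.interface import implements
from twisted.internet import defer
from buildbot import interfaces
from buildbot.status.mail import MailNotifier

class MapLookup:
    implements(interfaces.IEmailLookup)
    def __init__(self, addresses):
        self.addresses = addresses
    def getAddress(self, name):
        # eventually returns the address, or None if the user is unknown
        return defer.succeed(self.addresses.get(name))

mn = MailNotifier(fromaddr="buildbot@example.org",
                  lookup=MapLookup({'warner': 'warner@project.org'}))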
In the future, when the Problem mechanism has been set up, the Buildbot will need to send mail to arbitrary Users. It will do this by locating a MailNotifier-like object among all the buildmaster's status targets, and asking it to send messages to various Users. This means the User-to-address mapping only has to be set up once, in your MailNotifier, and every email message the buildbot emits will take advantage of it.
Like MailNotifier, the buildbot.status.words.IRC class provides a status target which can announce the results of each build. It also provides an interactive interface by responding to online queries posted in the channel or sent as private messages.
In the future, the buildbot can be configured to map User names to IRC nicknames, to watch for the recent presence of these nicknames, and to deliver build status messages to the interested parties. Like MailNotifier does for email addresses, the IRC object will have an IRCLookup which is responsible for nicknames. The mapping can be set up statically, or it can be updated by online users themselves (by claiming a username with some kind of “buildbot: i am user warner” commands).
Once the mapping is established, the rest of the buildbot can ask the IRC object to send messages to various users. It can report on the likelihood that the user saw the given message (based upon how long the user has been inactive on the channel), which might prompt the Problem Hassler logic to send them an email message instead.
The Buildbot also offers a desktop status client interface which can display real-time build status in a GUI panel on the developer's desktop.
Each build has a set of “Build Properties”, which can be used by its BuildSteps to modify their actions. These properties, in the form of key-value pairs, provide a general framework for dynamically altering the behavior of a build based on its circumstances.
Properties come from a number of places: the global configuration (the c['properties'] key), the Scheduler that started the build, the buildslave the build runs on, the Build itself (for example, got_revision set by a source step), and individual BuildSteps.
Properties are very flexible, and can be used to implement all manner of functionality. Here are some examples:
Most Source steps record the revision that they checked out in the got_revision property. A later step could use this property to specify the name of a fully-built tarball, dropped in an easily-accessible directory for later testing.
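For instance (a sketch; the tarball name and the particular ShellCommand are invented for illustration), a later step could interpolate got_revision with WithProperties:
from buildbot.steps.shell import ShellCommand
from buildbot.process.properties import WithProperties

f.addStep(ShellCommand(
    command=["tar", "czf",
             WithProperties("myproject-%(got_revision)s.tar.gz"),
             "build/"]))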
Some projects want to perform nightly builds as well as building in response to committed changes. Such a project would run two schedulers, both pointing to the same set of builders, but could provide an is_nightly property so that steps can distinguish the nightly builds, perhaps to run more resource-intensive tests.
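A sketch of such a setup (scheduler and builder names are hypothetical; both scheduler classes accept a properties dictionary):
from buildbot.scheduler import Scheduler, Nightly

c['schedulers'] = [
    # triggered by committed changes: no is_nightly property
    Scheduler(name="on-commit", branch=None, treeStableTimer=300,
              builderNames=["full"]),
    # triggered at 3am: marks its builds so steps can tell them apart
    Nightly(name="nightly", branch=None, hour=3,
            builderNames=["full"],
            properties={'is_nightly': True}),
]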
Some projects have different build processes on different systems. Rather than create a build factory for each slave, the steps can use buildslave properties to identify the unique aspects of each slave and adapt the build process dynamically.
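For instance (a sketch; the slave names, passwords, and the 'cc' property are invented), each slave can carry a property describing its toolchain, and a single factory can interpolate it:
from buildbot.buildslave import BuildSlave
from buildbot.steps.shell import ShellCommand
from buildbot.process.properties import WithProperties

c['slaves'] = [
    BuildSlave("bot-linux", "sekrit", properties={'cc': 'gcc'}),
    BuildSlave("bot-osx", "sekrit", properties={'cc': 'clang'}),
]
# the same factory serves both slaves; the compiler is chosen per-slave
f.addStep(ShellCommand(command=["make", WithProperties("CC=%(cc)s")]))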
The buildbot's behavior is defined by the “config file”, which normally lives in the master.cfg file in the buildmaster's base directory (but this can be changed with an option to the buildbot create-master command). This file completely specifies which Builders are to be run, which slaves they should use, how Changes should be tracked, and where the status information is to be sent. The buildmaster's buildbot.tac file names the base directory; everything else comes from the config file.
A sample config file was installed for you when you created the buildmaster, but you will need to edit it before your buildbot will do anything useful.
This chapter gives an overview of the format of this file and the various sections in it. You will need to read the later chapters to understand how to fill in each section properly.
The config file is, fundamentally, just a piece of Python code which defines a dictionary named BuildmasterConfig, with a number of keys that are treated specially. You don't need to know Python to do basic configuration, though; you can just copy the syntax of the sample file. If you are comfortable writing Python code, however, you can use all the power of a full programming language to achieve more complicated configurations.
The BuildmasterConfig name is the only one which matters: all other names defined during the execution of the file are discarded. When parsing the config file, the Buildmaster generally compares the old configuration with the new one and performs the minimum set of actions necessary to bring the buildbot up to date: Builders which are not changed are left untouched, and Builders which are modified get to keep their old event history.
The beginning of the master.cfg file typically starts with something like:
BuildmasterConfig = c = {}
Therefore a config key of change_source will usually appear in master.cfg as c['change_source'].
See the Configuration Index for a full list of BuildmasterConfig keys.
Python comments start with a hash character (“#”), tuples are defined with (parenthesis, pairs), and lists (arrays) are defined with [square, brackets]. Tuples and lists are mostly interchangeable. Dictionaries (data structures which map “keys” to “values”) are defined with curly braces: {'key1': 'value1', 'key2': 'value2'}. Function calls (and object instantiation) can use named parameters, like w = html.Waterfall(http_port=8010).
The config file starts with a series of import statements, which make various kinds of Steps and Status targets available for later use. The main BuildmasterConfig dictionary is created, then it is populated with a variety of keys, described section-by-section in subsequent chapters.
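A skeletal master.cfg illustrating that shape might look like the following sketch (every value here is a placeholder; each key is explained in its own chapter):
# master.cfg (sketch)
from buildbot.buildslave import BuildSlave

c = BuildmasterConfig = {}
c['slaves'] = [BuildSlave("bot1", "sekrit")]
c['change_source'] = []   # see Change Sources
c['schedulers'] = []      # see Schedulers
c['builders'] = []        # see Builders
c['status'] = []          # see Status Targets
c['slavePortnum'] = 9989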
The following symbols are automatically available for use in the configuration file.
basedir
    the buildmaster's base directory; the config file itself is read from os.path.expanduser(os.path.join(basedir, 'master.cfg')).
The config file is only read at specific points in time. It is first read when the buildmaster is launched. Once it is running, there are various ways to ask it to reload the config file.
If you are on the system hosting the buildmaster, you can send a SIGHUP signal to it; the buildbot tool has a shortcut for this:
buildbot reconfig BASEDIR
This command will show you all of the lines from twistd.log that relate to the reconfiguration. If there are any problems during the config-file reload, they will be displayed in these lines.
Buildbot's reconfiguration system is fragile for a few difficult-to-fix reasons.
The debug tool (buildbot debugclient --master HOST:PORT) has a “Reload .cfg” button which will also trigger a reload. In the future, there will be other ways to accomplish this step (probably a password-protected button on the web page, as well as a privileged IRC command).
When reloading the config file, the buildmaster will endeavor to change as little as possible about the running system. For example, although old status targets may be shut down and new ones started up, any status targets that were not changed since the last time the config file was read will be left running and untouched. Likewise any Builders which have not been changed will be left running. If a Builder is modified (say, the build process is changed) while a Build is currently running, that Build will keep running with the old process until it completes. Any previously queued Builds (or Builds which get queued after the reconfig) will use the new process.
To verify that the config file is well-formed and contains no deprecated or invalid elements, use the “checkconfig” command, passing it either a master directory or a config file.
% buildbot checkconfig master.cfg
Config file is good!
# or
% buildbot checkconfig /tmp/masterdir
Config file is good!
If the config file has deprecated features (perhaps because you've upgraded the buildmaster and need to update the config file to match), they will be announced by checkconfig. In this case, the config file will work, but you should really remove the deprecated items and use the recommended replacements instead:
% buildbot checkconfig master.cfg
/usr/lib/python2.4/site-packages/buildbot/master.py:559: DeprecationWarning:
  c['sources'] is deprecated as of 0.7.6 and will be removed by 0.8.0.
  Please use c['change_source'] instead.
  warnings.warn(m, DeprecationWarning)
Config file is good!
If the config file is simply broken, that will be caught too:
% buildbot checkconfig master.cfg
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/buildbot/scripts/runner.py", line 834, in doCheckConfig
    ConfigLoader(configFile)
  File "/usr/lib/python2.4/site-packages/buildbot/scripts/checkconfig.py", line 31, in __init__
    self.loadConfig(configFile)
  File "/usr/lib/python2.4/site-packages/buildbot/master.py", line 480, in loadConfig
    exec f in localDict
  File "/home/warner/BuildBot/master/foolscap/master.cfg", line 90, in ?
    c[bogus] = "stuff"
NameError: name 'bogus' is not defined
The keys in this section affect the operations of the buildmaster globally.
Buildbot requires a connection to a database to maintain certain state information, such as tracking pending build requests. By default this is stored in a sqlite file called 'state.sqlite' in the base directory of your master. This can be overridden with the db_url parameter.
This parameter is of the form:
driver://[username:password@]host:port/database[?args]
For sqlite databases, since there is no host and port, relative paths are specified with sqlite:/// and absolute paths with sqlite:////. For example:
c['db_url'] = "sqlite:///state.sqlite"
c['db_url'] = "mysql://user:pass@somehost.com/database_name?max_idle=300"
The max_idle argument for MySQL connections should be set to something less than the wait_timeout configured for your server. This ensures that connections are closed and re-opened after the configured amount of idle time. If you see errors such as _mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away'), this means your max_idle setting is probably too high. show global variables like 'wait_timeout'; will show what the currently configured wait_timeout is on your MySQL server.
Normally buildbot operates using a single master process that uses the configured database to save state.
It is possible to configure buildbot to have multiple master processes that share state in the same database. This has been well tested using a MySQL database. There are several benefits of Multi-master mode:
State that is shared in the database includes:
Because of this shared state, you are strongly encouraged to:
One suggested configuration is to have one buildbot master configured with just the scheduler and change sources; and then other masters configured with just the builders.
To enable multi-master mode in this configuration, you will need to set the multiMaster option so that buildbot doesn't warn about missing schedulers or builders. You will also need to set db_poll_interval so that the masters with only builders check the database for new build requests at the configured interval.
# Enable multiMaster mode; disables warnings about unknown builders and
# schedulers
c['multiMaster'] = True
# Check for new build requests every 60 seconds
c['db_poll_interval'] = 60
There are a couple of basic settings that you use to tell the buildbot what it is working on. This information is used by status reporters to let users find out more about the codebase being exercised by this particular Buildbot installation.
Note that these parameters were added long before Buildbot became able to build multiple projects in a single buildmaster, and thus assume that there is only one project. While the configuration parameter names may be confusing, a suitable choice of name and URL should help users avoid any confusion.
c['projectName'] = "Buildbot"
c['projectURL'] = "http://buildbot.sourceforge.net/"
c['buildbotURL'] = "http://localhost:8010/"
projectName is a short string that will be used to describe the project that this buildbot is working on. For example, it is used as the title of the waterfall HTML page.
projectURL is a string that gives a URL for the project as a whole. HTML status displays will show projectName as a link to projectURL, to provide a link from buildbot HTML pages to your project's home page.
The buildbotURL string should point to the location where the buildbot's internal web server is visible. This typically uses the port number set when you create the Waterfall object: the buildbot needs your help to figure out a suitable externally-visible host name.
When status notices are sent to users (either by email or over IRC), buildbotURL will be used to create a URL to the specific build or problem that they are being notified about. It will also be made available to queriers (over IRC) who want to find out where to get more information about this buildbot.
c['logCompressionLimit'] = 16384
c['logCompressionMethod'] = 'gz'
c['logMaxSize'] = 1024*1024 # 1M
c['logMaxTailSize'] = 32768
The logCompressionLimit enables compression of build logs on disk for logs that are bigger than the given size, or disables that completely if set to False. The default value is 4k, which should be a reasonable default on most file systems. This setting has no impact on status plugins, and merely affects the required disk space on the master for build logs.
The logCompressionMethod controls what type of compression is used for build logs. The default is 'bz2', the other valid option is 'gz'. 'bz2' offers better compression at the expense of more CPU time.
The logMaxSize parameter sets an upper limit (in bytes) to how large logs from an individual build step can be. The default value is None, meaning no upper limit to the log size. Any output exceeding logMaxSize will be truncated, and a message to this effect will be added to the log's HEADER channel.
If logMaxSize is set, and the output from a step exceeds the maximum, the logMaxTailSize parameter controls how much of the end of the build log will be kept. The effect of setting this parameter is that the log will contain the first logMaxSize bytes and the last logMaxTailSize bytes of output. Don't set this value too high, as the tail of the log is kept in memory.
c['changeHorizon'] = 200
c['buildHorizon'] = 100
c['eventHorizon'] = 50
c['logHorizon'] = 40
c['buildCacheSize'] = 15
c['changeCacheSize'] = 10000
Buildbot stores historical information on disk in the form of "Pickle" files and compressed logfiles. In a large installation, these can quickly consume disk space, yet in many cases developers never consult this historical information.
The c['changeHorizon'] key determines how many changes the master will keep a record of. One place these changes are displayed is on the waterfall page. This parameter defaults to 0, which means keep all changes indefinitely.
The buildHorizon specifies the minimum number of builds for each builder which should be kept on disk. The eventHorizon specifies the minimum number of events to keep – events mostly describe connections and disconnections of slaves, and are seldom helpful to developers. The logHorizon gives the minimum number of builds for which logs should be maintained; this parameter must be less than buildHorizon. Builds older than logHorizon but not older than buildHorizon will maintain their overall status and the status of each step, but the logfiles will be deleted.
The buildCacheSize gives the number of builds for each builder which are cached in memory. This number should be larger than the number of builds required for commonly-used status displays (the waterfall or grid views), so that those displays do not miss the cache on a refresh.
Finally, the changeCacheSize gives the number of changes to cache in memory. This should be larger than the number of changes that typically arrive in the span of a few minutes, otherwise your schedulers will be reloading changes from the database every time they run. For distributed version control systems, like git or hg, several thousand changes may arrive at once, so setting changeCacheSize to something like 10,000 isn't unreasonable.
By default, buildbot merges BuildRequests that have compatible SourceStamps.
This can be disabled for any particular Builder by passing mergeRequests=False to the BuilderConfig definition, see Builders.
For example:
c['builders'] = [
    BuilderConfig(name='test-i386',
                  slavename='bot-i386',
                  builddir='test-i386',
                  factory=f,
                  mergeRequests=False),
]
For more precise control, this behaviour can be customized with the buildmaster's c['mergeRequests'] configuration key. This key specifies a function which is called with three arguments: a Builder and two BuildRequest objects. It should return true if the requests can be merged. For example:
def mergeRequests(builder, req1, req2):
    """Don't merge buildrequest at all"""
    return False
c['mergeRequests'] = mergeRequests
In many cases, the details of the SourceStamps and BuildRequests are important. In this example, only BuildRequests with the same "reason" are merged; thus developers forcing builds for different reasons will see distinct builds.
def mergeRequests(builder, req1, req2):
    if req1.source.canBeMergedWith(req2.source) and req1.reason == req2.reason:
        return True
    return False
c['mergeRequests'] = mergeRequests
By default, buildbot will attempt to start builds on builders in order from the builder with the oldest pending request to the newest. This behaviour can be customized with the c['prioritizeBuilders'] configuration key.
This key specifies a function which is called with two arguments: a BuildMaster and a list of Builder objects. It should return a list of Builder objects in the desired order. It may also remove items from the list if builds should not be started on those builders.
This parameter controls the order in which builders are activated. It does not affect the order in which a builder processes the build requests in its queue. For that purpose, see Prioritizing Builds.
def prioritizeBuilders(buildmaster, builders):
    """Prioritize builders. 'finalRelease' builds have the highest
    priority, so they should be built before running tests, or
    creating builds."""
    builderPriorities = {
        "finalRelease": 0,
        "test": 1,
        "build": 2,
    }
    builders.sort(key=lambda b: builderPriorities.get(b.name, 0))
    return builders

c['prioritizeBuilders'] = prioritizeBuilders
c['slavePortnum'] = 10000
The buildmaster will listen on a TCP port of your choosing for connections from buildslaves. It can also use this port for connections from remote Change Sources, status clients, and debug tools. This port should be visible to the outside world, and you'll need to tell your buildslave admins about your choice.
It does not matter which port you pick, as long it is externally visible, however you should probably use something larger than 1024, since most operating systems don't allow non-root processes to bind to low-numbered ports. If your buildmaster is behind a firewall or a NAT box of some sort, you may have to configure your firewall to permit inbound connections to this port.
c['slavePortnum'] is a strports specification string, defined in the twisted.application.strports module (try pydoc twisted.application.strports to get documentation on the format). This means that you can have the buildmaster listen on a localhost-only port by doing:
c['slavePortnum'] = "tcp:10000:interface=127.0.0.1"
This might be useful if you only run buildslaves on the same machine, and they are all configured to contact the buildmaster at localhost:10000.
The 'properties' configuration key defines a dictionary of properties that will be available to all builds started by the buildmaster:
c['properties'] = {
    'Widget-version' : '1.2',
    'release-stage' : 'alpha'
}
If you set c['debugPassword'], then you can connect to the buildmaster with the diagnostic tool launched by buildbot debugclient MASTER:PORT. From this tool, you can reload the config file, manually force builds, and inject changes, which may be useful for testing your buildmaster without actually committing changes to your repository (or before you have the Change Sources set up). The debug tool uses the same port number as the slaves do: c['slavePortnum'], and is authenticated with this password.
c['debugPassword'] = "debugpassword"
If you set c['manhole'] to an instance of one of the classes in buildbot.manhole, you can telnet or ssh into the buildmaster and get an interactive Python shell, which may be useful for debugging buildbot internals. It is probably only useful for buildbot developers. It exposes full access to the buildmaster's account (including the ability to modify and delete files), so it should not be enabled with a weak or easily guessable password.
There are three separate Manhole classes. Two of them use SSH, one uses unencrypted telnet. Two of them use a username+password combination to grant access, one of them uses an SSH-style authorized_keys file which contains a list of ssh public keys.
manhole.AuthorizedKeysManhole – SSH, authenticated with an authorized_keys file
manhole.PasswordManhole – SSH, authenticated with a username and password
manhole.TelnetManhole – unencrypted telnet, authenticated with a username and password
# some examples:
from buildbot import manhole
c['manhole'] = manhole.AuthorizedKeysManhole(1234, "authorized_keys")
c['manhole'] = manhole.PasswordManhole(1234, "alice", "mysecretpassword")
c['manhole'] = manhole.TelnetManhole(1234, "bob", "snoop_my_password_please")
The Manhole instance can be configured to listen on a specific port. You may wish to have this listening port bind to the loopback interface (sometimes known as “lo0”, “localhost”, or 127.0.0.1) to restrict access to clients which are running on the same host.
from buildbot.manhole import PasswordManhole
c['manhole'] = PasswordManhole("tcp:9999:interface=127.0.0.1", "admin", "passwd")
To have the Manhole listen on all interfaces, use "tcp:9999" or simply 9999. This port specification uses twisted.application.strports, so you can make it listen on SSL or even UNIX-domain sockets if you want.
Note that using any Manhole requires that the TwistedConch package be installed, and that you be using Twisted version 2.0 or later.
The buildmaster's SSH server will use a different host key than the normal sshd running on a typical unix host. This will cause the ssh client to complain about a “host key mismatch”, because it does not realize there are two separate servers running on the same host. To avoid this, use a clause like the following in your .ssh/config file:
Host remotehost-buildbot
  HostName remotehost
  HostKeyAlias remotehost-buildbot
  Port 9999
  # use 'user' if you use PasswordManhole and your name is not 'admin'.
  # if you use AuthorizedKeysManhole, this probably doesn't matter.
  User admin
A Version Control System maintains a source tree, and tells the buildmaster when it changes. The first step of each Build is typically to acquire a copy of some version of this tree.
This chapter describes how the Buildbot learns about what Changes have occurred. For more information on VC systems and Changes, see Version Control Systems.
Changes can be provided by a variety of ChangeSource types, although any given project will typically have only a single ChangeSource active. This section provides a description of all available ChangeSource types and explains how to set up each of them.
In general, each Buildmaster watches a single source tree. It is possible to work around this, but true support for multi-tree builds remains elusive.
There are a variety of ChangeSources available, some of which are meant to be used in conjunction with other tools to deliver Change events from the VC repository to the buildmaster.
As a quick guide, here is a list of VC systems and the ChangeSources that might be useful with them. All of these ChangeSources are in the buildbot.changes module. The contrib/ scripts mentioned below are available on github - see Contrib Scripts.
NOTE: The CVS change sources FreshCVSSource, FCMaildirSource, BonsaiMaildirSource and SyncmailMaildirSource are flagged as deprecated. If you are still using them, please send mail to the buildbot list.
CVS
- mail.CVSMaildirSource (watching a maildir that receives emails sent by the contrib/buildbot_cvs_mail.py script)
- pb.PBChangeSource (listening for connections from buildbot sendchange run in a loginfo script)
- pb.PBChangeSource (listening for connections from a long-running contrib/viewcvspoll.py polling process which examines the ViewCVS database directly)
- Change Hooks in WebStatus (status.web.change_hook)

SVN
- pb.PBChangeSource (listening for connections from contrib/svn_buildbot.py run in a postcommit script)
- pb.PBChangeSource (listening for connections from a long-running contrib/svn_watcher.py or contrib/svnpoller.py polling process)
- Change Hooks in WebStatus (status.web.change_hook)
- contrib/googlecode_atom.py's GoogleCodeAtomPoller (polling the commit feed for a GoogleCode SVN repository)

Darcs
- pb.PBChangeSource (listening for connections from contrib/darcs_buildbot.py in a commit script)
- Change Hooks in WebStatus (status.web.change_hook)

Mercurial
- pb.PBChangeSource (listening for connections from contrib/hg_buildbot.py run in an 'incoming' hook)
- pb.PBChangeSource (listening for connections from buildbot/changes/hgbuildbot.py run as an in-process 'changegroup' hook)
- Change Hooks in WebStatus (status.web.change_hook)
- contrib/googlecode_atom.py's GoogleCodeAtomPoller (polling the commit feed for a GoogleCode Mercurial repository)

Bzr (the newer Bazaar)
- pb.PBChangeSource (listening for connections from contrib/bzr_buildbot.py run in a post-change-branch-tip or commit hook)
- contrib/bzr_buildbot.py's BzrPoller (polling the Bzr repository)
- Change Hooks in WebStatus (status.web.change_hook)

Git
- pb.PBChangeSource (listening for connections from contrib/git_buildbot.py run in the post-receive hook)
- Change Hooks in WebStatus (status.web.change_hook)
- gitpoller.GitPoller (polling a remote git repository)
All VC systems can be driven by a PBChangeSource, and the buildbot sendchange tool run from some form of commit script. If you write an email parsing function, they can also all be driven by a suitable MaildirSource. Additionally, handlers for web-based notification (i.e. from github) can be used with WebStatus' change_hook module. The interface is simple, so adding your own handlers (and sharing!) should be a breeze.
The master.cfg configuration file has a dictionary key named BuildmasterConfig['change_source'], which holds the active IChangeSource object. The config file will typically create an object from one of the classes described below and stuff it into this key.
Each buildmaster typically has just a single ChangeSource, since it is only watching a single source tree. But if, for some reason, you need multiple sources, just set c['change_source'] to a list of ChangeSources.
s = FreshCVSSourceNewcred(host="host", port=4519,
                          user="alice", passwd="secret",
                          prefix="Twisted")
c['change_source'] = [s]
Each source tree has a nominal top. Each Change has a list of filenames, which are all relative to this top location. The ChangeSource is responsible for doing whatever is necessary to accomplish this. Most sources have a prefix argument: a partial pathname which is stripped from the front of all filenames provided to that ChangeSource. Files which are outside this sub-tree are ignored by the changesource: it does not generate Changes for those files.
ChangeSources will, in general, automatically provide the proper 'repository' attribute for any changes they produce. For systems which operate on URL-like specifiers, this is a repository URL. Other ChangeSources adapt the concept as necessary.
Many ChangeSources allow you to specify a project, as well. This attribute is useful when building from several distinct codebases in the same buildmaster: the project string can serve to differentiate the different codebases. Schedulers can filter on project, so you can configure different builders to run for each project.
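For instance (a sketch; the repository URLs and project names are placeholders), two pollers can label their changes with different projects:
from buildbot.changes.svnpoller import SVNPoller
c['change_source'] = [
    SVNPoller("svn://svn.example.org/proj-a/trunk", project="proj-a"),
    SVNPoller("svn://svn.example.org/proj-b/trunk", project="proj-b"),
]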
The CVSToys package provides a server which runs on the machine that hosts the CVS repository it watches. It has a variety of ways to distribute commit notifications, and offers a flexible regexp-based way to filter out uninteresting changes. One of the notification options is named PBService and works by listening on a TCP port for clients. These clients subscribe to hear about commit notifications.
The buildmaster has a CVSToys-compatible PBService client built in. There are two versions of it, one for old versions of CVSToys (1.0.9 and earlier, FreshCVSSourceOldcred) which used the oldcred authentication framework, and one for newer versions (1.0.10 and later) which use newcred. Both are classes in the buildbot.changes.freshcvs package.
FreshCVSSource objects are created with the following parameters:

host and port
    these specify where the CVSToys server can be reached.

user and passwd
    these specify the login information to use when connecting to the server (the default for both is “freshcvs”). These must match the server's values, which are defined in the freshCfg configuration file (which lives in the CVSROOT directory of the repository).

prefix
    the prefix to strip from the front of all filenames provided by this ChangeSource.
’To set up the freshCVS server, add a statement like the following to your freshCfg file:
pb = ConfigurationSet([
    (None, None, None, PBService(userpass=('foo', 'bar'), port=4519)),
    ])
This will announce all changes to a client which connects to port 4519 using a username of 'foo' and a password of 'bar'.
Then add a clause like this to your buildmaster's master.cfg:
c['change_source'] = FreshCVSSource("cvs.example.com", 4519,
                                    "foo", "bar",
                                    prefix="glib/")
where "cvs.example.com" is the host that is running the FreshCVS daemon, and "glib" is the top-level directory (relative to the repository's root) where all your source code lives. Most projects keep one or more projects in the same repository (along with CVSROOT/ to hold admin files like loginfo and freshCfg); the prefix= argument tells the buildmaster to ignore everything outside that directory, and to strip that common prefix from all pathnames it handles.
Many projects publish information about changes to their source tree by sending an email message out to a mailing list, frequently named PROJECT-commits or PROJECT-changes. Each message usually contains a description of the change (who made the change, which files were affected) and sometimes a copy of the diff. Humans can subscribe to this list to stay informed about what's happening to the source tree.
The Buildbot can also be subscribed to a -commits mailing list, and can trigger builds in response to Changes that it hears about. The buildmaster admin needs to arrange for these email messages to arrive in a place where the buildmaster can find them, and configure the buildmaster to parse the messages correctly. Once that is in place, the email parser will create Change objects and deliver them to the Schedulers (see see Schedulers) just like any other ChangeSource.
There are two components to setting up an email-based ChangeSource. The first is to route the email messages to the buildmaster, which is done by dropping them into a “maildir”. The second is to actually parse the messages, which is highly dependent upon the tool that was used to create them. Each VC system has a collection of favorite change-emailing tools, and each has a slightly different format, so each has a different parsing function. There is a separate ChangeSource variant for each parsing function.
Once you've chosen a maildir location and a parsing function, create the change source and put it in c['change_source']:
from buildbot.changes.mail import SyncmailMaildirSource
c['change_source'] = SyncmailMaildirSource("~/maildir-buildbot",
                                           prefix="/trunk/")
The recommended way to install the buildbot is to create a dedicated account for the buildmaster. If you do this, the account will probably have a distinct email address (perhaps buildmaster@example.org). Then just arrange for this account's email to be delivered to a suitable maildir (described in the next section).
If the buildbot does not have its own account, “extension addresses” can be used to distinguish between email intended for the buildmaster and email intended for the rest of the account. In most modern MTAs, the e.g. foo@example.org account has control over every email address at example.org which begins with "foo", such that email addressed to account-foo@example.org can be delivered to a different destination than account-bar@example.org. qmail does this by using separate .qmail files for the two destinations (.qmail-foo and .qmail-bar, with .qmail controlling the base address and .qmail-default controlling all other extensions). Other MTAs have similar mechanisms.
Thus you can assign an extension address like foo-buildmaster@example.org to the buildmaster, and retain foo@example.org for your own use.
A “maildir” is a simple directory structure originally developed for qmail that allows safe atomic update without locking. Create a base directory with three subdirectories: “new”, “tmp”, and “cur”. When messages arrive, they are put into a uniquely-named file (using pids, timestamps, and random numbers) in “tmp”. When the file is complete, it is atomically renamed into “new”. Eventually the buildmaster notices the file in “new”, reads and parses the contents, then moves it into “cur”. A cronjob can be used to delete files in “cur” at leisure.
Maildirs are frequently created with the maildirmake tool, but a simple mkdir -p ~/MAILDIR/{cur,new,tmp} is pretty much equivalent.
Many modern MTAs can deliver directly to maildirs. The usual .forward or .procmailrc syntax is to name the base directory with a trailing slash, so something like ~/MAILDIR/. qmail and postfix are maildir-capable MTAs, and procmail is a maildir-capable MDA (Mail Delivery Agent).
Here is an example procmail config, located in ~/.procmailrc:
# .procmailrc
# routes incoming mail to appropriate mailboxes
PATH=/usr/bin:/usr/local/bin
MAILDIR=$HOME/Mail
LOGFILE=.procmail_log
SHELL=/bin/sh

:0
*
new
If procmail is not set up on a system-wide basis, then the following one-line .forward file will invoke it.
!/usr/bin/procmail
For MTAs which cannot put files into maildirs directly, the safecat tool can be executed from a .forward file to accomplish the same thing.
The Buildmaster uses the linux DNotify facility to receive immediate notification when the maildir's “new” directory has changed. When this facility is not available, it polls the directory for new messages, every 10 seconds by default.
The second component to setting up an email-based ChangeSource is to parse the actual notices. This is highly dependent upon the VC system and commit script in use.
A couple of common tools used to create these change emails, along with the buildbot tools to parse them, are:
NOTE: The CVS change sources FCMaildirSource, BonsaiMaildirSource and SyncmailMaildirSource are flagged as deprecated. If you are still using them, please send mail to the buildbot list.
The following sections describe the parsers available for each of these tools.
Most of these parsers accept a prefix= argument, which is used to limit the set of files that the buildmaster pays attention to. This is most useful for systems like CVS and SVN which put multiple projects in a single repository (or use repository names to indicate branches). Each filename that appears in the email is tested against the prefix: if the filename does not start with the prefix, the file is ignored. If the filename does start with the prefix, that prefix is stripped from the filename before any further processing is done. Thus the prefix usually ends with a slash.
This parser works with the buildbot_cvs_mail.py script in the contrib directory.
The script sends an email containing all the files submitted in one directory. It is invoked by using the CVSROOT/loginfo facility.
The Buildbot's CVSMaildirSource knows how to parse these messages and turn them into Change objects. It takes two parameters, the directory name of the maildir root, and an optional function to create a URL for each file. The function takes three parameters:
file   - file name
oldRev - old revision of the file
newRev - new revision of the file
It must return, oddly enough, a url for the file in question. For example:
def fileToUrl(file, oldRev, newRev):
    return 'http://example.com/cgi-bin/cvsweb.cgi/' + file + '?rev=' + newRev

from buildbot.changes.mail import CVSMaildirSource
c['change_source'] = CVSMaildirSource("/home/buildbot/Mail", urlmaker=fileToUrl)
CVS must be configured to invoke the buildbot_cvs_mail.py script when files are checked in. This is done via the CVS loginfo configuration file.
To update this, first do:
cvs checkout CVSROOT
cd to the CVSROOT directory and edit the file loginfo, adding a line like:
SomeModule /cvsroot/CVSROOT/buildbot_cvs_mail.py --cvsroot :ext:example.com:/cvsroot -e buildbot -P SomeModule %{sVv}
NOTE: For CVS version 1.12.x, the '--path %p' option is required, because versions 1.11.x and 1.12.x report the directory path differently.
The above example puts the buildbot_cvs_mail.py script under /cvsroot/CVSROOT, but it can be anywhere. Run the script with --help to see all the options.
At the very least, the options -e (email) and -P (project) should be specified. The line must end with %{sVv}; this is expanded to the files that were modified. Additional entries can be added to support more modules.
The following is an abbreviated form of buildbot_cvs_mail.py --help:
Usage:
    buildbot-cvs-mail [options] %{sVv}

Where options are:

  --category=category
  -C
      Category for change. This becomes the Change.category attribute.
      This may not make sense to specify it here, as category is meant
      to distinguish the different types of bots inside a same project,
      such as "test", "docs", "full"

  --cvsroot=<path>
  -c
      CVSROOT for use by buildbot slaves to checkout code.
      This becomes the Change.repository attribute.
      Example: :ext:myhost:/cvsroot

  --email=email
  -e email
      Email address of the buildbot.

  --fromhost=hostname
  -f hostname
      The hostname that email messages appear to be coming from. The
      From: header of the outgoing message will look like user@hostname.
      By default, hostname is the machine's fully qualified domain name.

  --help / -h
      Print this text.

  -m hostname
  --mailhost=hostname
      The hostname of an available SMTP server. The default is
      'localhost'.

  --mailport=port
      The port number of SMTP server. The default is '25'.

  --quiet / -q
      Don't print as much status to stdout.

  --path=path
  -p path
      The path for the files in this update. This comes from the %p
      parameter in loginfo for CVS version 1.12.x. Do not use this
      for CVS version 1.11.x

  --project=project
  -P project
      The project for the source. Use the CVS module being modified.
      This becomes the Change.project attribute.

  -R ADDR
  --reply-to=ADDR
      Add a "Reply-To: ADDR" header to the email message.

  -t
  --testing
      Construct message and send to stdout for testing

The rest of the command line arguments are:

  %{sVv}
      CVS %{sVv} loginfo expansion. When invoked by CVS, this will be
      a single string containing the files that are changing.
See Deprecated.
http://twistedmatrix.com/users/acapnotic/wares/code/CVSToys/
This parser works with the CVSToys MailNotification action, which will send email to a list of recipients for each commit. This tends to work better than using /bin/mail from within the CVSROOT/loginfo file directly, as CVSToys will batch together all files changed during the same CVS invocation, and can provide more information (like creating a ViewCVS URL for each file changed).
The Buildbot's FCMaildirSource knows how to parse these CVSToys messages and turn them into Change objects. It can be given two parameters: the directory name of the maildir root, and the prefix to strip.
from buildbot.changes.mail import FCMaildirSource
c['change_source'] = FCMaildirSource("~/maildir-buildbot")
See Deprecated.
http://sourceforge.net/projects/cvs-syncmail
SyncmailMaildirSource knows how to parse the message format used by the CVS “syncmail” script.
from buildbot.changes.mail import SyncmailMaildirSource
c['change_source'] = SyncmailMaildirSource("~/maildir-buildbot")
See Deprecated.
http://www.mozilla.org/bonsai.html
BonsaiMaildirSource parses messages sent out by Bonsai, the CVS tree-management system built by Mozilla.
from buildbot.changes.mail import BonsaiMaildirSource
c['change_source'] = BonsaiMaildirSource("~/maildir-buildbot")
SVNCommitEmailMaildirSource parses messages sent out by the commit-email.pl script, which is included in the Subversion distribution.
It does not currently handle branches: all of the Change objects that it creates will be associated with the default (i.e. trunk) branch.
from buildbot.changes.mail import SVNCommitEmailMaildirSource
c['change_source'] = SVNCommitEmailMaildirSource("~/maildir-buildbot")
BzrLaunchpadEmailMaildirSource parses the mails that are sent to addresses that subscribe to branch revision notifications for a bzr branch hosted on Launchpad.
The branch name defaults to lp:<Launchpad path>, for example lp:~maria-captains/maria/5.1.
If only a single branch is used, the default branch name can be changed by setting defaultBranch.
For multiple branches, pass a dictionary as the value of the branchMap option to map specific repository paths to specific branch names (see example below). The leading lp: prefix of the path is optional.
The prefix option is not supported (it is silently ignored). Use the branchMap and defaultBranch instead to assign changes to branches (and just do not subscribe the buildbot to branches that are not of interest).
The revision number is obtained from the email text. The bzr revision id is not available in the mails sent by Launchpad. However, it is possible to set the bzr append_revisions_only option for public shared repositories to avoid new pushes of merges changing the meaning of old revision numbers.
from buildbot.changes.mail import BzrLaunchpadEmailMaildirSource
bm = { 'lp:~maria-captains/maria/5.1' : '5.1',
       'lp:~maria-captains/maria/6.0' : '6.0' }
c['change_source'] = BzrLaunchpadEmailMaildirSource("~/maildir-buildbot", branchMap = bm)
The last kind of ChangeSource actually listens on a TCP port for clients to connect and push change notices into the Buildmaster. This is used by the built-in buildbot sendchange notification tool, as well as the VC-specific contrib/svn_buildbot.py, contrib/arch_buildbot.py, contrib/hg_buildbot.py tools, and the buildbot.changes.hgbuildbot hook. These tools are run by the repository (in a commit hook script), and connect to the buildmaster directly each time a file is committed. This is also useful for creating new kinds of change sources that work on a push model instead of some kind of subscription scheme, for example a script which is run out of an email .forward file.
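For example (a sketch; the master address and change details are placeholders), a commit script might push a change with:
buildbot sendchange --master buildmaster.example.org:9989 \
    --username alice --branch trunk --revision 1234 \
    src/foo.c src/foo.h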
This ChangeSource can be configured to listen on its own TCP port, or it can share the port that the buildmaster is already using for the buildslaves to connect. (This is possible because the PBChangeSource uses the same protocol as the buildslaves, and they can be distinguished by the username attribute used when the initial connection is established.) It might be useful to have it listen on a different port if, for example, you wanted to establish different firewall rules for that port. You could allow only the SVN repository machine access to the PBChangeSource port, while allowing only the buildslave machines access to the slave port. Or you could just expose one port and run everything over it. Note: this feature is not yet implemented; the PBChangeSource will always share the slave port, and will always have a user name of change and a passwd of changepw. These limitations will be removed in the future.
The PBChangeSource is created with the following arguments, all of which are optional:

port
    which port to listen on. If None (which is the default), it shares the port used for buildslave connections. (Not Implemented; always set to None.)

user and passwd
    the credentials clients must present when connecting. They default to change and changepw. (Not Implemented; user is currently always set to change, and passwd is always set to changepw.)

prefix
    the prefix to be found and stripped from filenames delivered over the connection. This is useful for changes coming from version control systems that represent branches as parent directories within the repository (like SVN and Perforce). Use a prefix of 'trunk/' or 'project/branches/foobranch/' to only follow one branch and to get correct tree-relative filenames. Without a prefix, the PBChangeSource will probably deliver Changes with filenames like trunk/foo.c instead of just foo.c. Of course this also depends upon the tool sending the Changes in (like buildbot sendchange) and what filenames it is delivering: that tool may be filtering and stripping prefixes at the sending end.
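A minimal configuration might therefore look like this sketch (the prefix value is just an example):
from buildbot.changes.pb import PBChangeSource
c['change_source'] = PBChangeSource(prefix="trunk/")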
The following hooks are useful for sending changes to a PBChangeSource:
Since Mercurial is written in python, the hook script can invoke Buildbot's sendchange function directly, rather than having to spawn an external process. This function delivers the same sort of changes as buildbot sendchange and the various hook scripts in contrib/, so you'll need to add a pb.PBChangeSource to your buildmaster to receive these changes.
To set this up, first choose a Mercurial repository that represents your central “official” source tree. This will be the same repository that your buildslaves will eventually pull from. Install Buildbot on the machine that hosts this repository, using the same version of python as Mercurial is using (so that the Mercurial hook can import code from buildbot). Then add the following to the .hg/hgrc file in that repository, replacing the buildmaster hostname/portnumber as appropriate for your buildbot:
[hooks]
changegroup.buildbot = python:buildbot.changes.hgbuildbot.hook

[hgbuildbot]
master = buildmaster.example.org:9987
(Note that Mercurial lets you define multiple changegroup hooks by giving them distinct names, like changegroup.foo and changegroup.bar, which is why we use changegroup.buildbot in this example. There is nothing magical about the “buildbot” suffix in the hook name. The [hgbuildbot] section is special, however, as it is the only section that the buildbot hook pays attention to.)
Also note that this runs as a changegroup hook, rather than as an incoming hook. The changegroup hook is run with multiple revisions at a time (say, if multiple revisions are being pushed to this repository in a single hg push command), whereas the incoming hook is run with just one revision at a time. The hgbuildbot.hook function will only work with the changegroup hook.
The [hgbuildbot] section has two other parameters that you might specify, both of which control the name of the branch that is attached to the changes coming from this hook.
One common branch naming policy for Mercurial repositories is to use it just like Darcs: each branch goes into a separate repository, and all the branches for a single project share a common parent directory. For example, you might have /var/repos/PROJECT/trunk/ and /var/repos/PROJECT/release. To use this style, use the branchtype = dirname setting, which simply uses the last component of the repository's enclosing directory as the branch name:
[hgbuildbot]
master = buildmaster.example.org:9987
branchtype = dirname
Another approach is to use Mercurial's built-in branches (the kind created with hg branch and listed with hg branches). This feature associates persistent names with particular lines of descent within a single repository. (Note that the buildbot source.Mercurial checkout step does not yet support this kind of branch.) To have the commit hook deliver this sort of branch name with the Change object, use branchtype = inrepo:
[hgbuildbot]
master = buildmaster.example.org:9987
branchtype = inrepo
Finally, if you want to simply specify the branchname directly, for all changes, use branch = BRANCHNAME. This overrides branchtype:
[hgbuildbot]
master = buildmaster.example.org:9987
branch = trunk
If you use branch= like this, you'll need to put a separate .hgrc in each repository. If you use branchtype=, you may be able to use the same .hgrc for all your repositories, stored in ~/.hgrc or /etc/mercurial/hgrc.
Because Twisted needs to install signal handlers, and some web servers strictly forbid that, the fork parameter in the [hgbuildbot] section will instruct Mercurial to fork before sending the change request. Since the forked process is short-lived, it is considered safe to disable the signal restriction in Apache with WSGIRestrictSignal Off. Refer to the documentation of your web server for other ways to do the same.
The category parameter sets the category for any changes generated from the hook. Likewise, the project parameter sets the project. Changes' repository attributes are formed from the Mercurial repo path by stripping as many leading slashes as given by the strip parameter.
Bzr is also written in Python, and the Bzr hook depends on Twisted to send the changes.
To install, put contrib/bzr_buildbot.py (see Contrib Scripts) in a bzr plugins directory (e.g., ~/.bazaar/plugins). Then, in one of your bazaar conf files (e.g., ~/.bazaar/locations.conf), set the location you want to connect with buildbot with these keys:
- buildbot_on
- buildbot_server
- buildbot_port
- buildbot_pqm
- buildbot_dry_run
- buildbot_send_branch_name
When buildbot no longer has a hardcoded password, it will be a configuration option here as well.
Here's a simple example that you might have in your ~/.bazaar/locations.conf:
[chroot-*:///var/local/myrepo/mybranch]
buildbot_on = change
buildbot_server = localhost
The P4Source periodically polls a Perforce depot for changes. It accepts the following arguments:
p4base
    the root of the depot view to watch, e.g. '//depot/project/'.

p4port
    the Perforce server to connect to, as host:port.

p4user
    the Perforce user name.

p4passwd
    the Perforce password.

p4bin
    an optional string specifying the Perforce command-line client to use (defaults to 'p4').

split_file
    a function that maps a pathname, without the leading p4base, to a (branch, filename) tuple. The default just returns (None, branchfile), which effectively disables branch support. You should supply a function which understands your repository structure.

pollinterval
    the interval, in seconds, between polls.

histmax
    the maximum number of changes to inspect at each poll.
This configuration uses the P4PORT, P4USER, and P4PASSWD specified in the buildmaster's environment. It watches a project in which the branch name is simply the next path component, and the file is all path components after.
from buildbot.changes import p4poller
s = p4poller.P4Source(p4base='//depot/project/',
                      split_file=lambda branchfile: branchfile.split('/',1),
                      )
c['change_source'] = s
The BonsaiPoller periodically polls a Bonsai server. This is a CGI script accessed through a web server that provides information about a CVS tree, for example the Mozilla bonsai server at http://bonsai.mozilla.org. Bonsai servers are usable by both humans and machines. In this case, the buildbot's change source forms a query which asks about any files in the specified branch which have changed since the last query.
Please take a look at the BonsaiPoller docstring for details about the arguments it accepts.
The buildbot.changes.svnpoller.SVNPoller is a ChangeSource which periodically polls a Subversion repository for new revisions, by running the svn log command in a subshell. It can watch a single branch or multiple branches.
SVNPoller accepts the following arguments:
svnurl
    the base URL of the tree to watch, like svn://svn.twistedmatrix.com/svn/Twisted/trunk, or http://divmod.org/svn/Divmod/, or even file:///home/svn/Repository/ProjectA/branches/1.5/. This must include the access scheme, the location of the repository (both the hostname for remote ones, and any additional directory names necessary to get to the repository), and the sub-path within the repository's virtual filesystem for the project and branch of interest. The SVNPoller will only pay attention to files inside the subdirectory specified by the complete svnurl.
split_file
    a function to convert the pathnames seen by the SVNPoller into (branch, filename) tuples. This function must accept a single string and return a two-entry tuple. There are a few utility functions in buildbot.changes.svnpoller that can be used as a split_file function; see below for details. The default value always returns (None, path), which indicates that all files are on the trunk. Subclasses of SVNPoller can override the split_file method instead of using the split_file= argument, as sketched below.
project
    an optional string, the name of the project to attach to this SVNPoller. This will then be set in any changes generated by the SVNPoller, and can be used in a Change Filter for triggering particular builders.
svnuser
    an optional string. If set, a --username argument will be added to all svn commands. Use this if you have to authenticate to the svn server before you can do svn info or svn log commands.
svnpasswd
    like svnuser, this will cause a --password argument to be passed to all svn commands.
pollinterval
    the polling interval, in seconds.
histmax
    the maximum number of changes to look back for at each poll. The SVNPoller asks for the last HISTMAX changes and looks through them for any ones it does not already know about. If more than HISTMAX revisions have been committed since the last poll, older changes will be silently ignored. Larger values of histmax will cause more time and memory to be consumed on each poll attempt. histmax defaults to 100.
svnbin
    this controls the svn executable to use. If subversion is installed in a weird place on your system (outside of the buildmaster's $PATH), use this to tell SVNPoller where to find it. The default value of “svn” will almost always be sufficient.
revlinktmpl
    an optional string template used to create links to revisions, with %s replaced by the revision number. For example, 'http://myserver/websvn/revision.php?rev=%s' could be used to cause revision links to be created to a websvn repository viewer.
cachepath
    if set, the pathname of a cache file that the SVNPoller will use to store its state between restarts of the buildmaster.
Each source file that is tracked by a Subversion repository has a fully-qualified SVN URL in the following form: (REPOURL)(PROJECT-plus-BRANCH)(FILEPATH). When you create the SVNPoller, you give it a svnurl value that includes all of the REPOURL and possibly some portion of the PROJECT-plus-BRANCH string. The SVNPoller is responsible for producing Changes that contain a branch name and a FILEPATH (which is relative to the top of a checked-out tree). The details of how these strings are split up depend upon how your repository names its branches.
One common layout is to have all the various projects that share a repository get a single top-level directory each. Then under a given project's directory, you get two subdirectories, one named “trunk” and another named “branches”. Under “branches” you have a bunch of other directories, one per branch, with names like “1.5.x” and “testing”. It is also common to see directories like “tags” and “releases” next to “branches” and “trunk”.
For example, the Twisted project has a subversion server on “svn.twistedmatrix.com” that hosts several sub-projects. The repository is available through a SCHEME of “svn:”. The primary sub-project is Twisted, of course, with a repository root of “svn://svn.twistedmatrix.com/svn/Twisted”. Another sub-project is Informant, with a root of “svn://svn.twistedmatrix.com/svn/Informant”, etc. Inside any checked-out Twisted tree, there is a file named bin/trial (which is used to run unit test suites).
The trunk for Twisted is in
“svn://svn.twistedmatrix.com/svn/Twisted/trunk”, and the
fully-qualified SVN URL for the trunk version of trial
would be
“svn://svn.twistedmatrix.com/svn/Twisted/trunk/bin/trial”. The same
SVNURL for that file on a branch named “1.5.x” would be
“svn://svn.twistedmatrix.com/svn/Twisted/branches/1.5.x/bin/trial”.
To set up a SVNPoller
that watches the Twisted trunk (and
nothing else), we would use the following:
from buildbot.changes.svnpoller import SVNPoller
c['change_source'] = SVNPoller("svn://svn.twistedmatrix.com/svn/Twisted/trunk")
In this case, every Change that our SVNPoller produces will have
.branch=None, to indicate that the Change is on the trunk. No other
sub-projects or branches will be tracked.
If we want our ChangeSource to follow multiple branches, we have to do
two things. First we have to change our svnurl=
argument to
watch more than just “.../Twisted/trunk”. We will set it to
“.../Twisted” so that we'll see both the trunk and all the branches.
Second, we have to tell SVNPoller
how to split the
(PROJECT-plus-BRANCH)(FILEPATH) strings it gets from the repository
out into (BRANCH) and (FILEPATH) pairs.
We do the latter by providing a “split_file” function. This function
is responsible for splitting something like “branches/1.5.x/bin/trial”
into branch=”branches/1.5.x” and filepath=”bin/trial”. This function
is always given a string that names a file relative to the
subdirectory pointed to by the SVNPoller's svnurl= argument. It is
expected to return a (BRANCHNAME, FILEPATH) tuple (in which FILEPATH
is relative to the branch indicated), or None to indicate that the
file is outside any project of interest.
(note that we want to see “branches/1.5.x” rather than just “1.5.x” because when we perform the SVN checkout, we will probably append the branch name to the baseURL, which requires that we keep the “branches” component in there. Other VC schemes use a different approach towards branches and may not require this artifact.)
If your repository uses this same PROJECT/BRANCH/FILEPATH naming scheme, the following function will work:
def split_file_branches(path):
    pieces = path.split('/')
    if pieces[0] == 'trunk':
        return (None, '/'.join(pieces[1:]))
    elif pieces[0] == 'branches':
        return ('/'.join(pieces[0:2]), '/'.join(pieces[2:]))
    else:
        return None
This function is provided as
buildbot.changes.svnpoller.split_file_branches
for your
convenience. So to have our Twisted-watching SVNPoller
follow
multiple branches, we would use this:
from buildbot.changes.svnpoller import SVNPoller, split_file_branches
c['change_source'] = SVNPoller("svn://svn.twistedmatrix.com/svn/Twisted",
                               split_file=split_file_branches)
Changes for all sorts of branches (with names like “branches/1.5.x”, and None to indicate the trunk) will be delivered to the Schedulers. Each Scheduler is then free to use or ignore each branch as it sees fit.
Another common way to organize a Subversion repository is to put the branch name at the top, and the projects underneath. This is especially frequent when there are a number of related sub-projects that all get released in a group.
For example, Divmod.org hosts a project named “Nevow” as well as one named “Quotient”. In a checked-out Nevow tree there is a directory named “formless” that contains a python source file named “webform.py”. This repository is accessible via webdav (and thus uses an “http:” scheme) through the divmod.org hostname. There are many branches in this repository, and they use a (BRANCHNAME)/(PROJECT) naming policy.
The fully-qualified SVN URL for the trunk version of webform.py is
http://divmod.org/svn/Divmod/trunk/Nevow/formless/webform.py. You can
do an svn co with that URL and get a copy of the latest version. The
1.5.x branch version of this file would have a URL of
http://divmod.org/svn/Divmod/branches/1.5.x/Nevow/formless/webform.py.
The whole Nevow trunk would be checked out with
http://divmod.org/svn/Divmod/trunk/Nevow, while the Quotient trunk
would be checked out using http://divmod.org/svn/Divmod/trunk/Quotient.
Now suppose we want to have an SVNPoller
that only cares about
the Nevow trunk. This case looks just like the PROJECT/BRANCH layout
described earlier:
from buildbot.changes.svnpoller import SVNPoller
c['change_source'] = SVNPoller("http://divmod.org/svn/Divmod/trunk/Nevow")
But what happens when we want to track multiple Nevow branches? We
have to point our svnurl=
high enough to see all those
branches, but we also don't want to include Quotient changes (since
we're only building Nevow). To accomplish this, we must rely upon the
split_file
function to help us tell the difference between
files that belong to Nevow and those that belong to Quotient, as well
as figuring out which branch each one is on.
from buildbot.changes.svnpoller import SVNPoller
c['change_source'] = SVNPoller("http://divmod.org/svn/Divmod",
                               split_file=my_file_splitter)
The my_file_splitter function will be called with repository-relative pathnames like:
trunk/Nevow/formless/webform.py
This is a Nevow file on the trunk, so my_file_splitter should return a branch of None and a filepath of "formless/webform.py".
branches/1.5.x/Nevow/formless/webform.py
This is a Nevow file on a branch, so it should return a branch of "branches/1.5.x" and a filepath of "formless/webform.py".
trunk/Quotient/setup.py
This is a Quotient file, not a Nevow file, so my_file_splitter should return None.
branches/1.5.x/Quotient/setup.py
This is also a Quotient file, so it should likewise be ignored by returning None.
def my_file_splitter(path):
    pieces = path.split('/')
    if pieces[0] == 'trunk':
        branch = None
        pieces.pop(0) # remove 'trunk'
    elif pieces[0] == 'branches':
        pieces.pop(0) # remove 'branches'
        # grab branch name
        branch = 'branches/' + pieces.pop(0)
    else:
        return None # something weird
    projectname = pieces.pop(0)
    if projectname != 'Nevow':
        return None # wrong project
    return (branch, '/'.join(pieces))
If you cannot insert a Bzr hook in the server, you can use the Bzr Poller. To
use, put contrib/bzr_buildbot.py
(see Contrib Scripts) somewhere
that your buildbot configuration can import it. Even putting it in the same
directory as the master.cfg should work. Install the poller in the buildbot
configuration as with any other change source. Minimally, provide a URL that
you want to poll (bzr://, bzr+ssh://, or lp:), though make sure the buildbot
user has the necessary privileges. You may also want to specify these optional
values.
poll_interval
The number of seconds to wait between polls; defaults to ten minutes.
branch_name
Any value to be used as the branch name. Defaults to None, or specify a string, or specify the constants from bzr_buildbot.py SHORT or FULL to get the short branch name or full branch address.
blame_merge_author
Normally, the user that commits the revision is the user that is responsible for the change. When run in a pull-based workflow, the user that commits is the Patch Queue Manager, and the user that committed the merged, parent revision is responsible for the change; set this value to True to blame that author instead.
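As a sketch, assuming contrib/bzr_buildbot.py is importable as bzr_buildbot and that its poller class is named BzrPoller taking the URL as its first argument (the branch URL below is hypothetical), the configuration might look like:
import bzr_buildbot
c['change_source'] = bzr_buildbot.BzrPoller(
    'lp:~someproject/someproject/trunk',  # branch URL to poll (hypothetical)
    poll_interval=5*60,                   # poll every five minutes
    branch_name=bzr_buildbot.SHORT)       # report the short branch name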
If you cannot take advantage of post-receive hooks as provided by
contrib/git_buildbot.py
for example, then you can use the GitPoller
.
The GitPoller
periodically fetches from a remote git repository and
processes any changes. It requires its own working directory for operation, which
can be specified via the workdir
property. By default a temporary directory will
be used.
The GitPoller
has only been tested with git
version
1.7.0.3
- your mileage may vary.
GitPoller accepts the following arguments:
repourl
The git-url that describes the remote repository, e.g. git@example.com:foobaz/myrepo.git (see the git fetch help for more info on git-url formats).
branch
The branch to fetch; defaults to 'master'.
workdir
The directory where the poller keeps its local repository; defaults to <tempdir>/gitpoller_work.
pollinterval
The interval, in seconds, between polls.
gitbin
The path to the git executable; defaults to just 'git'.
category
Set the category to be used for the changes produced by the GitPoller. This will then be set in any changes generated by the GitPoller, and can be used in a Change Filter for triggering particular builders.
project
Set the name of the project to be used for the GitPoller. This will then be set in any changes generated by the GitPoller, and can be used in a Change Filter for triggering particular builders.
usetimestamps
Parse each revision's commit timestamp (the default, True), or ignore it in favor of the current time (so recently processed commits appear together in the waterfall page).
from buildbot.changes.gitpoller import GitPoller
c['change_source'] = GitPoller('git@example.com:foobaz/myrepo.git',
                               branch='great_new_feature')
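A fuller sketch exercising more of the arguments above (the working directory, interval, and project name here are hypothetical):
from buildbot.changes.gitpoller import GitPoller
c['change_source'] = GitPoller('git@example.com:foobaz/myrepo.git',
                               branch='great_new_feature',
                               workdir='/home/buildbot/gitpoller_work',
                               pollinterval=5*60,
                               project='foobaz')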
Buildbot already provides a web frontend, and that frontend can easily be used to receive HTTP push notifications of commits from services like GitHub. See Change Hooks for more information.
The GoogleCodeAtomPoller
periodically polls a Google Code Project's
commit feed for changes. It works on both SVN and Mercurial repositories. Branches
are not understood (yet). It accepts the following arguments:
feedurl
The commit Atom feed URL of the Google Code project (required).
pollinterval
The polling frequency for the feed, in seconds (optional).
To poll the Ostinato project's commit feed every 3 hours, use:
from contrib.googlecode_atom import GoogleCodeAtomPoller
poller = GoogleCodeAtomPoller(
    feedurl="http://code.google.com/feeds/p/ostinato/hgchanges/basic",
    pollinterval=10800)
c['change_source'] = [ poller ]
buildbot.changes.bonsaipoller.BonsaiPoller: BonsaiPoller
buildbot.changes.freshcvs.FreshCVSSource: CVSToys - PBService
buildbot.changes.freshcvs.FreshCVSSourceOldcred: CVSToys - PBService
buildbot.changes.gitpoller.GitPoller: GitPoller
buildbot.changes.mail.BonsaiMaildirSource: BonsaiMaildirSource
buildbot.changes.mail.BzrLaunchpadEmailMaildirSource: BzrLaunchpadEmailMaildirSource
buildbot.changes.mail.CVSMaildirSource: CVSMaildirSource
buildbot.changes.mail.FCMaildirSource: FCMaildirSource
buildbot.changes.mail.SVNCommitEmailMaildirSource: SVNCommitEmailMaildirSource
buildbot.changes.mail.SyncmailMaildirSource: SyncmailMaildirSource
buildbot.changes.p4poller.P4Source: P4Source
buildbot.changes.pb.PBChangeSource: PBChangeSource
buildbot.changes.svnpoller.SVNPoller: SVNPoller

Schedulers are responsible for initiating builds on builders.
Some schedulers listen for changes from ChangeSources and generate build sets in response to these changes. Others generate build sets without changes, based on other events in the buildmaster.
c['schedulers']
is a list of Scheduler instances, each
of which causes builds to be started on a particular set of
Builders. The two basic Scheduler classes you are likely to start
with are Scheduler
and Periodic
, but you can write a
customized subclass to implement more complicated build scheduling.
Scheduler arguments should always be specified by name (as keyword arguments), to allow for future expansion:
sched = Scheduler(name="quick", builderNames=['lin', 'win'])
All schedulers have several arguments in common:
name
Each Scheduler must have a unique name. This is used in status displays, and is also available in the build property scheduler.
builderNames
The set of builders which this scheduler should trigger, specified as a list of names (strings).
properties
A dictionary specifying properties that will be transmitted to all builds started by this scheduler. The owner property may be of particular interest, as its contents (as a list) will be added to the list of "interested users" (see Doing Things With Users) for each triggered build. For example:
sched = Scheduler(..., properties = { 'owner' : [ 'zorro@company.com', 'silver@company.com' ] })
The remaining subsections represent a catalog of the available Scheduler types.
All these Schedulers are defined in modules under buildbot.schedulers
,
and the docstrings there are the best source of documentation on the arguments
taken by each one.
Several schedulers perform filtering on an incoming set of changes. The filter can most generically be specified as a ChangeFilter. Set up a ChangeFilter like this:
from buildbot.schedulers.filter import ChangeFilter
my_filter = ChangeFilter(
    project_re="^baseproduct/.*",
    branch="devel")
and then add it to a scheduler with the change_filter
parameter:
sch = SomeSchedulerClass(..., change_filter=my_filter)
There are four attributes of changes on which you can filter: the project, the repository, the branch, and the category. Note that for the branch attribute, the default branch (trunk) is denoted by None.
For each attribute, the filter can look for a single, specific value:
my_filter = ChangeFilter(project = 'myproject')
or accept any of a set of values:
my_filter = ChangeFilter(project = ['myproject', 'jimsproject'])
To apply a regular expression, use the attribute name with a suffix of _re:
my_filter = ChangeFilter(category_re = '.*deve.*')
# or, to use regular expression flags:
import re
my_filter = ChangeFilter(category_re = re.compile('.*deve.*', re.I))
For anything more complicated, define a Python function to recognize the strings you want:
def my_branch_fn(branch):
    return branch in branches_to_build and branch not in branches_to_ignore
my_filter = ChangeFilter(branch_fn = my_branch_fn)
The special argument filter_fn
can be used to specify a function that is
given the entire Change object, and returns a boolean.
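For instance, here is a minimal filter_fn sketch (the branch prefix and file suffix are hypothetical) that accepts only changes on release branches which touch C source files:
def release_c_changes(change):
    # a Change carries a .branch string and a .files list, among others
    return (change.branch is not None
            and change.branch.startswith('branches/release')
            and any(f.endswith('.c') for f in change.files))
my_filter = ChangeFilter(filter_fn = release_c_changes)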
A Change passes the filter only if all arguments are satisfied. If no filter object is given to a scheduler, then all changes will be built (subject to any other restrictions the scheduler enforces).
This is the original and still most popular Scheduler class. It follows
exactly one branch, and starts a configurable tree-stable-timer after
each change on that branch. When the timer expires, it starts a build
on some set of Builders. The Scheduler accepts a fileIsImportant
function which can be used to ignore some Changes if they do not
affect any “important” files.
The arguments to this scheduler are:
name
builderNames
properties
The common Scheduler arguments described earlier.
treeStableTimer
The scheduler will wait for this many seconds before starting the build. If new changes are made during this interval, the timer will be restarted, so the build will really be started after a change and then this many seconds of inactivity. If treeStableTimer is None, then a separate build is started immediately for each Change.
fileIsImportant
A callable which takes one argument, a Change instance, and returns True if the change is worth building, and False if it is not. Unimportant Changes are accumulated until the build is triggered by an important change. The default value of None means that all Changes are important.
change_filter
The change filter that will determine which changes are recognized by this scheduler (see Change Filters). Note that this is different from fileIsImportant: if the change filter filters out a Change, then it is completely ignored by the scheduler. If a Change is allowed by the change filter, but is deemed unimportant, then it will not cause builds to start, but will be remembered and shown in status displays.
categories (deprecated; use change_filter)
A list of categories of Changes this scheduler will respond to; Changes in other categories are ignored.
branch (deprecated; use change_filter)
The scheduler will pay attention to this branch, ignoring Changes that occur on other branches. Setting branch equal to the special value of None means it should only pay attention to the default branch. Note that None is a keyword, not a string, so you want to use None and not "None".
Example:
from buildbot.schedulers.basic import Scheduler
quick = Scheduler(name="quick",
                  branch=None,
                  treeStableTimer=60,
                  builderNames=["quick-linux", "quick-netbsd"])
full = Scheduler(name="full",
                 branch=None,
                 treeStableTimer=5*60,
                 builderNames=["full-linux", "full-netbsd", "full-OSX"])
c['schedulers'] = [quick, full]
In this example, the two “quick” builders are triggered 60 seconds after the tree has been changed. The “full” builds do not run quite so quickly (they wait 5 minutes), so hopefully if the quick builds fail due to a missing file or really simple typo, the developer can discover and fix the problem before the full builds are started. Both Schedulers only pay attention to the default branch: any changes on other branches are ignored by these Schedulers. Each Scheduler triggers a different set of Builders, referenced by name.
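As a sketch of the fileIsImportant argument described above (the .txt convention here is hypothetical), a scheduler could treat documentation-only changes as unimportant:
def testFileIsImportant(change):
    # build only if at least one changed file is not documentation
    for name in change.files:
        if not name.endswith('.txt'):
            return True
    return False

sched = Scheduler(name="quick-src", branch=None, treeStableTimer=60,
                  fileIsImportant=testFileIsImportant,
                  builderNames=["quick-linux", "quick-netbsd"])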
This scheduler uses a tree-stable-timer like the default one, but uses a separate timer for each branch.
The arguments to this scheduler are:
name
builderNames
properties
treeStableTimer
These arguments are the same as for the single-branch Scheduler above, except that the tree-stable timer is maintained separately for each branch.
fileIsImportant
A callable which takes one argument, a Change instance, and returns True if the change is worth building, and False if it is not. Unimportant Changes are accumulated until the build is triggered by an important change. The default value of None means that all Changes are important.
change_filter
The change filter that will determine which changes are recognized by this scheduler (see Change Filters). Note that this is different from fileIsImportant: if the change filter filters out a Change, then it is completely ignored by the scheduler. If a Change is allowed by the change filter, but is deemed unimportant, then it will not cause builds to start, but will be remembered and shown in status displays.
branches (deprecated; use change_filter)
The scheduler will pay attention to these branches, ignoring Changes that occur on other branches.
categories (deprecated; use change_filter)
A list of categories of Changes this scheduler will respond to; Changes in other categories are ignored.
It is common to wind up with one kind of build which should only be performed if the same source code was successfully handled by some other kind of build first. An example might be a packaging step: you might only want to produce .deb or RPM packages from a tree that was known to compile successfully and pass all unit tests. You could put the packaging step in the same Build as the compile and testing steps, but there might be other reasons to not do this (in particular you might have several Builders worth of compiles/tests, but only wish to do the packaging once). Another example is if you want to skip the “full” builds after a failing “quick” build of the same source code. Or, if one Build creates a product (like a compiled library) that is used by some other Builder, you'd want to make sure the consuming Build is run after the producing one.
You can use “Dependencies” to express this relationship
to the Buildbot. There is a special kind of Scheduler named
scheduler.Dependent
that will watch an “upstream” Scheduler
for builds to complete successfully (on all of its Builders). Each time
that happens, the same source code (i.e. the same SourceStamp
)
will be used to start a new set of builds, on a different set of
Builders. This “downstream” scheduler doesn't pay attention to
Changes at all. It only pays attention to the upstream scheduler.
If the build fails on any of the Builders in the upstream set,
the downstream builds will not fire. Note that, for SourceStamps
generated by a ChangeSource, the revision
is None, meaning HEAD.
If any changes are committed between the time the upstream scheduler
begins its build and the time the dependent scheduler begins its
build, then those changes will be included in the downstream build.
See Triggerable Scheduler for a more flexible dependency mechanism that can avoid this problem.
The keyword arguments to this scheduler are:
name
builderNames
properties
The common Scheduler arguments described earlier.
upstream
The upstream scheduler to watch. Note that this is an instance, not the name of the scheduler.
Example:
from buildbot.schedulers import basic
tests = basic.Scheduler("just-tests", None, 5*60,
                        ["full-linux", "full-netbsd", "full-OSX"])
package = basic.Dependent(name="build-package",
                          upstream=tests, # <- no quotes!
                          builderNames=["make-tarball", "make-deb", "make-rpm"])
c['schedulers'] = [tests, package]
This simple scheduler just triggers a build every N seconds.
The arguments to this scheduler are:
name
builderNames
properties
The common Scheduler arguments described earlier.
periodicBuildTimer
The time, in seconds, after which to start a build.
Example:
from buildbot.schedulers import timed
nightly = timed.Periodic(name="nightly",
                         builderNames=["full-solaris"],
                         periodicBuildTimer=24*60*60)
c['schedulers'] = [nightly]
The Scheduler in this example just runs the full solaris build once per day. Note that this Scheduler only lets you control the time between builds, not the absolute time-of-day of each Build, so this could easily wind up a “daily” or “every afternoon” scheduler depending upon when it was first activated.
This is a highly configurable periodic build scheduler, which triggers
a build at particular times of day, week, month, or year. The
configuration syntax is very similar to the well-known crontab
format, in which you provide values for minute, hour, day, and month
(some of which can be wildcards), and a build is triggered whenever
the current time matches the given constraints. This can run a build
every night, every morning, every weekend, alternate Thursdays,
on your boss's birthday, etc.
Pass some subset of minute, hour, dayOfMonth, month, and dayOfWeek;
each may be a single number or a list of valid values. The builds
will be triggered whenever the current time matches these values.
Wildcards are represented by a '*' string. All fields default to a
wildcard except 'minute', so with no fields this defaults to a build
every hour, on the hour.
The full list of parameters is:
name
builderNames
properties
The common Scheduler arguments described earlier.
branch
The branch to build when the time comes. Remember that a value of None here means the default branch, and will not match other branches.
minute
hour
dayOfMonth
month
dayOfWeek
The time constraints, as described above: each may be a single number, a list of valid values, or the wildcard '*'. All default to the wildcard except minute.
onlyIfChanged
If True, a build is triggered at the designated time only if changes have arrived since the previous build.
For example, the following master.cfg clause will cause a build to be started every night at 3:00am:
from buildbot.schedulers import timed
s = timed.Nightly(name='nightly',
                  builderNames=['builder1', 'builder2'],
                  hour=3,
                  minute=0)
This scheduler will perform a build each monday morning at 6:23am and again at 8:23am, but only if someone has committed code in the interim:
s = timed.Nightly(name='BeforeWork',
                  builderNames=['builder1'],
                  dayOfWeek=0,
                  hour=[6,8],
                  minute=23,
                  onlyIfChanged=True)
The following runs a build every two hours, using Python's range
function:
s = timed.Nightly(name='every2hours',
                  builderNames=['builder1'],
                  hour=range(0, 24, 2))
Finally, this example will run only on December 24th:
s = timed.Nightly(name='SleighPreflightCheck',
                  builderNames=['flying_circuits', 'radar'],
                  month=12,
                  dayOfMonth=24,
                  hour=12,
                  minute=0)
This scheduler allows developers to use the buildbot try
command to trigger builds of code they have not yet committed. See
try for complete details.
Two implementations are available: Try_Jobdir and Try_Userpass. The
former monitors a job directory, specified by the jobdir parameter,
while the latter listens for PB connections on a specific port, and
authenticates against a userpass list of username/password pairs.
The buildmaster must have a scheduler instance in the config file's
c['schedulers']
list to receive try requests. This lets the
administrator control who may initiate these “trial” builds, which branches
are eligible for trial builds, and which Builders should be used for them.
The scheduler has various means to accept build requests. All of them enforce more security than the usual buildmaster ports do. Any source code being built can be used to compromise the buildslave accounts, but in general that code must be checked out from the VC repository first, so only people with commit privileges can get control of the buildslaves. The usual force-build control channels can waste buildslave time but do not allow arbitrary commands to be executed by people who don't have those commit privileges. However, the source code patch that is provided with the trial build does not have to go through the VC system first, so it is important to make sure these builds cannot be abused by a non-committer to acquire as much control over the buildslaves as a committer has. Ideally, only developers who have commit access to the VC repository would be able to start trial builds, but unfortunately the buildmaster does not, in general, have access to VC system's user list.
As a result, the try scheduler requires a bit more configuration. There are currently two ways to set this up:
The first way uses a command queue directory, the “jobdir”: the
buildbot try command creates a special file containing the source
stamp information and drops it in the jobdir, just like a standard
maildir. When the buildmaster notices the new file, it unpacks the
information inside and starts the builds.
The config file entries used by 'buildbot try' either specify a local queuedir (for which write and mv are used) or a remote one (using scp and ssh).
The advantage of this scheme is that it is quite secure, the disadvantage is that it requires fiddling outside the buildmaster config (to set the permissions on the jobdir correctly). If the buildmaster machine happens to also house the VC repository, then it can be fairly easy to keep the VC userlist in sync with the trial-build userlist. If they are on different machines, this will be much more of a hassle. It may also involve granting developer accounts on a machine that would not otherwise require them.
To implement this, the buildbot try command invokes 'ssh -l username
host buildbot tryserver ARGS', passing the patch contents over stdin.
The arguments must include the inlet directory and the revision
information.
The second way uses username/password pairs: when a developer runs
buildbot try, their machine connects to the buildmaster via PB and
authenticates using that username and password, then sends a PB
command to start the trial build.
The advantage of this scheme is that the entire configuration is performed inside the buildmaster's config file. The disadvantages are that it is less secure (while the “cred” authentication system does not expose the password in plaintext over the wire, it does not offer most of the other security properties that SSH does). In addition, the buildmaster admin is responsible for maintaining the username/password list, adding and deleting entries as developers come and go.
For example, to set up the “jobdir” style of trial build, using a
command queue directory of MASTERDIR/jobdir (and assuming that
all your project developers were members of the developers
unix
group), you would first set up that directory:
mkdir -p MASTERDIR/jobdir MASTERDIR/jobdir/new MASTERDIR/jobdir/cur MASTERDIR/jobdir/tmp
chgrp developers MASTERDIR/jobdir MASTERDIR/jobdir/*
chmod g+rwx,o-rwx MASTERDIR/jobdir MASTERDIR/jobdir/*
and then use the following scheduler in the buildmaster's config file:
from buildbot.schedulers.trysched import Try_Jobdir
s = Try_Jobdir(name="try1",
               builderNames=["full-linux", "full-netbsd", "full-OSX"],
               jobdir="jobdir")
c['schedulers'] = [s]
Note that you must create the jobdir before telling the buildmaster to use this configuration, otherwise you will get an error. Also remember that the buildmaster must be able to read and write to the jobdir as well. Be sure to watch the twistd.log file (see Logfiles) as you start using the jobdir, to make sure the buildmaster is happy with it.
To use the username/password form of authentication, create a
Try_Userpass
instance instead. It takes the same
builderNames
argument as the Try_Jobdir
form, but
accepts an additional port
argument (to specify the TCP port to
listen on) and a userpass
list of username/password pairs to
accept. Remember to use good passwords for this: the security of the
buildslave accounts depends upon it:
from buildbot.schedulers.trysched import Try_Userpass
s = Try_Userpass(name="try2",
                 builderNames=["full-linux", "full-netbsd", "full-OSX"],
                 port=8031,
                 userpass=[("alice","pw1"), ("bob", "pw2")] )
c['schedulers'] = [s]
Like most places in the buildbot, the port
argument takes a
strports specification. See twisted.application.strports
for
details.
The Triggerable
scheduler waits to be triggered by a Trigger
step (see Triggering Schedulers) in another build. That step
can optionally wait for the scheduler's builds to complete. This
provides two advantages over Dependent schedulers. First, the same
scheduler can be triggered from multiple builds. Second, the ability
to wait for a Triggerable's builds to complete provides a form of
"subroutine call", where one or more builds can "call" a scheduler
to perform some work for them, perhaps on other buildslaves.
The parameters are just the basics:
name
builderNames
properties
These are the common Scheduler arguments described earlier.
This class is only useful in conjunction with the Trigger
step.
Here is a fully-worked example:
from buildbot.schedulers import basic, timed, triggerable
from buildbot.process import factory
from buildbot.steps import trigger

checkin = basic.Scheduler(name="checkin",
                          branch=None,
                          treeStableTimer=5*60,
                          builderNames=["checkin"])
nightly = timed.Nightly(name='nightly',
                        builderNames=['nightly'],
                        hour=3,
                        minute=0)

mktarball = triggerable.Triggerable(name="mktarball",
                                    builderNames=["mktarball"])
build = triggerable.Triggerable(name="build-all-platforms",
                                builderNames=["build-all-platforms"])
test = triggerable.Triggerable(name="distributed-test",
                               builderNames=["distributed-test"])
package = triggerable.Triggerable(name="package-all-platforms",
                                  builderNames=["package-all-platforms"])

c['schedulers'] = [mktarball, checkin, nightly, build, test, package]

# on checkin, make a tarball, build it, and test it
checkin_factory = factory.BuildFactory()
checkin_factory.addStep(trigger.Trigger(schedulerNames=['mktarball'],
                                        waitForFinish=True))
checkin_factory.addStep(trigger.Trigger(schedulerNames=['build-all-platforms'],
                                        waitForFinish=True))
checkin_factory.addStep(trigger.Trigger(schedulerNames=['distributed-test'],
                                        waitForFinish=True))

# and every night, make a tarball, build it, and package it
nightly_factory = factory.BuildFactory()
nightly_factory.addStep(trigger.Trigger(schedulerNames=['mktarball'],
                                        waitForFinish=True))
nightly_factory.addStep(trigger.Trigger(schedulerNames=['build-all-platforms'],
                                        waitForFinish=True))
nightly_factory.addStep(trigger.Trigger(schedulerNames=['package-all-platforms'],
                                        waitForFinish=True))
buildbot.schedulers.basic.AnyBranchScheduler: AnyBranchScheduler
buildbot.schedulers.basic.Dependent: Dependent Scheduler
buildbot.schedulers.basic.Scheduler: Scheduler Scheduler
buildbot.schedulers.filter.ChangeFilter: Change Filters
buildbot.schedulers.timed.Nightly: Nightly Scheduler
buildbot.schedulers.timed.Periodic: Periodic Scheduler
buildbot.schedulers.triggerable.Triggerable: Triggerable Scheduler
buildbot.schedulers.trysched.Try_Jobdir: Try Schedulers
buildbot.schedulers.trysched.Try_Userpass: Try Schedulers
The c['slaves']
key is a list of known buildslaves. In the common case,
each buildslave is defined by an instance of the BuildSlave class. It
represents a standard, manually started machine that will try to connect to
the buildbot master as a slave. Contrast these with the "on-demand" latent
buildslaves, such as the Amazon Web Service Elastic Compute Cloud latent
buildslave discussed below.
The BuildSlave class is instantiated with two values: (slavename, slavepassword). These are the same two values that need to be provided to the buildslave administrator when they create the buildslave.
The slavenames must be unique, of course. The password exists to prevent evildoers from interfering with the buildbot by inserting their own (broken) buildslaves into the system and thus displacing the real ones.
Buildslaves with an unrecognized slavename or a non-matching password will be rejected when they attempt to connect, and a message describing the problem will be put in the log file (see Logfiles).
from buildbot.buildslave import BuildSlave
c['slaves'] = [BuildSlave('bot-solaris', 'solarispasswd'),
               BuildSlave('bot-bsd', 'bsdpasswd'),
              ]
BuildSlave
objects can also be created with an optional
properties
argument, a dictionary specifying properties that
will be available to any builds performed on this slave. For example:
from buildbot.buildslave import BuildSlave
c['slaves'] = [BuildSlave('bot-solaris', 'solarispasswd',
                          properties={'os':'solaris'}),
              ]
The BuildSlave
constructor can also take an optional
max_builds
parameter to limit the number of builds that it
will execute simultaneously:
from buildbot.buildslave import BuildSlave
c['slaves'] = [BuildSlave("bot-linux", "linuxpassword", max_builds=2)]
Sometimes, the buildslaves go away. One very common reason for this is when the buildslave process is started once (manually) and left running, but then later the machine reboots and the process is not automatically restarted.
If you'd like to have the administrator of the buildslave (or other
people) be notified by email when the buildslave has been missing for
too long, just add the notify_on_missing=
argument to the
BuildSlave
definition:
c['slaves'] = [BuildSlave('bot-solaris', 'solarispasswd',
                          notify_on_missing="bob@example.com"),
              ]
By default, this will send email when the buildslave has been
disconnected for more than one hour. Only one email per
connection-loss event will be sent. To change the timeout, use
missing_timeout=
and give it a number of seconds (the default
is 3600).
You can have the buildmaster send email to multiple recipients: just provide a list of addresses instead of a single one:
c['slaves'] = [BuildSlave('bot-solaris', 'solarispasswd',
                          notify_on_missing=["bob@example.com",
                                             "alice@example.org"],
                          missing_timeout=300, # notify after 5 minutes
                          ),
              ]
The email sent this way will use a MailNotifier (see MailNotifier) status target, if one is configured. This provides a way for you to control the “from” address of the email, as well as the relayhost (aka “smarthost”) to use as an SMTP server. If no MailNotifier is configured on this buildmaster, the buildslave-missing emails will be sent using a default configuration.
Note that if you want to have a MailNotifier for buildslave-missing emails but not for regular build emails, just create one with builders=[], as follows:
from buildbot.status import mail
m = mail.MailNotifier(fromaddr="buildbot@localhost",
                      builders=[],
                      relayhost="smtp.example.org")
c['status'].append(m)
c['slaves'] = [BuildSlave('bot-solaris', 'solarispasswd',
                          notify_on_missing="bob@example.com"),
              ]
The standard buildbot model has slaves started manually. The previous section described how to configure the master for this approach.
Another approach is to let the buildbot master start slaves when builds are ready, on-demand. Thanks to services such as Amazon Web Services' Elastic Compute Cloud ("AWS EC2"), this is relatively easy to set up, and can be very useful for some situations.
The buildslaves that are started on-demand are called "latent" buildslaves. As of this writing, buildbot ships with an abstract base class for building latent buildslaves, and a concrete implementation for AWS EC2.
AWS EC2 is a web service that allows you to start virtual machines in an Amazon data center. Please see their website for details, including costs. Using the AWS EC2 latent buildslaves involves getting an EC2 account with AWS and setting up payment; customizing one or more EC2 machine images ("AMIs") on your desired operating system(s) and publishing them (privately if needed); and configuring the buildbot master to know how to start your customized images for "substantiating" your latent slaves.
To start off, to use the AWS EC2 latent buildslave, you need to get an AWS developer account and sign up for EC2. These instructions may help you get started:
Now you need to create an AMI and configure the master. You may need to run through this cycle a few times to get it working, but these instructions should get you started.
Creating an AMI is out of the scope of this document. The EC2 Getting Started Guide is a good resource for this task. Here are a few additional hints.
Now let's assume you have an AMI that should work with the EC2LatentBuildSlave. It's now time to set up your buildbot master configuration.
You will need some information from your AWS account: the "Access Key Id" and the "Secret Access Key". If you've built the AMI yourself, you probably already are familiar with these values. If you have not, and someone has given you access to an AMI, these hints may help you find the necessary values:
When creating an EC2LatentBuildSlave in the buildbot master configuration, the first three arguments are required. The name and password are the first two arguments, and work the same as with normal buildslaves. The next argument specifies the type of the EC2 virtual machine (available options as of this writing include "m1.small", "m1.large", "m1.xlarge", "c1.medium", and "c1.xlarge"; see the EC2 documentation for descriptions of these machines).
Here is the simplest example of configuring an EC2 latent buildslave. It specifies all necessary remaining values explicitly in the instantiation.
from buildbot.ec2buildslave import EC2LatentBuildSlave
c['slaves'] = [EC2LatentBuildSlave('bot1', 'sekrit', 'm1.large',
                                   ami='ami-12345',
                                   identifier='publickey',
                                   secret_identifier='privatekey'
                                   )]
The "ami" argument specifies the AMI that the master should start. The "identifier" argument specifies the AWS "Access Key Id," and the "secret_identifier" specifies the AWS "Secret Access Key." Both the AMI and the account information can be specified in alternate ways.
Note that whoever has your identifier and secret_identifier values can request AWS work charged to your account, so these values need to be carefully protected. Another way to specify these access keys is to put them in a separate file. You can then make the access privileges stricter for this separate file, and potentially let more people read your main configuration file.
By default, you can make an .ec2 directory in the home folder of the user running the buildbot master. In that directory, create a file called aws_id. The first line of that file should be your access key id; the second line should be your secret access key id. Then you can instantiate the build slave as follows.
from buildbot.ec2buildslave import EC2LatentBuildSlave
c['slaves'] = [EC2LatentBuildSlave('bot1', 'sekrit', 'm1.large',
                                   ami='ami-12345')]
If you want to put the key information in another file, use the "aws_id_file_path" initialization argument.
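For example, a sketch using a key file kept outside the master's base directory (the path is hypothetical):
from buildbot.ec2buildslave import EC2LatentBuildSlave
c['slaves'] = [EC2LatentBuildSlave('bot1', 'sekrit', 'm1.large',
                                   ami='ami-12345',
                                   aws_id_file_path='/home/buildbot/.ec2/aws_id')]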
Previous examples used a particular AMI. If the Buildbot master will be deployed in a process-controlled environment, it may be convenient to specify the AMI more flexibly. Rather than specifying an individual AMI, specify one or two AMI filters.
In all cases, the AMI that sorts last by its location (the S3 bucket and manifest name) will be preferred.
One available filter is to specify the acceptable AMI owners, by AWS account number (the 12 digit number, usually rendered in AWS with hyphens like "1234-5678-9012", should be entered as an integer).
from buildbot.ec2buildslave import EC2LatentBuildSlave
bot1 = EC2LatentBuildSlave('bot1', 'sekrit', 'm1.large',
                           valid_ami_owners=[11111111111, 22222222222],
                           identifier='publickey',
                           secret_identifier='privatekey'
                           )
The other available filter is to provide a regular expression string that will be matched against each AMI's location (the S3 bucket and manifest name).
from buildbot.ec2buildslave import EC2LatentBuildSlave
bot1 = EC2LatentBuildSlave(
    'bot1', 'sekrit', 'm1.large',
    valid_ami_location_regex=r'buildbot\-.*/image.manifest.xml',
    identifier='publickey', secret_identifier='privatekey')
The regular expression can specify a group, which will be preferred for the sorting. Only the first group is used; subsequent groups are ignored.
from buildbot.ec2buildslave import EC2LatentBuildSlave
bot1 = EC2LatentBuildSlave(
    'bot1', 'sekrit', 'm1.large',
    valid_ami_location_regex=r'buildbot\-.*\-(.*)/image.manifest.xml',
    identifier='publickey', secret_identifier='privatekey')
If the group can be cast to an integer, it will be. This allows 10 to sort after 1, for instance.
from buildbot.ec2buildslave import EC2LatentBuildSlave
bot1 = EC2LatentBuildSlave(
    'bot1', 'sekrit', 'm1.large',
    valid_ami_location_regex=r'buildbot\-.*\-(\d+)/image.manifest.xml',
    identifier='publickey', secret_identifier='privatekey')
In addition to using the password as a handshake between the master and the slave, you may want to use a firewall to assert that only machines from a specific IP can connect as slaves. This is possible with AWS EC2 by using the Elastic IP feature. To configure this, generate an Elastic IP in AWS, and then specify it in your configuration using the "elastic_ip" argument.
from buildbot.ec2buildslave import EC2LatentBuildSlave
c['slaves'] = [EC2LatentBuildSlave('bot1', 'sekrit', 'm1.large',
                                   'ami-12345',
                                   identifier='publickey',
                                   secret_identifier='privatekey',
                                   elastic_ip='208.77.188.166'
                                   )]
The EC2LatentBuildSlave supports all other configuration from the standard BuildSlave. The "missing_timeout" and "notify_on_missing" specify how long to wait for an EC2 instance to attach before considering the attempt to have failed, and email addresses to alert, respectively. "missing_timeout" defaults to 20 minutes.
The "build_wait_timeout" allows you to specify how long an EC2LatentBuildSlave should wait after a build for another build before it shuts down the EC2 instance. It defaults to 10 minutes.
"keypair_name" and "security_name" allow you to specify different names for these AWS EC2 values. They both default to "latent_buildbot_slave".
libvirt is a virtualization API for interacting with the virtualization capabilities of recent versions of Linux and other OSes. It is LGPL and comes with a stable C API, and python bindings.
This means we now have an API which, when tied to buildbot, allows us to have slaves that run under Xen, QEMU, KVM, LXC, OpenVZ, User Mode Linux, VirtualBox and VMWare.
The libvirt code in Buildbot was developed against libvirt 0.7.5 on Ubuntu Lucid. It is used with KVM to test python code on Karmic VM's, but obviously isn't limited to that. Each build is run on a new VM, images are temporary and thrown away after each build.
We won't show you how to set up libvirt as it is quite different on each platform, but there are a few things you should keep in mind.
You need to create a base image for your builds that has everything needed to build your software. You need to configure the base image with a buildbot slave that is configured to connect to the master on boot.
Because this image may need updating a lot, we strongly suggest scripting its creation.
If you want to have multiple slaves using the same base image it can be annoying to duplicate the image just to change the buildbot credentials. One option is to use libvirt's DHCP server to allocate an identity to the slave: DHCP sets a hostname, and the slave takes its identity from that.
Doing all this is really beyond the scope of the manual, but there is a
vmbuilder script and a network.xml file to create such a DHCP server in
contrib/
(see Contrib Scripts) that should get you started:
sudo apt-get install ubuntu-vm-builder
sudo contrib/libvirt/vmbuilder
Should create an ubuntu/ folder with a suitable image in it.
virsh net-define contrib/libvirt/network.xml
virsh net-start buildbot-network
Should set up a KVM compatible libvirt network for your buildbot VM's to run on.
If you want to add a simple on demand VM to your setup, you only need the following. We set the username to minion1, the password to sekrit. The base image is called base_image and a copy of it will be made for the duration of the VM's life. That copy will be thrown away every time a build is complete.
from buildbot.libvirtbuildslave import LibVirtBuildSlave
c['slaves'] = [LibVirtBuildSlave('minion1', 'sekrit',
                                 '/home/buildbot/images/minion1',
                                 '/home/buildbot/images/base_image')]
You can use virt-manager to define 'minion1' with the correct hardware. If you don't, buildbot won't be able to find a VM to start.
LibVirtBuildSlave
accepts the following arguments:
name
The name of the buildslave, as for an ordinary BuildSlave.
password
The password the buildslave will use to connect to the master.
hd_image
The path of the disk image the VM boots from. A copy of base_image is placed here for the lifetime of the VM and thrown away when the build is complete.
base_image
If given, the pristine image that is copied to hd_image before each use, so every build starts from a known state.
xml
If given, a libvirt XML domain description, as used by virsh define. The VM will be created automatically when needed, and destroyed when not needed any longer.
Any latent build slave that interacts with a for-fee service, such as the EC2LatentBuildSlave, brings significant risks. As already identified, the configuration will need access to account information that, if obtained by a criminal, can be used to charge services to your account. Also, bugs in the buildbot software may lead to unnecessary charges. In particular, if the master neglects to shut down an instance for some reason, a virtual machine may be running unnecessarily, charging against your account. Manual and/or automatic (e.g. nagios with a plugin using a library like boto) double-checking may be appropriate.
A comparatively trivial note is that currently if two instances try to attach to the same latent buildslave, it is likely that the system will become confused. This should not occur, unless, for instance, you configure a normal build slave to connect with the authentication of a latent buildbot. If the situation occurs, stop all attached instances and restart the master.
Writing a new latent buildslave should only require subclassing
buildbot.buildslave.AbstractLatentBuildSlave
and implementing
start_instance and stop_instance.
def start_instance(self):
    # responsible for starting instance that will try to connect with this
    # master. Should return deferred. Problems should use an errback. The
    # callback value can be None, or can be an iterable of short strings to
    # include in the "substantiate success" status message, such as
    # identifying the instance that started.
    raise NotImplementedError

def stop_instance(self, fast=False):
    # responsible for shutting down instance. Return a deferred. If `fast`,
    # we're trying to shut the master down, so callback as soon as is safe.
    # Callback value is ignored.
    raise NotImplementedError
See buildbot.ec2buildslave.EC2LatentBuildSlave
for an example, or see the
test example buildbot.test_slaves.FakeLatentBuildSlave
.
The c['builders']
key is a list of objects giving configuration for the
Builders. For more information, See Builder. The class definition for the
builder configuration is in buildbot.config
. In the configuration file,
its use looks like:
from buildbot.config import BuilderConfig
c['builders'] = [
    BuilderConfig(name='quick', slavenames=['bot1', 'bot2'], factory=f_quick),
    BuilderConfig(name='thorough', slavename='bot1', factory=f_thorough),
]
The constructor takes the following keyword arguments:
name
The name of the Builder, which is used in status reports. Each Builder's name must be unique.
slavename
slavenames
These specify the buildslave or buildslaves that will be used by this Builder. All slave names must appear in the c['slaves'] list. Each buildslave can accommodate multiple Builders. The slavenames parameter can be a list of names, while slavename can specify only one slave.
factory
A buildbot.process.factory.BuildFactory instance which controls how the build is performed. Full details appear in their own section, See Build Factories. Parameters like the location of the CVS repository and the compile-time options used for the build are generally provided as arguments to the factory's constructor.
Other optional keys may be set on each Builder:
builddir
Specifies the name of a subdirectory (under the base directory) in which everything related to this builder will be placed on the buildmaster. If not set, it defaults to name with some characters escaped. Each builder must have a unique build directory.
slavebuilddir
Specifies the name of a subdirectory (under the slave's base directory) in which everything related to this builder will be placed on the buildslave. If not set, it defaults to builddir. If a slave is connected to multiple builders that share the same slavebuilddir, make sure the slave is set to run one build at a time or ensure this is fine to run multiple builds from the same directory simultaneously.
category
If provided, this is a string that identifies a category for the builder to be a part of. Status clients can limit themselves to a subset of the available categories.
nextSlave
If provided, this is a function that controls which slave will be assigned future jobs. The function is passed two arguments, the Builder object which is assigning a new job, and a list of BuildSlave objects. The function should return one of the BuildSlave objects, or None if none of the available slaves should be used. A sketch of such a function appears after this list.
nextBuild
If provided, this is a function that controls which build request will be handled next. The function is passed two arguments, the Builder object which is assigning a new job, and a list of BuildRequest objects of pending builds. The function should return one of the BuildRequest objects, or None if none of the pending builds should be started.
locks
A list of Locks (see Interlocks) that should be acquired before starting a Build from this Builder.
env
A dictionary of environment variables to be used by the builds run on this Builder. Variables of the same name passed directly to a ShellCommand will override variables of the same name passed to the Builder.
For example, if you have a pool of identical slaves, it is often easier to manage variables like PATH from Buildbot, rather than manually editing it inside of the slaves' environment.
f = factory.BuildFactory()
f.addStep(ShellCommand(
              command=['bash', './configure']))
f.addStep(Compile())
c['builders'] = [
    BuilderConfig(name='test', factory=f,
                  slavenames=['slave1', 'slave2', 'slave3', 'slave4'],
                  env={'PATH': '/opt/local/bin:/opt/app/bin:/usr/local/bin:/usr/bin'}),
]
mergeRequests
A boolean controlling whether the Builder may merge compatible BuildRequests into a single build. If False, the Builder will never attempt to merge requests.
Merging requests helps to reduce the total number of builds, but loses information about which exact change might have caused a build problem. Requests can only be merged for compatible SourceStamps, for example two Changes that occur on the same branch, or two requests to build 'HEAD' (i.e. the latest checkin) on the same branch.
The buildmaster's c['mergeRequests']
hook function is evaluated
only if the Builder's mergeRequests
key is True, so merging
only takes place if both allow it. See Merging BuildRequests.
properties
A dictionary of build properties specific to this Builder, which will be set on every build it performs.
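As promised above, here is a sketch of a nextSlave function; the random selection strategy is illustrative only:
import random

def pickRandomSlave(builder, available_slaves):
    # choose any available slave at random, or return None to
    # leave the pending build unassigned for now
    if not available_slaves:
        return None
    return random.choice(available_slaves)

c['builders'] = [
    BuilderConfig(name='test', factory=f, nextSlave=pickRandomSlave,
                  slavenames=['slave1', 'slave2', 'slave3', 'slave4']),
]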
The BuilderConfig parameter nextBuild can be used to prioritize build requests within a builder. Note that this is orthogonal to Prioritizing Builders, which controls the order in which builders are called on to start their builds.
def nextBuild(bldr, requests):
    for r in requests:
        if r.source.branch == 'release':
            return r
    return requests[0]

c['builders'] = [
    BuilderConfig(name='test', factory=f, nextBuild=nextBuild,
                  slavenames=['slave1', 'slave2', 'slave3', 'slave4']),
]
Each Builder is equipped with a “build factory”, which is
responsible for producing the actual Build
objects that perform
each build. This factory is created in the configuration file, and
attached to a Builder through the factory
element of its
dictionary.
The standard BuildFactory
object creates Build
objects
by default. These Builds will each execute a collection of BuildSteps
in a fixed sequence. Each step can affect the results of the build,
but in general there is little intelligence to tie the different steps
together.
The steps used by these builds are all subclasses of BuildStep
.
The standard ones provided with Buildbot are documented later,
See Build Steps. You can also write your own subclasses to use in
builds.
The basic behavior of a BuildStep is to run for a while and then stop, possibly invoking some remote commands on the attached build slave, possibly producing a set of log files, and finally finishing with a status that indicates success, warnings, or failure.
A BuildFactory defines the steps that every build will follow. Think of it as
a glorified script. For example, a build which consists of a CVS checkout
followed by a make build
would be constructed as follows:
from buildbot.steps import source, shell
from buildbot.process import factory

f = factory.BuildFactory()
f.addStep(source.CVS(cvsroot=CVSROOT, cvsmodule="project", mode="update"))
f.addStep(shell.Compile(command=["make", "build"]))
It is also possible to pass a list of steps into the
BuildFactory
when it is created. Using addStep
is
usually simpler, but there are cases where it is more convenient to
create the list of steps ahead of time, perhaps using some Python
tricks to generate the steps.
from buildbot.steps import source, shell
from buildbot.process import factory

all_steps = [
    source.CVS(cvsroot=CVSROOT, cvsmodule="project", mode="update"),
    shell.Compile(command=["make", "build"]),
]
f = factory.BuildFactory(all_steps)
Finally, you can also add a sequence of steps all at once:
f.addSteps(all_steps)
useProgress
(defaults to True): if True, the buildmaster keeps track of how long each step takes, so it can provide estimates of how long future builds will take. If builds are not expected to take a consistent amount of time (such as incremental builds in which a random set of files are recompiled each time), this should be set to False to inhibit progress-tracking.
workdir
(defaults to 'build'): the default workdir given to every build step created by this factory. The workdir can be overridden in a build step definition. If this field is set to a string, that string will be used for constructing the workdir (buildslave base + builder builddir + workdir). If this field is set to a callable, the callable will be invoked with the SourceStamp for the build as its argument, and its return value will be used as the workdir. This is useful when multiple repositories submit changes to the same Buildbot: you will likely want a dedicated workdir per repository, since otherwise a source step with mode="update" will fail, as a workdir holding a working copy of repository A cannot be “updated” from repository B. For example:
# pre-repository working directory
def workdir(source_stamp):
    return hashlib.md5(source_stamp.repository).hexdigest()[:8]

build = factory.BuildFactory()
build.workdir = workdir
build.addStep(Git(mode="update"))
...
builders.append({'name': 'mybuilder',
                 'slavename': 'myslave',
                 'builddir': 'mybuilder',
                 'factory': build})

# You'll end up with workdirs like:
#
# Repo1 => <buildslave-base>/mybuilder/a78890ba
# Repo2 => <buildslave-base>/mybuilder/0823ba88
# ...
You could make the workdir() function compute other paths, based on parts of the repo URL in the sourcestamp, or lookup in a lookup table based on repo URL. As long as there is a permanent 1:1 mapping between repos and workdir this will work.
The default BuildFactory
, provided in the
buildbot.process.factory
module, contains an internal list of
“BuildStep specifications”: a list of (step_class, kwargs)
tuples for each. These specification tuples are constructed when the
config file is read, by asking the instances passed to addStep
for their subclass and arguments.
To support config files from buildbot-0.7.5 and earlier,
addStep
also accepts the f.addStep(shell.Compile,
command=["make","build"])
form, although its use is discouraged
because then the Compile
step doesn't get to validate or
complain about its arguments until build time. The modern
pass-by-instance approach allows this validation to occur while the
config file is being loaded, where the admin has a better chance of
noticing problems.
When asked to create a Build, the BuildFactory
puts a copy of
the list of step specifications into the new Build object. When the
Build is actually started, these step specifications are used to
create the actual set of BuildSteps, which are then executed one at a
time. This serves to give each Build an independent copy of each step.
Each step can affect the build process in the following ways:
If the step's haltOnFailure attribute is True, then a failure in the step (i.e. if it completes with a result of FAILURE) will cause the whole build to be terminated immediately: no further steps will be executed, with the exception of steps with alwaysRun set to True. haltOnFailure is useful for setup steps upon which the rest of the build depends: if the CVS checkout or ./configure process fails, there is no point in trying to compile or test the resulting tree.
If the step's alwaysRun attribute is True, then it will always be run, regardless of whether previous steps have failed. This is useful for cleanup steps that should always be run to return the build directory or build slave into a good state.
If the step's flunkOnFailure or flunkOnWarnings flag is set, then a result of FAILURE or WARNINGS will mark the build as a whole as FAILED. However, the remaining steps will still be executed. This is appropriate for things like multiple testing steps: a failure in any one of them will indicate that the build has failed, however it is still useful to run them all to completion.
Similarly, if the step's warnOnFailure or warnOnWarnings flag is set, then a result of FAILURE or WARNINGS will mark the build as having WARNINGS, and the remaining steps will still be executed. This may be appropriate for certain kinds of optional build or test steps. For example, a failure experienced while building documentation files should be made visible with a WARNINGS result but not be serious enough to warrant marking the whole build with a FAILURE.
In addition, each Step produces its own results, may create logfiles, etc. However only the flags described above have any effect on the build as a whole.
The pre-defined BuildSteps like CVS
and Compile
have
reasonably appropriate flags set on them already. For example, without
a source tree there is no point in continuing the build, so the
CVS
class has the haltOnFailure
flag set to True. Look
in buildbot/steps/*.py to see how the other Steps are
marked.
Each Step is created with an additional workdir
argument that
indicates where its actions should take place. This is specified as a
subdirectory of the slave builder's base directory, with a default
value of build
. This is only implemented as a step argument (as
opposed to simply being a part of the base directory) because the
CVS/SVN steps need to perform their checkouts from the parent
directory.
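For example, a step that must run from a subdirectory of the checkout could be given an explicit workdir (the "build/tests" path here is hypothetical):
f.addStep(shell.ShellCommand(command=["make", "test"],
                             workdir="build/tests"))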
GNU Autoconf is a software portability tool, intended to make it possible to write programs in C (and other languages) which will run on a variety of UNIX-like systems. Most GNU software is built using autoconf. It is frequently used in combination with GNU automake. These tools both encourage a build process which usually looks like this:
% CONFIG_ENV=foo ./configure --with-flags
% make all
% make check
# make install
(except of course the Buildbot always skips the make install
part).
The Buildbot's buildbot.process.factory.GNUAutoconf
factory is
designed to build projects which use GNU autoconf and/or automake. The
configuration environment variables, the configure flags, and command
lines used for the compile and test are all configurable; in general
the default values will be suitable.
Example:
f = factory.GNUAutoconf(source=source.SVN(svnurl=URL, mode="copy"),
                        flags=["--disable-nls"])
Required Arguments:
source
This argument is required, and specifies the source step (for example, source.SVN(...)) used to obtain the tree.
Optional Arguments:
configure
The command used to run ./configure. Accepts either a string or a list of shell argv elements.
configureEnv
The environment used for the initial configuration step. This accepts a dictionary which will be merged into the buildslave's normal environment. This is commonly used to provide things like CFLAGS="-O2 -g" (to turn off debug symbols during the compile). Defaults to an empty dictionary.
configureFlags
A list of flags to be appended to the argument list of the configure command. An example is ["--without-x"] to disable windowing support. Defaults to an empty list.
compile
This is a shell command or list of argv values which is used to actually compile the tree. It defaults to make all. If set to None, the compile step is skipped.
test
This is a shell command or list of argv values which is used to run the tree's self-tests. It defaults to make check. If set to None, the test step is skipped.
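Combining these arguments, a sketch of a fully-specified GNUAutoconf factory (the flags and environment here are hypothetical):
f = factory.GNUAutoconf(source=source.SVN(svnurl=URL, mode="copy"),
                        configureEnv={'CFLAGS': '-O2 -g'},
                        configureFlags=["--without-x"],
                        compile=["make", "all"],
                        test=["make", "check"])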
This is a subclass of GNUAutoconf
which assumes the source is in CVS,
and uses mode='clobber'
to always build from a clean working copy.
This class is similar to BasicBuildFactory, but uses SVN instead of CVS.
The QuickBuildFactory
class is a subclass of GNUAutoconf
which
assumes the source is in CVS, and uses mode='update'
to get incremental
updates.
The difference between a “full build” and a “quick build” is that
quick builds are generally done incrementally, starting with the tree
where the previous build was performed. That simply means that the
source-checkout step should be given a mode='update'
flag, to
do the source update in-place.
In addition to that, this class sets the useProgress
flag to False.
Incremental builds will (or at least they ought to) compile as few files as
necessary, so they will take an unpredictable amount of time to run. Therefore
it would be misleading to claim to predict how long the build will take.
This class is probably not of use to new projects.
Most Perl modules available from the CPAN
archive use the MakeMaker
module to provide configuration,
build, and test services. The standard build routine for these modules
looks like:
% perl Makefile.PL
% make
% make test
# make install
(except again Buildbot skips the install step)
Buildbot provides a CPAN
factory to compile and test these
projects.
Arguments:
source
(required): A step specification, like that used by GNUAutoconf.
perl
A string which specifies the perl executable to use. Defaults to just perl.
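For example, a sketch using an explicit perl executable (the path is hypothetical):
from buildbot.process import factory
from buildbot.steps import source
f = factory.CPAN(source=source.SVN(svnurl=URL, mode="copy"),
                 perl="/usr/local/bin/perl")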
Most Python modules use the distutils
package to provide
configuration and build services. The standard build process looks
like:
% python ./setup.py build
% python ./setup.py install
Unfortunately, although Python provides a standard unit-test framework
named unittest
, to the best of my knowledge distutils
does not provide a standardized target to run such unit tests. (Please
let me know if I'm wrong, and I will update this factory.)
The Distutils
factory provides support for running the build
part of this process. It accepts the same source=
parameter as
the other build factories.
Arguments:
source
(required): A step specification, like that used by GNUAutoconf.
python
A string which specifies the python executable to use. Defaults to just python.
test
Provides a shell command which runs unit tests. This accepts either a string or a list. The default value is None, which disables the test step.
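For example, a sketch that also runs a test command (the 'setup.py test' target here is hypothetical; your project may use something else):
from buildbot.process import factory
from buildbot.steps import source
f = factory.Distutils(source=source.SVN(svnurl=URL, mode="copy"),
                      python="python2.6",
                      test=["python", "./setup.py", "test"])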
Twisted provides a unit test tool named trial
which provides a
few improvements over Python's built-in unittest
module. Many
python projects which use Twisted for their networking or application
services also use trial for their unit tests. These modules are
usually built and tested with something like the following:
% python ./setup.py build
% PYTHONPATH=build/lib.linux-i686-2.3 trial -v PROJECTNAME.test
% python ./setup.py install
Unfortunately, the build/lib directory into which the
built/copied .py files are placed is actually architecture-dependent,
and I do not yet know of a simple way to calculate its value. For many
projects it is sufficient to import their libraries “in place” from
the tree's base directory (PYTHONPATH=.
).
In addition, the PROJECTNAME value where the test files are
located is project-dependent: it is usually just the project's
top-level library directory, as common practice suggests the unit test
files are put in the test
sub-module. This value cannot be
guessed; the Trial
class must be told where to find the test
files.
The Trial
class provides support for building and testing
projects which use distutils and trial. If the test module name is
specified, trial will be invoked. The library path used for testing
can also be set.
One advantage of trial is that the Buildbot happens to know how to parse trial output, letting it identify which tests passed and which ones failed. The Buildbot can then provide fine-grained reports about how many tests have failed, when individual tests fail when they had been passing previously, etc.
Another feature of trial is that you can give it a series of source
.py files, and it will search them for special test-case-name
tags that indicate which test cases provide coverage for that file.
Trial can then run just the appropriate tests. This is useful for
quick builds, where you want to only run the test cases that cover the
changed functionality.
Arguments:

testpath: specifies the PYTHONPATH to use when running the unit tests, if tests are being run. Defaults to . to include the project files in-place. The generated build library is frequently architecture-dependent, but may simply be build/lib for pure-python modules.

python: which python executable to use. This list will form the start of the argv array that will launch trial. If you use this parameter, you should probably also set trial to an explicit path (like /usr/bin/trial or ./bin/trial). The parameter defaults to None, which leaves it out entirely (running trial args instead of python ./bin/trial args). Likely values are ['python'], ['python2.2'], or ['python', '-Wall'].

trial: which trial command to run. It is occasionally useful to use an alternate executable, such as trial2.2 which might run the tests under an older version of Python. Defaults to trial.

trialMode: a list of arguments to pass to trial, specifically to set the reporting mode. This defaults to ['--reporter=bwverbose'], which only works for Twisted-2.1.0 and later.

trialArgs: a list of extra arguments to pass to trial. Defaults to [].

tests: provides a module name or names which contain the unit tests for this project. Accepts a string, typically PROJECTNAME.test, or a list of strings. Defaults to None, indicating that no tests should be run. You must either set this or testChanges.

testChanges: if True, ignore the tests parameter and instead ask the Build for all the files that make up the Changes going into this build. Pass these filenames to trial and ask it to look for test-case-name tags, running just the tests necessary to cover the changes.

recurse: if True, tells Trial (with the --recurse argument) to look in all subdirectories for additional test cases.

reactor: which reactor to use when running the tests. If not provided, trial's platform-specific default is used.

randomly: if True, tells Trial (with the --random=0 argument) to run the test cases in random order, which sometimes catches subtle inter-test dependency bugs. Defaults to False.
The step can also take any of the ShellCommand arguments, e.g., haltOnFailure.

Unless one of tests or testChanges is set, the step will generate an exception.
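Putting this together, a sketch of a Trial factory for a hypothetical petmail project (the repository URL is hypothetical):

from buildbot.process import factory
from buildbot.steps import source
f = factory.Trial(source=source.SVN(svnurl="http://svn.example.org/petmail/trunk"),
                  testpath=".",
                  tests="petmail.test")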
BuildSteps are usually specified in the buildmaster's configuration file, in a list that goes into the BuildFactory.
The BuildStep
instances in this list are used as templates to
construct new independent copies for each build (so that state can be
kept on the BuildStep
in one build without affecting a later
build). Each BuildFactory
can be created with a list of steps,
or the factory can be created empty and then steps added to it using
the addStep
method:
from buildbot.steps import source, shell
from buildbot.process import factory

f = factory.BuildFactory()
f.addStep(source.SVN(svnurl="http://svn.example.org/Trunk/"))
f.addStep(shell.ShellCommand(command=["make", "all"]))
f.addStep(shell.ShellCommand(command=["make", "test"]))
The rest of this section lists all the standard BuildStep objects available for use in a Build, and the parameters which can be used to control each.
All BuildSteps accept some common parameters. Some of these control
how their individual status affects the overall build. Others are used
to specify which Locks
(see Interlocks) should be
acquired before allowing the step to run.
Arguments common to all BuildStep
subclasses:
name: the name used to describe the step on the status display. It is also used to give a name to any LogFiles created by this step.

haltOnFailure: if True, a FAILURE of this build step will cause the build to halt immediately. Steps with alwaysRun=True are still run. Generally speaking, haltOnFailure implies flunkOnFailure (the default for most BuildSteps). In some cases, particularly series of tests, it makes sense to haltOnFailure if something fails early on but not flunkOnFailure. This can be achieved with haltOnFailure=True, flunkOnFailure=False.

flunkOnWarnings: when True, a WARNINGS or FAILURE of this build step will mark the overall build as FAILURE. The remaining steps will still be executed.

flunkOnFailure: when True, a FAILURE of this build step will mark the overall build as a FAILURE. The remaining steps will still be executed.

warnOnWarnings: when True, a WARNINGS or FAILURE of this build step will mark the overall build as having WARNINGS. The remaining steps will still be executed.

warnOnFailure: when True, a FAILURE of this build step will mark the overall build as having WARNINGS. The remaining steps will still be executed.

alwaysRun: if True, this build step will always be run, even if a previous buildstep with haltOnFailure=True has failed.
doStepIf: a step can be configured to run only under certain conditions. To do this, set the step's doStepIf to a boolean value, or to a function that returns a boolean value. If the value or function result is false, then the step will return SKIPPED without doing anything. Otherwise, the step will be executed normally. If you set doStepIf to a function, that function should accept one parameter, which will be the Step object itself.
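For example, a sketch of a conditional step, assuming a hypothetical build property named run_docs that is set elsewhere in the build:

from buildbot.steps.shell import ShellCommand

def shouldBuildDocs(step):
    # skip the documentation build unless the 'run_docs' property is set
    return bool(step.getProperty("run_docs"))

f.addStep(ShellCommand(command=["make", "docs"],
                       doStepIf=shouldBuildDocs))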
locks: a list of Locks (instances of buildbot.locks.SlaveLock or buildbot.locks.MasterLock) that should be acquired before starting this Step. The Locks will be released when the step is complete. Note that this is a list of actual Lock instances, not names. Also note that all Locks must have unique names. See Interlocks.
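For example, a sketch that serializes compiles on a slave using a SlaveLock:

from buildbot.locks import SlaveLock
from buildbot.steps.shell import Compile

# at most one build at a time may hold this lock on a given slave
compile_lock = SlaveLock("compile")

f.addStep(Compile(command=["make", "all"], locks=[compile_lock]))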
Build properties are a generalized way to provide configuration information to build steps; see Build Properties.
Some build properties are inherited from external sources – global
properties, schedulers, or buildslaves. Some build properties are
set when the build starts, such as the SourceStamp information. Other
properties can be set by BuildSteps as they run, for example the
various Source steps will set the got_revision
property to the
source revision that was actually checked out (which can be useful
when the SourceStamp in use merely requested the “latest revision”:
got_revision
will tell you what was actually built).
In custom BuildSteps, you can get and set the build properties with
the getProperty
/setProperty
methods. Each takes a string
for the name of the property, and returns or accepts an arbitrary object. For example:
class MakeTarball(ShellCommand):
    def start(self):
        if self.getProperty("os") == "win":
            self.setCommand([ ... ]) # windows-only command
        else:
            self.setCommand([ ... ]) # equivalent for other systems
        ShellCommand.start(self)
You can use build properties in ShellCommands by using the
WithProperties
wrapper when setting the arguments of
the ShellCommand. This interpolates the named build properties
into the generated shell command. Most step parameters accept
WithProperties
. Please file bugs for any parameters which
do not.
from buildbot.steps.shell import ShellCommand
from buildbot.process.properties import WithProperties

f.addStep(ShellCommand(
          command=["tar", "czf",
                   WithProperties("build-%s.tar.gz", "revision"),
                   "source"]))
If this BuildStep were used in a tree obtained from Subversion, it would create a tarball with a name like build-1234.tar.gz.
The WithProperties
function does printf
-style string
interpolation, using strings obtained by calling
build.getProperty(propname)
. Note that for every %s
(or
%d
, etc), you must have exactly one additional argument to
indicate which build property you want to insert.
You can also use python dictionary-style string interpolation by using
the %(propname)s
syntax. In this form, the property name goes
in the parentheses, and WithProperties takes no additional
arguments:
f.addStep(ShellCommand(
          command=["tar", "czf",
                   WithProperties("build-%(revision)s.tar.gz"),
                   "source"]))
Don't forget the extra “s” after the closing parenthesis! This is the cause of many confusing errors.
The dictionary-style interpolation supports a number of more advanced syntaxes, too.
propname:-replacement: if propname exists, substitute its value; otherwise, substitute replacement. replacement may be empty (%(propname:-)s).

propname:~replacement: like propname:-replacement, but only substitutes the value of property propname if it is something Python regards as "true". Python considers None, 0, empty lists, and the empty string to be false, so such values will be replaced by replacement.

propname:+replacement: if propname exists, substitute replacement; otherwise, substitute an empty string.
Although these are similar to shell substitutions, no other
substitutions are currently supported, and replacement
in the
above cannot contain more substitutions.
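For instance, a sketch using the :- form to fall back to a fixed string when the revision property is unset (using the imports from the example above):

f.addStep(ShellCommand(
          command=["tar", "czf",
                   WithProperties("build-%(revision:-unknown)s.tar.gz"),
                   "source"]))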
Note: like Python, you can do either positional-argument interpolation or keyword-argument interpolation, not both. Thus you cannot use a string like WithProperties("foo-%(revision)s-%s", "branch").
The following build properties are set when the build is started, and are available to all steps.
branch: this comes from the build's SourceStamp. It will be set to None (which interpolates into WithProperties as an empty string) if the build is on the default branch, which is generally the trunk. Otherwise it will be a string like “branches/beta1.4”. The exact syntax depends upon the VC system being used.

revision: this also comes from the SourceStamp, and is the revision of the source tree that was requested. If the “force build” button was pressed, the revision will be None, which means to use the most recent revision available. This is a “trunk build”. This will be interpolated as an empty string.

got_revision: this is the same as revision, except for trunk builds, where got_revision indicates what revision was current when the checkout was performed. This can be used to rebuild the same source code later.

Note that for some VC systems (Darcs in particular), the revision is a large string containing newlines, and is not suitable for interpolation into a filename.

buildername: the name of the Builder performing this build.

buildnumber: each build gets a number, scoped to the Builder; this property contains that number.

slavename: the name of the buildslave performing this build.

scheduler: if the build was started by a scheduler, this property contains the name of that scheduler.

repository: the repository of the sourcestamp for this build.

project: the project of the sourcestamp for this build.
The first step of any build is typically to acquire the source code from which the build will be performed. There are several classes to handle this, one for each of the different source control systems that Buildbot knows about. For a description of how Buildbot treats source control in general, see Version Control Systems.
All source checkout steps accept some common parameters to control how they get the sources and where they should be placed. The remaining per-VC-system parameters are mostly to specify where exactly the sources are coming from.
mode: a string describing the kind of VC operation that is desired. Defaults to update.

update: specifies that the checkout/update should be performed directly in the workdir. This makes for the fastest builds, but means that files which are not checked into the VC system (and are not deleted by the build process itself) will be left in the working directory.

copy: specifies that the source-controlled workspace should be maintained in a separate directory, using checkout or update as necessary. For each build, a new workdir is created with a clean copy of that source tree. This doubles the disk space required, but keeps the bandwidth low (an update instead of a full checkout), while still providing a clean tree for each build.

clobber: specifies that the working directory should be deleted each time, necessitating a full checkout for each build. This insures a clean build off a complete checkout, but is bandwidth intensive.

export: like clobber, except that the 'cvs export' command is used to create the working directory. This command removes all CVS metadata files (the CVS/ directories) from the tree, which is sometimes useful for creating source tarballs (to avoid including the metadata in the tar file).
workdir: like all Steps, this indicates the directory where the build will take place. Source Steps are special in that they perform some operations outside of the workdir (like creating the workdir itself).

alwaysUseLatest: if True, bypass the usual “update to the last Change” behavior, and always update to the latest changes instead.

retry: if set, this specifies a tuple of (delay, repeats) which means that when a full VC checkout fails, it should be retried up to repeats times, waiting delay seconds between attempts. If you don't provide this, it defaults to None, which means VC operations should not be retried. This is provided to make life easier for buildslaves which are stuck behind poor network connections.
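For example, a sketch of a checkout step that tolerates a flaky network by retrying up to three times, five minutes apart:

from buildbot.steps import source

f.addStep(source.SVN(mode="copy",
                     svnurl="http://svn.example.org/Trunk/",
                     retry=(300, 3)))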
repository: the name of this parameter might vary depending on the Source step you are running. The concept explained here is common to all steps and applies to repourl as well as to baseURL (when applicable). Buildbot, being aware of the repository name via the ChangeSource, might in some cases not need the repository URL. There are multiple ways to pass it through to this step, corresponding to the type of the parameter given to this step:

None: in the default case, the repository URL will be taken directly from the Change's repository attribute. Use this if your ChangeSource provides complete information about how to reach the Change.

string: the given string will be used verbatim as the repository URL; the value coming from the ChangeSource will be ignored.

format string: if the parameter is a string containing %s, then the repository attribute from the Change will be substituted in place of the %s. This is useful when the ChangeSource knows where the repository resides locally, but doesn't know the scheme used to access it. For instance, ssh://server/%s makes sense if the repository attribute is the local path of the repository.

dict: in this case, the repository URL will be the value indexed by the repository attribute in the dict given as the parameter.

callable: the callable will be passed the repository attribute from the Change, and its return value will be used as the repository URL.

Use of WithProperties with string, dict, and callable is supported.

Note that this is quite similar to the mechanism used by the WebStatus for the changecommentlink, projects, or repositories parameters.
My habit as a developer is to do a cvs update
and make
each
morning. Problems can occur, either because of bad code being checked in, or
by incomplete dependencies causing a partial rebuild to fail where a
complete from-scratch build might succeed. A quick Builder which emulates
this incremental-build behavior would use the mode='update'
setting.
On the other hand, other kinds of dependency problems can cause a clean build to fail where a partial build might succeed. This frequently results from a link step that depends upon an object file that was removed from a later version of the tree: in the partial tree, the object file is still around (even though the Makefiles no longer know how to create it).
“Official” builds (traceable builds performed from a known set of source revisions) are always done as clean builds, to make sure they are not influenced by any uncontrolled factors (like leftover files from a previous build). A “full” Builder which behaves this way would want to use the mode='clobber' setting.
Each VC system has a corresponding source checkout class: their arguments are described on the following pages.
The CVS
build step performs a CVS checkout or update. It takes the following arguments:
cvsroot: (required): specify the CVSROOT value, which points to a CVS repository, probably on a remote machine. For example, the cvsroot value you would use to get a copy of the Buildbot source code is :pserver:anonymous@cvs.sourceforge.net:/cvsroot/buildbot

cvsmodule: (required): specify the cvs module, which is generally a subdirectory of the CVSROOT. The cvsmodule for the Buildbot source code is buildbot.

branch: a string which will be used in a -r argument. This is most useful for specifying a branch to work on. Defaults to HEAD.

global_options: a list of flags to be put before the verb in the CVS command.

checkout_options, export_options, extra_options: a list of flags to be put after the verb in the CVS command. checkout_options is only used for checkout operations, export_options is only used for export operations, and extra_options is used for both.

checkoutDelay: if set, the number of seconds to put between the timestamp of the last known Change and the value used for the -D option. Defaults to half of the parent Build's treeStableTimer.
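For example, a sketch of a CVS checkout of the Buildbot source code itself, using the cvsroot shown above:

from buildbot.steps import source

f.addStep(source.CVS(
          cvsroot=":pserver:anonymous@cvs.sourceforge.net:/cvsroot/buildbot",
          cvsmodule="buildbot",
          mode="copy"))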
The SVN
build step performs a
Subversion checkout or update.
There are two basic ways of setting up the checkout step, depending
upon whether you are using multiple branches or not.
The most versatile way to create the SVN
step is with the
svnurl
argument:
svnurl: (required): this specifies the URL
argument that will be given
to the svn checkout
command. It dictates both where the
repository is located and which sub-tree should be extracted. In this
respect, it is like a combination of the CVS cvsroot
and
cvsmodule
arguments. For example, if you are using a remote
Subversion repository which is accessible through HTTP at a URL of
http://svn.example.com/repos
, and you wanted to check out the
trunk/calc
sub-tree, you would use
svnurl="http://svn.example.com/repos/trunk/calc"
as an argument
to your SVN
step.
The svnurl
argument can be considered as a universal means to
create the SVN
step as it ignores the branch information in the
SourceStamp.
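For instance, a sketch using the URL from the example above:

f.addStep(source.SVN(mode="update",
                     svnurl="http://svn.example.com/repos/trunk/calc"))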
Alternatively, if you are building from multiple branches, then you
should preferentially create the SVN
step with the
baseURL
and defaultBranch
arguments instead:
baseURL: (required): this specifies the base repository URL, to which a branch name will be appended. Alternatively, baseURL
can contain a %%BRANCH%%
placeholder, which will be replaced with the branch name. baseURL
should
probably end in a slash.
defaultBranch: (optional): this specifies the name of the branch to use when a Build does not provide one of its own. This will be appended to baseURL to create the URL that will be passed to the svn checkout command. If you use baseURL without specifying defaultBranch, every ChangeStamp must come with a valid (not None) branch.
It is possible to have a mix of SVN steps that use either the svnurl or baseURL arguments, but an individual step must use one or the other, not both at the same time.
username: (optional): if specified, this will be passed to the svn binary with a --username option.

password: (optional): if specified, this will be passed to the svn binary with a --password option. The password itself will be suitably obfuscated in the logs.

extra_args: (optional): if specified, a list of strings that will be passed as extra arguments to the svn binary.

keep_on_purge: (optional): specific files or directories to keep between purges, like some build outputs that can be reused between builds.

ignore_ignores: (optional): when purging changes, don't use the rules defined in svn:ignore properties and global-ignores in subversion/config.

always_purge: (optional): if set to True, always purge local changes before updating. This deletes unversioned files and reverts everything that would appear in a svn status.
depth: (optional): controls the sparseness of the checkout. If set to "empty", updates will not pull in any files or subdirectories not already present. If set to "files", updates will pull in any files not already present, but not directories. If set to "immediates", updates will pull in any files or subdirectories not already present; the new subdirectories will have depth "empty". If set to "infinity", updates will pull in any files or subdirectories not already present; the new subdirectories will have depth "infinity". "infinity" is equivalent to SVN's default update behavior, when no depth argument is specified.
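For example, a sketch of a sparse checkout that only pulls in files (assuming the slave-side svn is new enough to support the depth argument):

f.addStep(source.SVN(mode="update",
                     svnurl="http://svn.example.com/repos/trunk/calc",
                     depth="files"))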
If you are using branches, you must also make sure your
ChangeSource
will report the correct branch names.
Let's suppose that the “MyProject” repository uses branches for the trunk, for various users' individual development efforts, and for several new features that will require some amount of work (involving multiple developers) before they are ready to merge onto the trunk. Such a repository might be organized as follows:
svn://svn.example.org/MyProject/trunk svn://svn.example.org/MyProject/branches/User1/foo svn://svn.example.org/MyProject/branches/User1/bar svn://svn.example.org/MyProject/branches/User2/baz svn://svn.example.org/MyProject/features/newthing svn://svn.example.org/MyProject/features/otherthing
Further assume that we want the Buildbot to run tests against the trunk and against all the feature branches (i.e., do a checkout/compile/build of branch X when a file has been changed on branch X, when X is in the set [trunk, features/newthing, features/otherthing]). We do not want the Buildbot to automatically build any of the user branches, but it should be willing to build a user branch when explicitly requested (most likely by the user who owns that branch).
There are three things that need to be set up to accommodate this
system. The first is a ChangeSource that is capable of identifying the
branch which owns any given file. This depends upon a user-supplied
function, in an external program that runs in the SVN commit hook and
connects to the buildmaster's PBChangeSource
over a TCP
connection. (You can use the “buildbot sendchange” utility
for this purpose, but you will still need an external program to
decide what value should be passed to the --branch=
argument).
For example, a change to a file with the SVN URL of
“svn://svn.example.org/MyProject/features/newthing/src/foo.c” should
be broken down into a Change instance with
branch='features/newthing'
and file='src/foo.c'
.
The second piece is an AnyBranchScheduler
which will pay
attention to the desired branches. It will not pay attention to the
user branches, so it will not automatically start builds in response
to changes there. The AnyBranchScheduler class requires you to
explicitly list all the branches you want it to use, but it would not
be difficult to write a subclass which used
branch.startswith('features/') to remove the need for this
to remove the need for this
explicit list. Or, if you want to build user branches too, you can use
AnyBranchScheduler with branches=None
to indicate that you want
it to pay attention to all branches.
The third piece is an SVN
checkout step that is configured to
handle the branches correctly, with a baseURL
value that
matches the way the ChangeSource splits each file's URL into base,
branch, and file.
from buildbot.changes.pb import PBChangeSource
from buildbot.scheduler import AnyBranchScheduler
from buildbot.process import factory
from buildbot.steps import source, shell

c['change_source'] = PBChangeSource()
s1 = AnyBranchScheduler('main',
                        ['trunk', 'features/newthing', 'features/otherthing'],
                        10*60, ['test-i386', 'test-ppc'])
c['schedulers'] = [s1]

f = factory.BuildFactory()
f.addStep(source.SVN(mode='update',
                     baseURL='svn://svn.example.org/MyProject/',
                     defaultBranch='trunk'))
f.addStep(shell.Compile(command="make all"))
f.addStep(shell.Test(command="make test"))

c['builders'] = [
  {'name':'test-i386', 'slavename':'bot-i386', 'builddir':'test-i386', 'factory':f },
  {'name':'test-ppc', 'slavename':'bot-ppc', 'builddir':'test-ppc', 'factory':f },
 ]
In this example, when a change arrives with a branch
attribute of
“trunk”, the resulting build will have a SVN step that concatenates
“svn://svn.example.org/MyProject/” (the baseURL) with “trunk” (the branch
name) to get the correct svn command. If the “newthing” branch has a change
to “src/foo.c”, then the SVN step will concatenate
“svn://svn.example.org/MyProject/” with “features/newthing” to get the
svnurl for checkout.
For added flexibility, baseURL
may contain a %%BRANCH%%
placeholder, which will be replaced either by the branch in the SourceStamp or
the default specified in defaultBranch
.
source.SVN( mode='update', baseURL='svn://svn.example.org/svn/%%BRANCH%%/myproject', defaultBranch='trunk' )
The Darcs
build step performs a
Darcs checkout or update.
Like SVN (see SVN), this step can either be configured to always check
out a specific tree, or set up to pull from a particular branch that
gets specified separately for each build. Also like SVN, the
repository URL given to Darcs is created by concatenating a
baseURL
with the branch name, and if no particular branch is
requested, it uses a defaultBranch
. The only difference in
usage is that each potential Darcs repository URL must point to a
fully-fledged repository, whereas SVN URLs usually point to sub-trees
of the main Subversion repository. In other words, doing an SVN
checkout of baseURL
is legal, but silly, since you'd probably
wind up with a copy of every single branch in the whole repository.
Doing a Darcs checkout of baseURL
is just plain wrong, since
the parent directory of a collection of Darcs repositories is not
itself a valid repository.
The Darcs step takes the following arguments:
repourl: (required unless baseURL is provided): the URL at which the Darcs source repository is available.

baseURL: (required unless repourl is provided): the base repository URL, to which a branch name will be appended. It should probably end in a slash.

defaultBranch: (allowed if and only if baseURL is provided): this specifies the name of the branch to use when a Build does not provide one of its own. This will be appended to baseURL to create the string that will be passed to the darcs get command.
The Mercurial
build step performs a
Mercurial (aka “hg”) checkout
or update.
Branches are available in two modes: “dirname”, like Darcs (see Darcs), or “inrepo”, which uses the repository's internal branches. Make sure this setting matches your changehook, if you have that installed.
The Mercurial step takes the following arguments:
repourl: (required unless baseURL is provided): the URL at which the Mercurial source repository is available.

baseURL: (required unless repourl is provided): the base repository URL, to which a branch name will be appended. It should probably end in a slash.

defaultBranch: (allowed if and only if baseURL is provided): this specifies the name of the branch to use when a Build does not provide one of its own. This will be appended to baseURL to create the string that will be passed to the hg clone command.
branchType: either 'dirname' (default) or 'inrepo', depending on whether the branch name should be appended to the baseURL or the branch is a Mercurial named branch that can be found within the repourl.
clobberOnBranchChange: boolean, defaults to True. If set, the working directory will be clobbered whenever the branch being built differs from that of the previous build; otherwise the tree is simply updated to the new branch.
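For example, a sketch of a checkout from a hypothetical repository that uses Mercurial named branches:

f.addStep(source.Mercurial(repourl="http://hg.example.org/myproject",
                           branchType="inrepo"))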
bzr
is a descendant of Arch/Baz, and is frequently referred to
as simply “Bazaar”. The repository-vs-workspace model is similar to
Darcs, but it uses a strictly linear sequence of revisions (one
history per branch) like Arch. Branches are put in subdirectories.
This makes it look very much like Mercurial. It takes the following
arguments:
repourl: (required unless baseURL is provided): the URL at which the Bzr source repository is available.

baseURL: (required unless repourl is provided): the base repository URL, to which a branch name will be appended. It should probably end in a slash.

defaultBranch: (allowed if and only if baseURL is provided): this specifies the name of the branch to use when a Build does not provide one of its own. This will be appended to baseURL to create the string that will be passed to the bzr checkout command.
forceSharedRepo: (boolean, optional): if set to True, the working directory will be made into a Bzr shared repository if it is not already one.
The P4 build step creates a Perforce client specification and performs an update. It takes the following arguments:

p4base: a view into the Perforce depot without branch name or trailing "...", such as //depot/proj/.

defaultBranch: a branch name to append on build requests if none is specified, such as trunk.

p4port: (optional): the host:port string describing how to get to the P4 Depot (repository), used as the -p argument for all p4 commands.

p4user: (optional): the Perforce user, used as the -u argument for all p4 commands.

p4passwd: (optional): the Perforce password, used as the -P argument for all p4 commands.

p4extra_views: (optional): a list of (depotpath, clientpath) tuples containing extra views to be mapped into the client specification.

p4client: (optional): the name of the client to use. In mode='copy' and mode='update' it is particularly important that a unique name is used for each checkout directory to avoid incorrect synchronization. Defaults to buildbot_%(slave)s_%(builder)s.

p4line_end: (optional): the type of line ending handling P4 should use. This is added directly to the client spec's LineEnd property. The default is local.
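For example, a sketch with hypothetical depot and server values:

f.addStep(source.P4(p4base="//depot/myproject/",
                    defaultBranch="trunk",
                    p4port="perforce.example.com:1666",
                    p4user="buildbot"))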
The Git
build step clones or updates a Git repository and checks out the specified branch or revision. Note
that the buildbot supports Git version 1.2.0 and later: earlier
versions (such as the one shipped in Ubuntu 'Dapper') do not support
the git init command that the buildbot uses.
The Git step takes the following arguments:
repourl: (required): the URL of the upstream Git repository.

branch: (optional): this specifies the name of the branch to use when a Build does not provide one of its own. If this parameter is not specified, and the Build does not provide a branch, the "master" branch will be used.

ignore_ignores: (optional): when purging changes, don't use .gitignore and .git/info/exclude.

submodules: (optional): when initializing/updating a Git repository, this tells buildbot whether to handle Git submodules. Defaults to False.

reference: (optional): use the specified string as a path to a reference repository on the local machine. Git will try to grab objects from this path first instead of the main repository, if they exist.

shallow: (optional): instructs git to attempt shallow clones (--depth 1). If the user/scheduler asks for a specific revision, this parameter is ignored.

progress: (optional): passes the (--progress) flag to (git fetch). This solves issues of long fetches being killed due to lack of output, but requires Git 1.7.2 or later.
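For example, a sketch of a shallow clone of a hypothetical repository:

f.addStep(source.Git(repourl="git://git.example.org/myproject.git",
                     branch="master",
                     shallow=True))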
The BK
build step performs a BitKeeper
checkout or update.
The BitKeeper step takes the following arguments:
repourl: (required unless baseURL is provided): the URL at which the BitKeeper source repository is available.

baseURL: (required unless repourl is provided): the base repository URL, to which a branch name will be appended. It should probably end in a slash.
Most interesting steps involve executing a process of some sort on the
buildslave. The ShellCommand
class handles this activity.
Several subclasses of ShellCommand are provided as starting points for common build steps.
This is a useful base class for just about everything you might want to do during a build (except for the initial source checkout). It runs a single command in a child shell on the buildslave. All stdout/stderr is recorded into a LogFile. The step finishes with a status of FAILURE if the command's exit code is non-zero, otherwise it has a status of SUCCESS.
The preferred way to specify the command is with a list of argv strings, since this allows for spaces in filenames and avoids doing any fragile shell-escaping. You can also specify the command with a single string, in which case the string is given to '/bin/sh -c COMMAND' for parsing.
On Windows, commands are run via cmd.exe /c
which works well. However,
if you're running a batch file, the error level does not get propagated
correctly unless you add 'call' before your batch file's name:
command=['call', 'myfile.bat', ...]
.
ShellCommand arguments:

command: a list of strings (preferred) or a single string (discouraged) which specifies the command to be run.

workdir: the buildslave directory in which the command will run. All ShellCommands are run by default in the “workdir”, which defaults to the “build” subdirectory of the slave builder's base directory. The absolute path of the workdir is made up of the slave's basedir (set as an option to buildbot create-slave, see Creating a buildslave) plus the builder's basedir (set in the builder's c['builddir'] key in master.cfg) plus the workdir itself (a class-level attribute of the BuildFactory, defaults to “build”).
For example:
f.addStep(ShellCommand(command=["make", "test"], workdir="build/tests"))
env: a dictionary of environment strings which will be added to the child command's environment. For example, to run tests with a different locale, you might use:

f.addStep(ShellCommand(command=["make", "test"],
                       env={'LANG': 'fr_FR'}))
These variable settings will override any existing ones in the buildslave's environment or the environment specified in the Builder. The exception is PYTHONPATH, which is merged with (actually prepended to) any existing $PYTHONPATH setting. The value is treated as a list of directories to prepend, and a single string is treated like a one-item list. For example, to prepend both /usr/local/lib/python2.3 and /home/buildbot/lib/python to any existing $PYTHONPATH setting, you would do something like the following:
f.addStep(ShellCommand(command=["make", "test"],
                       env={'PYTHONPATH': ["/usr/local/lib/python2.3",
                                           "/home/buildbot/lib/python"]}))
Those variables support expansion, so if you just want to prepend /home/buildbot/bin to the PATH environment variable, you can do it by putting the value ${PATH} at the end of the string, as in the example below. Variables that don't exist on the slave will be replaced by the empty string "".
f.addStep(ShellCommand( command=["make", "test"], env={'PATH': "/home/buildbot/bin:${PATH}"}))
want_stdout: if False, stdout from the child process is discarded rather than being sent to the buildmaster for inclusion in the step's LogFile.

want_stderr: like want_stdout but for stderr. Note that commands run through a PTY do not have separate stdout/stderr streams: both are merged into stdout.
usePTY: should this command be run in a pty? The default is to observe the configuration of the client (see Buildslave Options), but specifying True or False here will override the default.
The advantage of using a PTY is that “grandchild” processes are more likely
to be cleaned up if the build is interrupted or times out (since it enables the
use of a “process group” in which all child processes will be placed). The
disadvantages: some forms of Unix have problems with PTYs, some of your unit
tests may behave differently when run under a PTY (generally those which check
to see if they are being run interactively), and PTYs will merge the stdout and
stderr streams into a single output stream (which means the red-vs-black
coloring in the logfiles will be lost).
logfiles: sometimes commands will log interesting data to a local file, rather than emitting everything to stdout or stderr. The logfiles= argument allows you to collect data from these
argument allows you to collect data from these
secondary logfiles in near-real-time, as the step is running. It
accepts a dictionary which maps from a local Log name (which is how
the log data is presented in the build results) to either a remote filename
(interpreted relative to the build's working directory), or a dictionary
of options. Each named file will be polled on a regular basis (every couple
of seconds) as the build runs, and any new text will be sent over to the
buildmaster.
If you provide a dictionary of options instead of a string, you must specify
the filename
key. You can optionally provide a follow
key which
is a boolean controlling whether a logfile is followed or concatenated in its
entirety. Following is appropriate for logfiles to which the build step will
append, where the pre-existing contents are not interesting. The default value
for follow
is False
, which gives the same behavior as just
providing a string filename.
f.addStep(ShellCommand(command=["make", "test"],
                       logfiles={"triallog": "_trial_temp/test.log"}))

f.addStep(ShellCommand(command=["make", "test"],
                       logfiles={"triallog": {"filename": "_trial_temp/test.log",
                                              "follow": True}}))
lazylogfiles: if set to True, logfiles will be tracked lazily, meaning that they will only be added when and if something is written to them. This can be used to suppress the display of empty or missing log files. The default is False.
timeout: if the command fails to produce any output for this many seconds, it is assumed to be locked up and will be killed.

maxTime: if the command takes longer than this many seconds, it will be killed.

description: this will be used to describe the command (on the Waterfall display) while the command is still running. It should be a single imperfect-tense verb, like “compiling” or “testing”, and may be either a list of short strings or a single string.

descriptionDone: this will be used to describe the command once it has finished. A single past-tense verb like “compiled” or “tested” is appropriate. Like description, this may either be a list of short strings or a single string.

If neither description nor descriptionDone are set, the actual command arguments will be used to construct the description. This may be a bit too wide to fit comfortably on the Waterfall display.
f.addStep(ShellCommand(command=["make", "test"], description=["testing"], descriptionDone=["tests"]))
logEnviron: if True (the default), the step's logfile will describe the environment variables on the slave. In situations where the environment is not relevant and is long, it may be easier to set logEnviron=False.
This is intended to handle the ./configure
step from
autoconf-style projects, or the perl Makefile.PL
step from perl
MakeMaker.pm-style modules. The default command is ./configure
but you can change this by providing a command=
parameter.
This is meant to handle compiling or building a project written in C.
The default command is make all
. When the compile is finished,
the log file is scanned for GCC warning messages, a summary log is
created with any problems that were seen, and the step is marked as
WARNINGS if any were discovered. Through the WarningCountingShellCommand
superclass, the number of warnings is stored in a Build Property named
“warnings-count”, which is accumulated over all Compile steps (so if two
warnings are found in one step, and three are found in another step, the
overall build will have a “warnings-count” property of 5).
The default regular expression used to detect a warning is
'.*warning[: ].*'
, which is fairly liberal and may cause
false-positives. To use a different regexp, provide a
warningPattern=
argument, or use a subclass which sets the
warningPattern
attribute:
f.addStep(Compile(command=["make", "test"], warningPattern="^Warning: "))
The warningPattern=
can also be a pre-compiled python regexp
object: this makes it possible to add flags like re.I
(to use
case-insensitive matching).
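For instance, a sketch using a pre-compiled, case-insensitive version of the default pattern:

import re
f.addStep(Compile(command=["make"],
                  warningPattern=re.compile('.*warning[: ].*', re.I)))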
Note that the compiled warningPattern will have its match method called, which is subtly different from a search. Your regular expression must match from the beginning of the line. This means that to look for the word "warning" in the middle of a line, you will need to prepend '.*' to your regular expression.
The suppressionFile=
argument can be specified as the (relative) path
of a file inside the workdir defining warnings to be suppressed from the
warning counting and log file. The file will be uploaded to the master from
the slave before compiling, and any warning matched by a line in the
suppression file will be ignored. This is useful to accept certain warnings
(e.g. in some special module of the source tree or in cases where the compiler
is being particularly stupid), yet still be able to easily detect and fix the
introduction of new warnings.
The file must contain one line per pattern of warnings to ignore. Empty lines
and lines beginning with #
are ignored. Other lines must consist of a
regexp matching the file name, followed by a colon (:
), followed by a
regexp matching the text of the warning. Optionally this may be followed by
another colon and a line number range. For example:
# Sample warning suppression file

mi_packrec.c : .*result of 32-bit shift implicitly converted to 64 bits.* : 560-600
DictTabInfo.cpp : .*invalid access to non-static.*
kernel_types.h : .*only defines private constructors and has no friends.* : 51
If no line number range is specified, the pattern matches the whole file; if only one number is given it matches only on that line.
The default warningPattern regexp only matches the warning text, so line
numbers and file names are ignored. To enable line number and file name
matching, provide a different regexp and a function (callable) as the
argument of warningExtractor=
. The function is called with three
arguments: the BuildStep object, the line in the log file with the warning,
and the SRE_Match
object of the regexp search for warningPattern
. It
should return a tuple (filename, linenumber, warning_text). For
. For
example:
f.addStep(Compile(command=["make"],
                  warningPattern="^(.*?):([0-9]+): [Ww]arning: (.*)$",
                  warningExtractor=Compile.warnExtractFromRegexpGroups,
                  suppressionFile="support-files/compiler_warnings.supp"))
(Compile.warnExtractFromRegexpGroups
is a pre-defined function that
returns the filename, linenumber, and text from groups (1,2,3) of the regexp
match).
In projects with source files in multiple directories, it is possible to get
full path names for file names matched in the suppression file, as long as the
build command outputs the names of directories as they are entered into and
left again. For this, specify regexps for the arguments
directoryEnterPattern=
and directoryLeavePattern=
. The
directoryEnterPattern=
regexp should return the name of the directory
entered into in the first matched group. The defaults, which are suitable for
GNU Make, are these:
directoryEnterPattern = "make.*: Entering directory [\"`'](.*)['`\"]"
directoryLeavePattern = "make.*: Leaving directory"
(TODO: this step needs to be extended to look for GCC error messages as well, and collect them into a separate logfile, along with the source code filenames involved).
This step is meant to handle compilation using Microsoft compilers. VC++ 6-9, VS2003, VS2005, VS2008, and VCExpress9 are supported. This step will take care of setting up a clean compilation environment, parse the generated output in real time and deliver as detailed as possible information about the compilation executed.
All of the classes are in buildbot.steps.vstudio
. The available classes are:
VC6
VC7
VC8
VC9
VS2003
VS2005
VS2008
VCExpress9
The available constructor arguments are:

mode: the mode defaults to "rebuild", which means that first all the remaining object files will be cleaned by the compiler. The alternate value is "build", where only the updated files will be recompiled.

projectfile: the name of the project file (for example, a .sln solution file) to build.

config: gives the compiler the configuration to use; defaults to "release".

installdir: the location where the compiler is installed on the slave. The default value is compiler-specific and is the compiler's usual installation location.

useenv: setting this parameter to False (the default) instructs the compiler to use its own settings rather than the ones defined through the environment variables %PATH%, %INCLUDE%, and %LIB%. If either the INCLUDE or LIB parameter is defined, this parameter automatically switches to True.

PATH: a list of paths to be added to the %PATH% environment variable.

INCLUDE: a list of paths to be added to the %INCLUDE% environment variable. Setting this parameter switches useenv to True.

LIB: a list of paths to be added to the %LIB% environment variable. Setting this parameter switches useenv to True.

arch: the target architecture; defaults to "x86".
Here is an example of how to use this step:

from buildbot.steps.vstudio import VS2005

f.addStep(VS2005(projectfile="project.sln",
                 config="release",
                 arch="x64",
                 mode="build",
                 INCLUDE=[r'D:\WINDDK\Include\wnet'],
                 LIB=[r'D:\WINDDK\lib\wnet\amd64']))
This is meant to handle unit tests. The default command is make
test
, and the warnOnFailure
flag is set.
This is a simple command that uses the 'du' tool to measure the size of the code tree. It puts the size (as a count of 1024-byte blocks, aka 'KiB' or 'kibibytes') on the step's status text, and sets a build property named 'tree-size-KiB' with the same value.
This is a simple command that knows how to run tests of perl modules. It parses the output to determine the number of tests passed and failed and total number executed, saving the results for later query.
The process.mtrlogobserver.MTR
class is a subclass of Test
(Test). It is used to run test suites using the mysql-test-run program,
as used in MySQL, Drizzle, MariaDB, and MySQL storage engine plugins.
The shell command to run the test suite is specified in the same way as for the Test class. The MTR class will parse the output of running the test suite, and use the count of tests executed so far to provide more accurate completion time estimates. Any test failures that occur during the test are summarized on the Waterfall Display.
Server error logs are added as additional log files, useful to debug test failures.
Optionally, data about the test run and any test failures can be inserted into
a database for further analysis and report generation. To use this facility,
create an instance of twisted.enterprise.adbapi.ConnectionPool
with
connections to the database. The necessary tables can be created automatically
by setting autoCreateTables
to True
, or manually using the SQL
found in the mtrlogobserver.py source file.
One problem with specifying a database is that each reload of the
configuration will get a new instance of ConnectionPool
(even if the
connection parameters are the same). To avoid that Buildbot thinks the builder
configuration has changed because of this, use the
process.mtrlogobserver.EqConnectionPool
subclass of
ConnectionPool
, which implements an equality operation that avoids
this problem.
Example use:
from buildbot.process.mtrlogobserver import MTR, EqConnectionPool

myPool = EqConnectionPool("MySQLdb", "host", "buildbot", "password", "db")
myFactory.addStep(MTR(workdir="mysql-test", dbpool=myPool,
                      command=["perl", "mysql-test-run.pl", "--force"]))
MTR arguments:

textLimit: maximum number of test failures to show on the waterfall page (to avoid flooding the page in case of a large number of test failures). Defaults to 5.

testNameLimit: maximum length of test names to show unabbreviated in the waterfall page, to avoid excessive column width. Defaults to 16.

parallel: the value of the --parallel option used for mysql-test-run.pl (number of processes used to run the test suite in parallel). Defaults to 4. This is used to determine the number of server error log files to download from the slave. Specifying a too high value does not hurt (as nonexisting error logs will be ignored), however if using a --parallel value greater than the default it needs to be specified, or some server error logs will be missing.

dbpool: an instance of twisted.enterprise.adbapi.ConnectionPool, or None. Defaults to None. If specified, data about the test run and any test failures will be inserted into the database using this pool.

autoCreateTables: boolean, defaults to False. If True (and dbpool is specified), the necessary database tables will be created automatically if they do not exist already. Alternatively, the tables can be created manually from the SQL statements found in the mtrlogobserver.py source file.

test_type: a short string that will be recorded with the test run in the database, to distinguish different types of test runs. Defaults to the empty string.

test_info: a descriptive string that will be recorded with the test run in the database. Defaults to the empty string.

mtr_subdir: the subdirectory in which to look for server error log files. Defaults to "mysql-test", which is usually correct.
This buildstep is similar to ShellCommand, except that it captures the output of the command into a property. It is usually used like this:
from buildbot.steps import shell
f.addStep(shell.SetProperty(command="uname -a", property="uname"))
This runs uname -a
and captures its stdout, stripped of leading
and trailing whitespace, in the property "uname". To avoid stripping,
add strip=False
. The property
argument can be specified
as a WithProperties
object.
The more advanced usage allows you to specify a function to extract properties from the command output. Here you can use regular expressions, string interpolation, or whatever you would like. The function is called with three arguments: the exit status of the command, its standard output as a string, and its standard error as a string. It should return a dictionary containing all new properties.
def glob2list(rc, stdout, stderr):
    jpgs = [ l.strip() for l in stdout.split('\n') ]
    return { 'jpgs' : jpgs }

f.addStep(SetProperty(command="ls -1 *.jpg", extract_fn=glob2list))
Note that any ordering relationship of the contents of stdout and stderr is lost. For example, given
f.addStep(SetProperty( command="echo output1; echo error >&2; echo output2", extract_fn=my_extract))
Then my_extract
will see stdout="output1\noutput2\n"
and stderr="error\n"
.
This buildstep is similar to ShellCommand, except that it runs the log content through a subunit filter to extract test and failure counts.
from buildbot.steps.subunit import SubunitShellCommand
f.addStep(SubunitShellCommand(command="make test"))
This runs make test
and filters it through subunit. The 'tests' and
'test failed' progress metrics will now accumulate test data from the test run.
Here are some BuildSteps that are specifically useful for projects implemented in Python.
epydoc is a tool for generating API documentation for Python modules from their docstrings. It reads all the .py files from your source tree, processes the docstrings therein, and creates a large tree of .html files (or a single .pdf file).
The buildbot.steps.python.BuildEPYDoc
step will run
epydoc to produce this API documentation, and will count the
errors and warnings from its output.
You must supply the command line to be used. The default is make epydocs, which assumes that your project has a Makefile with an “epydocs” target. You might wish to use something like epydoc -o apiref source/PKGNAME instead. You might also want to add --pdf to generate a PDF file instead of a large tree of HTML files.
The API docs are generated in-place in the build tree (under the workdir, in the subdirectory controlled by the “-o” argument). To make them useful, you will probably have to copy them to somewhere they can be read. A command like rsync -ad apiref/ dev.example.com:~public_html/current-apiref/ might be useful. You might instead want to bundle them into a tarball and publish it in the same place where the generated install tarball is placed.
from buildbot.steps.python import BuildEPYDoc
...
f.addStep(BuildEPYDoc(command=["epydoc", "-o", "apiref", "source/mypkg"]))
PyFlakes is a tool to perform basic static analysis of Python code to look for simple errors, like missing imports and references of undefined names. It is like a fast and simple form of the C “lint” program. Other tools (like pychecker) provide more detailed results but take longer to run.
The buildbot.steps.python.PyFlakes
step will run pyflakes and
count the various kinds of errors and warnings it detects.
You must supply the command line to be used. The default is make pyflakes, which assumes you have a top-level Makefile with a “pyflakes” target. You might want to use something like pyflakes . or pyflakes src.
from buildbot.steps.python import PyFlakes
...
f.addStep(PyFlakes(command=["pyflakes", "src"]))
Similarly, the buildbot.steps.python.PyLint
step will run pylint and
analyze the results.
You must supply the command line to be used. There is no default.
from buildbot.steps.python import PyLint
...
f.addStep(PyLint(command=["pylint", "src"]))
This step runs a unit test suite using trial
, a unittest-like testing
framework that is a component of Twisted Python. Trial is used to implement
Twisted's own unit tests, and is the unittest-framework of choice for many
projects that use Twisted internally.
Projects that use trial typically have all their test cases in a 'test'
subdirectory of their top-level library directory. For example, for a package
petmail
, the tests might be in petmail/test/test_*.py
. More
complicated packages (like Twisted itself) may have multiple test directories,
like twisted/test/test_*.py
for the core functionality and
twisted/mail/test/test_*.py
for the email-specific tests.
To run trial tests manually, you run the trial
executable and tell it
where the test cases are located. The most common way of doing this is with a
module name. For petmail, this might look like trial petmail.test
, which
would locate all the test_*.py
files under petmail/test/
, running
every test case it could find in them. Unlike the unittest.py
that
comes with Python, it is not necessary to run the test_foo.py
as a
script; you always let trial do the importing and running. The step's
tests
parameter controls which tests trial will run: it can be a string
or a list of strings.
To find the test cases, the Python search path must allow something like
import petmail.test
to work. For packages that don't use a separate
top-level lib
directory, PYTHONPATH=.
will work, and will use the
test cases (and the code they are testing) in-place.
PYTHONPATH=build/lib
or PYTHONPATH=build/lib.somearch
are also
useful when you do a python setup.py build
step first. The
testpath
attribute of this class controls what PYTHONPATH
is set
to before running trial
.
Trial has the ability, through the --testmodule
flag, to run only the
set of test cases named by special test-case-name
tags in source files.
We can get the list of changed source files from our parent Build and provide
them to trial, thus running the minimal set of test cases needed to cover the
Changes. This is useful for quick builds, especially in trees with a lot of
test cases. The testChanges
parameter controls this feature: if set, it
will override tests
.
The trial executable itself is typically just trial
, and is typically
found in the shell search path. It can be overridden with the trial
parameter. This is useful for Twisted's own unittests, which want to use the
copy of bin/trial that comes with the sources.
To influence the version of python being used for the tests, or to add flags to
the command, set the python
parameter. This can be a string (like
python2.2
) or a list (like ['python2.3', '-Wall']
).
Trial creates and switches into a directory named _trial_temp/
before
running the tests, and sends the twisted log (which includes all exceptions) to
a file named test.log
. This file will be pulled up to the master where
it can be seen as part of the status output.
from buildbot.steps.python_twisted import Trial
f.addStep(Trial(tests='petmail.test'))
This is a simple built-in step that will remove .pyc
files from the
workdir. This is useful in builds that update their source (and thus do not
automatically delete .pyc
files) but where some part of the build
process is dynamically searching for Python modules. Notably, trial has a bad
habit of finding old test modules.
from buildbot.steps.python_twisted import RemovePYCs
f.addStep(RemovePYCs())
Most of the work involved in a build will take place on the
buildslave. But occasionally it is useful to do some work on the
buildmaster side. The most basic way to involve the buildmaster is
simply to move a file from the slave to the master, or vice versa.
There are a pair of BuildSteps named FileUpload
and
FileDownload
to provide this functionality. FileUpload
moves a file up to the master, while FileDownload
moves
a file down from the master.
As an example, let's assume that there is a step which produces an HTML file within the source tree that contains some sort of generated project documentation. We want to move this file to the buildmaster, into a ~/public_html directory, so it can be visible to developers. This file will wind up in the slave-side working directory under the name docs/reference.html. We want to put it into the master-side ~/public_html/ref.html.
from buildbot.steps.shell import ShellCommand
from buildbot.steps.transfer import FileUpload

f.addStep(ShellCommand(command=["make", "docs"]))
f.addStep(FileUpload(slavesrc="docs/reference.html",
                     masterdest="~/public_html/ref.html"))
The masterdest=
argument will be passed to os.path.expanduser,
so things like “~” will be expanded properly. Non-absolute paths
will be interpreted relative to the buildmaster's base directory.
Likewise, the slavesrc=
argument will be expanded and
interpreted relative to the builder's working directory.
To move a file from the master to the slave, use the
FileDownload
command. For example, let's assume that some step
requires a configuration file that, for whatever reason, could not be
recorded in the source code repository or generated on the buildslave
side:
from buildbot.steps.shell import ShellCommand
from buildbot.steps.transfer import FileDownload

f.addStep(FileDownload(mastersrc="~/todays_build_config.txt",
                       slavedest="build_config.txt"))
f.addStep(ShellCommand(command=["make", "config"]))
Like FileUpload
, the mastersrc=
argument is interpreted
relative to the buildmaster's base directory, and the
slavedest=
argument is relative to the builder's working
directory. If the buildslave is running in ~buildslave, and the
builder's “builddir” is something like tests-i386, then the
workdir is going to be ~buildslave/tests-i386/build, and a
slavedest=
of foo/bar.html will get put in
~buildslave/tests-i386/build/foo/bar.html. Both of these commands
will create any missing intervening directories.
The maxsize=
argument lets you set a maximum size for the file
to be transferred. This may help to avoid surprises: transferring a
100MB coredump when you were expecting to move a 10kB status file
might take an awfully long time. The blocksize=
argument
controls how the file is sent over the network: larger blocksizes are
slightly more efficient but also consume more memory on each end, and
there is a hard-coded limit of about 640kB.
The mode=
argument allows you to control the access permissions
of the target file, traditionally expressed as an octal integer. The
most common value is probably 0755, which sets the “x” executable
bit on the file (useful for shell scripts and the like). The default
value for mode=
is None, which means the permission bits will
default to whatever the umask of the writing process is. The default
umask tends to be fairly restrictive, but at least on the buildslave
you can make it less restrictive with a --umask command-line option at
creation time (see Buildslave Options).
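For example, a sketch combining these options (the paths and size limit are hypothetical):

f.addStep(FileUpload(slavesrc="docs/reference.html",
                     masterdest="~/public_html/ref.html",
                     maxsize=10*1024*1024,
                     mode=0644))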
To transfer complete directories from the buildslave to the master, there
is a BuildStep named DirectoryUpload
. It works like FileUpload
,
just for directories. As an example, let's assume a step has
generated project documentation, which consists of many files (like the output
of doxygen or epydoc). We want to move the entire documentation to the
buildmaster, into a ~/public_html/docs
directory. On the slave-side
the directory can be found under docs
:
from buildbot.steps.shell import ShellCommand
from buildbot.steps.transfer import DirectoryUpload

f.addStep(ShellCommand(command=["make", "docs"]))
f.addStep(DirectoryUpload(slavesrc="docs",
                          masterdest="~/public_html/docs"))
The DirectoryUpload step will create all necessary directories and transfers empty directories, too.
The maxsize
and blocksize
parameters are the same as for
FileUpload
, although note that the size of the transferred data is
implementation-dependent, and probably much larger than you expect due to the
encoding used (currently tar).
The optional compress
argument can be given as 'gz'
or
'bz2'
to compress the datastream.
Sometimes it is useful to transfer a calculated value from the master
to the slave. Instead of having to create a temporary file and then
use FileDownload
, you can use one of the string download steps.
StringDownload
works just like FileDownload
except it
takes a single argument, s
, representing the string to download
instead of a mastersrc
argument.
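For example, a minimal sketch that writes a fixed string to a file on the slave (the filename and contents are hypothetical):

from buildbot.steps.transfer import StringDownload
f.addStep(StringDownload("MAINTAINER=buildbot\n",
                         slavedest="build_info.txt"))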
JSONStringDownload
is similar, except it takes an o
argument, which must be json serializable, and transfers that as a
json-encoded string to the slave.
JSONPropertiesDownload
transfers a json-encoded string that
represents a dictionary where properties
maps to a dictionary
of build property name to property value; and sourcestamp
represents the build's sourcestamp.
Occasionally, it is useful to execute some task on the master, for example to
create a directory, deploy a build result, or trigger some other centralized
processing. This is possible, in a limited fashion, with the
MasterShellCommand
step.
This step operates similarly to a regular ShellCommand
, but executes on
the master, instead of the slave. To be clear, the enclosing Build
object must still have a slave object, just as for any other step – only, in
this step, the slave does not do anything.
In this example, the step renames a tarball based on the day of the week.
from buildbot.steps.transfer import FileUpload
from buildbot.steps.master import MasterShellCommand

f.addStep(FileUpload(slavesrc="widgetsoft.tar.gz",
                     masterdest="/var/buildoutputs/widgetsoft-new.tar.gz"))
f.addStep(MasterShellCommand(command="""
 cd /var/buildoutputs;
 mv widgetsoft-new.tar.gz widgetsoft-`date +%a`.tar.gz"""))
Note that, by default, this step passes a copy of the buildmaster's environment
variables to the subprocess. To pass an explicit environment instead, add an
env={..}
argument.
The counterpart to the Triggerable described in section see Triggerable Scheduler is the Trigger BuildStep.
from buildbot.steps.trigger import Trigger

f.addStep(Trigger(schedulerNames=['build-prep'],
                  waitForFinish=True,
                  updateSourceStamp=True,
                  set_properties={ 'quick' : False },
                  copy_properties=[ 'release_code_name' ]))
The schedulerNames=
argument lists the Triggerables
that should be triggered when this step is executed. Note that
it is possible, but not advisable, to create a cycle where a build
continually triggers itself, because the schedulers are specified
by name.
If waitForFinish
is True, then the step will not finish until
all of the builds from the triggered schedulers have finished. If this
argument is False (the default) or not given, then the buildstep
succeeds immediately after triggering the schedulers.
If updateSourceStamp
is True (the default), then the step updates
the SourceStamp given to the Triggerables to include
got_revision
(the revision actually used in this build) as
revision
(the revision to use in the triggered builds). This is
useful to ensure that all of the builds use exactly the same
SourceStamp, even if other Changes have occurred while the build was
running.
Two parameters allow control of the properties that are passed to the triggered
scheduler. To simply copy properties verbatim, list them in the
copy_properties
parameter. To set properties explicitly, use the more
sophisticated set_properties
, which takes a dictionary mapping property
names to values. You may use WithProperties
here to dynamically
construct new property values.
A number of steps do not fall into any particular category.
The HLint step runs Twisted Lore, a lint-like checker over a set of
.xhtml
files. Any deviations from recommended style are flagged and put
in the output log.
The step looks at the list of changes in the build to determine which files to
check - it does not check all files. It specifically excludes any .xhtml
files in the top-level sandbox/
directory.
The step takes a single, optional, parameter: python
. This specifies the
Python executable to use to run Lore.
from buildbot.steps.python_twisted import HLint
f.addStep(HLint())
While it is a good idea to keep your build process self-contained in the source code tree, sometimes it is convenient to put more intelligence into your Buildbot configuration. One way to do this is to write a custom BuildStep. Once written, this Step can be used in the master.cfg file.
The best reason for writing a custom BuildStep is to better parse the
results of the command being run. For example, a BuildStep that knows
about JUnit could look at the logfiles to determine which tests had
been run, how many passed and how many failed, and then report more
detailed information than a simple rc==0
-based “good/bad”
decision.
BuildStep classes have some extra equipment, because they are their own factories. Consider the use of a BuildStep in master.cfg:
f.addStep(MyStep(someopt="stuff", anotheropt=1))
This creates a single instance of class MyStep
. However, Buildbot needs
a new object each time the step is executed. This is accomplished by storing
the information required to instantiate a new object in the factory
attribute. When the time comes to construct a new Build, BuildFactory consults
this attribute (via getStepFactory
) and instantiates a new step object.
When writing a new step class, then, keep in mind that you cannot do
anything "interesting" in the constructor – limit yourself to checking and
storing arguments. To ensure that these arguments are provided to any new
objects, call self.addFactoryArguments
with any keyword arguments your
constructor needs.
Keep a **kwargs
argument on the end of your options, and pass that up to
the parent class's constructor.
The whole thing looks like this:
class Frobnify(LoggingBuildStep):
    def __init__(self,
                 frob_what="frobee",
                 frob_how_many=None,
                 frob_how=None,
                 **kwargs):

        # check
        if frob_how_many is None:
            raise TypeError("Frobnify argument how_many is required")

        # call parent
        LoggingBuildStep.__init__(self, **kwargs)

        # set Frobnify attributes
        self.frob_what = frob_what
        self.frob_how_many = frob_how_many
        self.frob_how = frob_how

        # and record arguments for later
        self.addFactoryArguments(
            frob_what=frob_what,
            frob_how_many=frob_how_many,
            frob_how=frob_how)

class FastFrobnify(Frobnify):
    def __init__(self, speed=5, **kwargs):
        Frobnify.__init__(self, **kwargs)
        self.speed = speed
        self.addFactoryArguments(
            speed=speed)
Each BuildStep has a collection of “logfiles”. Each one has a short name, like “stdio” or “warnings”. Each LogFile contains an arbitrary amount of text, usually the contents of some output file generated during a build or test step, or a record of everything that was printed to stdout/stderr during the execution of some command.
These LogFiles are stored to disk, so they can be retrieved later.
Each can contain multiple “channels”, generally limited to three basic ones: stdout, stderr, and “headers”. For example, when a ShellCommand runs, it writes a few lines to the “headers” channel to indicate the exact argv strings being run, which directory the command is being executed in, and the contents of the current environment variables. Then, as the command runs, it adds a lot of “stdout” and “stderr” messages. When the command finishes, a final “header” line is added with the exit code of the process.
Status display plugins can format these different channels in different ways. For example, the web page shows LogFiles as text/html, with header lines in blue text, stdout in black, and stderr in red. A different URL is available which provides a text/plain format, in which stdout and stderr are collapsed together, and header lines are stripped completely. This latter option makes it easy to save the results to a file and run grep or whatever against the output.
Each BuildStep contains a mapping (implemented in a python dictionary) from LogFile name to the actual LogFile objects. Status plugins can get a list of LogFiles to display, for example, a list of HREF links that, when clicked, provide the full contents of the LogFile.
The most common way for a custom BuildStep to use a LogFile is to summarize the results of a ShellCommand (after the command has finished running). For example, a compile step with thousands of lines of output might want to create a summary of just the warning messages. If you were doing this from a shell, you would use something like:
grep "warning:" output.log >warnings.log
In a custom BuildStep, you could instead create a “warnings” LogFile
that contained the same text. To do this, you would add code to your
createSummary
method that pulls lines from the main output log
and creates a new LogFile with the results:
def createSummary(self, log):
    warnings = []
    for line in log.readlines():
        if "warning:" in line:
            warnings.append(line)
    self.addCompleteLog('warnings', "".join(warnings))
This example uses the addCompleteLog
method, which creates a
new LogFile, puts some text in it, and then “closes” it, meaning
that no further contents will be added. This LogFile will appear in
the HTML display under an HREF with the name “warnings”, since that
is the name of the LogFile.
You can also use addHTMLLog
to create a complete (closed)
LogFile that contains HTML instead of plain text. The normal LogFile
will be HTML-escaped if presented through a web page, but the HTML
LogFile will not. At the moment this is only used to present a pretty
HTML representation of an otherwise ugly exception traceback when
something goes badly wrong during the BuildStep.
In contrast, you might want to create a new LogFile at the beginning
of the step, and add text to it as the command runs. You can create
the LogFile and attach it to the build by calling addLog
, which
returns the LogFile object. You then add text to this LogFile by
calling methods like addStdout
and addHeader
. When you
are done, you must call the finish
method so the LogFile can be
closed. It may be useful to create and populate a LogFile like this
from a LogObserver method; see Adding LogObservers.
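As a minimal sketch (assuming a ShellCommand subclass; the log name and the text written are illustrative), a step might open a log when it starts and close it when the command completes:

# inside a ShellCommand subclass:
def start(self):
    # create and attach an open LogFile; addLog returns the LogFile object
    self.mylog = self.addLog("mylog")
    self.mylog.addHeader("starting the command\n")
    ShellCommand.start(self)

def commandComplete(self, cmd):
    self.mylog.addStdout("command finished\n")
    self.mylog.finish()   # close the LogFile; no further additions allowed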
The logfiles=
argument to ShellCommand
(see ShellCommand) creates new LogFiles and fills them in realtime
by asking the buildslave to watch an actual file on disk. The
buildslave will look for additions in the target file and report them
back to the BuildStep. These additions will be added to the LogFile by
calling addStdout
. These secondary LogFiles can be used as the
source of a LogObserver just like the normal “stdio” LogFile.
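For example, a ShellCommand that asks the buildslave to track a log file written by the command as it runs (the file name here is illustrative):

from buildbot.steps.shell import ShellCommand
f.addStep(ShellCommand(command=["make", "test"],
                       logfiles={"triallog": "_trial_temp/test.log"}))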
Once a LogFile has been added to a BuildStep with addLog()
,
addCompleteLog()
, addHTMLLog()
, or logfiles=
,
your BuildStep can retrieve it by using getLog()
:
from buildbot.steps.shell import ShellCommand
from buildbot.status.builder import SUCCESS, FAILURE

class MyBuildStep(ShellCommand):
    logfiles = { "nodelog": "_test/node.log" }

    def evaluateCommand(self, cmd):
        nodelog = self.getLog("nodelog")
        if "STARTED" in nodelog.getText():
            return SUCCESS
        else:
            return FAILURE
For a complete list of the methods you can call on a LogFile, please
see the docstrings on the IStatusLog
class in
buildbot/interfaces.py.
Most shell commands emit messages to stdout or stderr as they operate,
especially if you ask them nicely with a --verbose
flag of some
sort. They may also write text to a log file while they run. Your
BuildStep can watch this output as it arrives, to keep track of how
much progress the command has made. You can get a better measure of
progress by counting the number of source files compiled or test cases
run than by merely tracking the number of bytes that have been written
to stdout. This improves the accuracy and the smoothness of the ETA
display.
To accomplish this, you will need to attach a LogObserver
to
one of the log channels, most commonly to the “stdio” channel but
perhaps to another one which tracks a log file. This observer is given
all text as it is emitted from the command, and has the opportunity to
parse that output incrementally. Once the observer has decided that
some event has occurred (like a source file being compiled), it can
use the setProgress
method to tell the BuildStep about the
progress that this event represents.
There are a number of pre-built LogObserver
classes that you
can choose from (defined in buildbot.process.buildstep), and of
course you can subclass them to add further customization. The
LogLineObserver
class handles the grunt work of buffering and
scanning for end-of-line delimiters, allowing your parser to operate
on complete stdout/stderr lines. (Lines longer than a set maximum
length are dropped; the maximum defaults to 16384 bytes, but you can
change it by calling setMaxLineLength()
on your
LogLineObserver
instance. Use sys.maxint
for effective
infinity.)
For example, let's take a look at the TrialTestCaseCounter
, which is
used by the Trial step (see Trial) to count test cases as they are
run. As Trial executes, it emits lines like the following:
buildbot.test.test_config.ConfigTest.testDebugPassword ... [OK]
buildbot.test.test_config.ConfigTest.testEmpty ... [OK]
buildbot.test.test_config.ConfigTest.testIRC ... [FAIL]
buildbot.test.test_config.ConfigTest.testLocks ... [OK]
When the tests are finished, trial emits a long line of “======” and then some lines which summarize the tests that failed. We want to avoid parsing these trailing lines, because their format is less well-defined than the “[OK]” lines.
The parser class looks like this:
import re
from buildbot.process.buildstep import LogLineObserver

class TrialTestCaseCounter(LogLineObserver):
    _line_re = re.compile(r'^([\w\.]+) \.\.\. \[([^\]]+)\]$')
    numTests = 0
    finished = False

    def outLineReceived(self, line):
        if self.finished:
            return
        if line.startswith("=" * 40):
            self.finished = True
            return

        m = self._line_re.search(line.strip())
        if m:
            testname, result = m.groups()
            self.numTests += 1
            self.step.setProgress('tests', self.numTests)
This parser only pays attention to stdout, since that's where trial
writes the progress lines. It has a mode flag named finished
to
ignore everything after the “====” marker, and a scary-looking
regular expression to match each line while hopefully ignoring other
messages that might get displayed as the test runs.
Each time it identifies a test has been completed, it increments its
counter and delivers the new progress value to the step with
self.step.setProgress
. This class is specifically measuring
progress along the “tests” metric, in units of test cases (as
opposed to other kinds of progress like the “output” metric, which
measures in units of bytes). The Progress-tracking code uses each
progress metric separately to come up with an overall completion
percentage and an ETA value.
To connect this parser into the Trial
BuildStep,
Trial.__init__
ends with the following clause:
# this counter will feed Progress along the 'test cases' metric
counter = TrialTestCaseCounter()
self.addLogObserver('stdio', counter)
self.progressMetrics += ('tests',)
This creates a TrialTestCaseCounter and tells the step that the
counter wants to watch the “stdio” log. The observer is
automatically given a reference to the step in its .step
attribute.
Let's say that we've got some snazzy new unit-test framework called Framboozle. It's the hottest thing since sliced bread. It slices, it dices, it runs unit tests like there's no tomorrow. Plus if your unit tests fail, you can use its name for a Web 2.1 startup company, make millions of dollars, and hire engineers to fix the bugs for you, while you spend your afternoons lazily hang-gliding along a scenic pacific beach, blissfully unconcerned about the state of your tests.
To run a Framboozle-enabled test suite, you just run the 'framboozler' command from the top of your source code tree. The 'framboozler' command emits a bunch of stuff to stdout, but the most interesting bit is that it emits the line "FNURRRGH!" every time it finishes running a test case. You'd like to have a test-case counting LogObserver that watches for these lines and counts them, because counting them will help the buildbot more accurately calculate how long the build will take, and this will let you know exactly how long you can sneak out of the office for your hang-gliding lessons without anyone noticing that you're gone.
This will involve writing a new BuildStep (probably named "Framboozle") which inherits from ShellCommand. The BuildStep class definition itself will look something like this:
# START
from buildbot.steps.shell import ShellCommand
from buildbot.process.buildstep import LogLineObserver

class FNURRRGHCounter(LogLineObserver):
    numTests = 0
    def outLineReceived(self, line):
        if "FNURRRGH!" in line:
            self.numTests += 1
            self.step.setProgress('tests', self.numTests)

class Framboozle(ShellCommand):
    command = ["framboozler"]

    def __init__(self, **kwargs):
        ShellCommand.__init__(self, **kwargs)   # always upcall!
        counter = FNURRRGHCounter()
        self.addLogObserver('stdio', counter)
        self.progressMetrics += ('tests',)
# FINISH
So that's the code that we want to wind up using. How do we actually deploy it?
You have a couple of different options.
Option 1: The simplest technique is to simply put this text (everything from START to FINISH) in your master.cfg file, somewhere before the BuildFactory definition where you actually use it in a clause like:
f = BuildFactory()
f.addStep(SVN(svnurl="stuff"))
f.addStep(Framboozle())
Remember that master.cfg is secretly just a python program with one job: populating the BuildmasterConfig dictionary. And python programs are allowed to define as many classes as they like. So you can define classes and use them in the same file, just as long as the class is defined before some other code tries to use it.
This is easy, and it keeps the point of definition very close to the point of use, and whoever replaces you after that unfortunate hang-gliding accident will appreciate being able to easily figure out what the heck this stupid "Framboozle" step is doing anyways. The downside is that every time you reload the config file, the Framboozle class will get redefined, which means that the buildmaster will think that you've reconfigured all the Builders that use it, even though nothing changed. Bleh.
Option 2: Instead, we can put this code in a separate file, and import it into the master.cfg file just like we would the normal buildsteps like ShellCommand and SVN.
Create a directory named ~/lib/python, put everything from START to FINISH in ~/lib/python/framboozle.py, and run your buildmaster using:
PYTHONPATH=~/lib/python buildbot start MASTERDIR
or use the Makefile.buildbot to control the way buildbot start works. Or add something like this to something like your ~/.bashrc or ~/.bash_profile or ~/.cshrc:
export PYTHONPATH=~/lib/python
Once we've done this, our master.cfg can look like:
from framboozle import Framboozle
f = BuildFactory()
f.addStep(SVN(svnurl="stuff"))
f.addStep(Framboozle())
or:
import framboozle
f = BuildFactory()
f.addStep(SVN(svnurl="stuff"))
f.addStep(framboozle.Framboozle())
(check out the python docs for details about how "import" and "from A import B" work).
What we've done here is to tell python that every time it handles an "import" statement for some named module, it should look in our ~/lib/python/ for that module before it looks anywhere else. After our directories, it will try in a bunch of standard directories too (including the one where buildbot is installed). By setting the PYTHONPATH environment variable, you can add directories to the front of this search list.
Python knows that once it "import"s a file, it doesn't need to re-import it again. This means that reconfiguring the buildmaster (with "buildbot reconfig", for example) won't make it think the Framboozle class has changed every time, so the Builders that use it will not be spuriously restarted. On the other hand, you either have to start your buildmaster in a slightly weird way, or you have to modify your environment to set the PYTHONPATH variable.
Option 3: Install this code into a standard python library directory
Find out what your python's standard include path is by asking it:
80:warner@luther% python
Python 2.4.4c0 (#2, Oct  2 2006, 00:57:46)
[GCC 4.1.2 20060928 (prerelease) (Debian 4.1.1-15)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> import pprint
>>> pprint.pprint(sys.path)
['',
 '/usr/lib/python24.zip',
 '/usr/lib/python2.4',
 '/usr/lib/python2.4/plat-linux2',
 '/usr/lib/python2.4/lib-tk',
 '/usr/lib/python2.4/lib-dynload',
 '/usr/local/lib/python2.4/site-packages',
 '/usr/lib/python2.4/site-packages',
 '/usr/lib/python2.4/site-packages/Numeric',
 '/var/lib/python-support/python2.4',
 '/usr/lib/site-python']
In this case, putting the code into /usr/local/lib/python2.4/site-packages/framboozle.py would work just fine. We can use the same master.cfg "import framboozle" statement as in Option 2. By putting it in a standard include directory (instead of the decidedly non-standard ~/lib/python), we don't even have to set PYTHONPATH to anything special. The downside is that you probably have to be root to write to one of those standard include directories.
Option 4: Submit the code for inclusion in the Buildbot distribution
Make a fork of buildbot on http://github.com/djmitche/buildbot or post a patch in a bug at http://buildbot.net. In either case, post a note about your patch to the mailing list, so others can provide feedback and, eventually, commit it.
from buildbot.steps import framboozle
f = BuildFactory()
f.addStep(SVN(svnurl="stuff"))
f.addStep(framboozle.Framboozle())
And then you don't even have to install framboozle.py anywhere on your system, since it will ship with Buildbot. You don't have to be root, you don't have to set PYTHONPATH. But you do have to make a good case for Framboozle being worth going into the main distribution, you'll probably have to provide docs and some unit test cases, you'll need to figure out what kind of beer the author likes, and then you'll have to wait until the next release. But in some environments, all this is easier than getting root on your buildmaster box, so the tradeoffs may actually be worth it.
Putting the code in master.cfg (1) makes it available to that buildmaster instance. Putting it in a file in a personal library directory (2) makes it available for any buildmasters you might be running. Putting it in a file in a system-wide shared library directory (3) makes it available for any buildmasters that anyone on that system might be running. Getting it into the buildbot's upstream repository (4) makes it available for any buildmasters that anyone in the world might be running. It's all a matter of how widely you want to deploy that new class.
Each BuildStep has a collection of “links”. Like its collection of LogFiles, each link has a name and a target URL. The web status page creates HREFs for each link in the same box as it does for LogFiles, except that the target of the link is the external URL instead of an internal link to a page that shows the contents of the LogFile.
These external links can be used to point at build information hosted
on other servers. For example, the test process might produce an
intricate description of which tests passed and failed, or some sort
of code coverage data in HTML form, or a PNG or GIF image with a graph
of memory usage over time. The external link can provide an easy way
for users to navigate from the buildbot's status page to these
external web sites or file servers. Note that the step itself is
responsible for ensuring that there will be a document available at
the given URL (perhaps by using scp to copy the HTML output
to a ~/public_html/ directory on a remote web server). Calling
addURL
does not magically populate a web server.
To set one of these links, the BuildStep should call the addURL
method with the name of the link and the target URL. Multiple URLs can
be set.
In this example, we assume that the make test command causes a collection of HTML files to be created and put somewhere on the coverage.example.org web server, in a filename that incorporates the build number.
from buildbot.process.buildstep import BuildStep
from buildbot.process.properties import WithProperties

class TestWithCodeCoverage(BuildStep):
    command = ["make", "test",
               WithProperties("buildnum=%s", "buildnumber")]

    def createSummary(self, log):
        buildnumber = self.getProperty("buildnumber")
        url = "http://coverage.example.org/builds/%s.html" % buildnumber
        self.addURL("coverage", url)
You might also want to extract the URL from some special message output by the build process itself:
from StringIO import StringIO

class TestWithCodeCoverage(BuildStep):
    command = ["make", "test",
               WithProperties("buildnum=%s", "buildnumber")]

    def createSummary(self, log):
        output = StringIO(log.getText())
        for line in output.readlines():
            if line.startswith("coverage-url:"):
                url = line[len("coverage-url:"):].strip()
                self.addURL("coverage", url)
                return
Note that a build process which emits both stdout and stderr might cause this line to be split or interleaved between other lines. It might be necessary to restrict the getText() call to only stdout with something like this:
output = StringIO("".join([c[1] for c in log.getChunks() if c[0] == LOG_CHANNEL_STDOUT]))
Of course if the build is run under a PTY, then stdout and stderr will be merged before the buildbot ever sees them, so such interleaving will be unavoidable.
buildbot.steps.master.MasterShellCommand: Steps That Run on the Master
buildbot.steps.python.BuildEPYDoc: BuildEPYDoc
buildbot.steps.python.PyFlakes: PyFlakes
buildbot.steps.python.PyLint: PyLint
buildbot.steps.python_twisted.HLint: HLint
buildbot.steps.python_twisted.RemovePYCs: RemovePYCs
buildbot.steps.python_twisted.Trial: Trial
buildbot.steps.shell.Compile: Compile
buildbot.steps.shell.Configure: Configure
buildbot.steps.shell.PerlModuleTest: PerlModuleTest
buildbot.steps.shell.SetProperty: SetProperty
buildbot.steps.shell.ShellCommand: Using ShellCommands
buildbot.steps.shell.Test: Test
buildbot.steps.shell.TreeSize: TreeSize
buildbot.steps.source.BK: BitKeeper
buildbot.steps.source.Bzr: Bzr
buildbot.steps.source.CVS: CVS
buildbot.steps.source.Darcs: Darcs
buildbot.steps.source.Git: Git
buildbot.steps.source.Mercurial: Mercurial
buildbot.steps.source.P4: P4
buildbot.steps.source.SVN: SVN
buildbot.steps.subunit.SubunitShellCommand: SubunitShellCommand
buildbot.steps.transfer.DirectoryUpload: Transferring Files
buildbot.steps.transfer.FileDownload: Transferring Files
buildbot.steps.transfer.FileUpload: Transferring Files
buildbot.steps.transfer.JSONPropertiesDownload: Transferring Strings
buildbot.steps.transfer.JSONStringDownload: Transferring Strings
buildbot.steps.transfer.StringDownload: Transferring Strings
buildbot.steps.trigger.Trigger: Triggering Schedulers
buildbot.steps.vstudio.VC6: Visual C++
buildbot.steps.vstudio.VC7: Visual C++
buildbot.steps.vstudio.VC8: Visual C++
buildbot.steps.vstudio.VCExpress9: Visual C++
buildbot.steps.vstudio.VS2003: Visual C++
buildbot.steps.vstudio.VS2005: Visual C++
buildbot.steps.vstudio.VS2008: Visual C++

Until now, we assumed that a master can run builds at any slave whenever needed or desired. Sometimes, you want to enforce additional constraints on builds. For reasons like limited network bandwidth, old slave machines, or a self-willed database server, you may want to limit the number of builds (or build steps) that can access a resource.
The mechanism used by Buildbot is known as the read/write lock. It allows either many readers or a single writer, but not a combination of readers and writers. The general lock has been modified and extended for use in Buildbot. Firstly, the general lock allows an infinite number of readers. In Buildbot, we often want to put an upper limit on the number of readers, for example allowing two out of five possible builds at the same time. To do this, the lock counts the number of active readers. Secondly, the terms read mode and write mode are confusing in the Buildbot context. They have been replaced by counting mode (since the lock counts them) and exclusive mode. As a result of these changes, locks in Buildbot allow a number of builds (up to some fixed number) in counting mode, or they allow one build in exclusive mode.
Note that access modes are specified when a lock is used. That is, it is possible to have a single lock that is used by several slaves in counting mode, and several slaves in exclusive mode. In fact, this is the strength of the modes: accessing a lock in exclusive mode will prevent all counting-mode accesses.
Often, not all slaves are equal. To allow for this situation, Buildbot lets you set a separate upper limit on the count for each slave. In this way, you can have at most three concurrent builds at a fast slave, two at a slightly older slave, and one at all other slaves.
The final thing you can specify when you introduce a new lock is its scope. Some constraints are global: they must be enforced over all slaves. Other constraints are local to each slave. A master lock is used for the global constraints. You can ensure, for example, that at most one build (of all builds running at all slaves) accesses the database server. With a slave lock you can add a limit local to each slave. With such a lock, you can for example enforce an upper limit on the number of active builds at a slave, as above.
Time for a few examples. Below, a master lock is defined to protect a database, and a slave lock is created to limit the number of builds at each slave.
from buildbot import locks

db_lock = locks.MasterLock("database")
build_lock = locks.SlaveLock("slave_builds",
                             maxCount = 1,
                             maxCountForSlave = { 'fast': 3, 'new': 2 })
After importing locks from buildbot, db_lock
is defined to be a master
lock. The "database"
string is used for uniquely identifying the lock.
At the next line, a slave lock called build_lock
is created. It is
identified by the "slave_builds"
string. Since the requirements of the
lock are a bit more complicated, two optional arguments are also specified. The
maxCount
parameter sets the default limit for builds in counting mode to
1
. For the slave called 'fast'
however, we want to have at most
three builds, and for the slave called 'new'
the upper limit is two
builds running at the same time.
The next step is accessing the locks in builds. Buildbot allows a lock to be used during an entire build (from beginning to end), or only during a single build step. In the latter case, the lock is claimed for use just before the step starts, and released again when the step ends. To prevent deadlocks, it is not possible to claim or release locks at other times.
To use locks, you add them with a locks
argument to a build or a step.
Each use of a lock is either in counting mode (that is, possibly shared with
other builds) or in exclusive mode, and this is indicated with the syntax lock.access(mode), where mode is one of "counting" or "exclusive".
A build or build step proceeds only when it has acquired all locks. If a build or step needs a lot of locks, it may be starved by other builds that need fewer locks.
To illustrate the use of locks, here are a few examples.
from buildbot import locks
from buildbot.steps import source, shell
from buildbot.process import factory

db_lock = locks.MasterLock("database")
build_lock = locks.SlaveLock("slave_builds",
                             maxCount = 1,
                             maxCountForSlave = { 'fast': 3, 'new': 2 })

f = factory.BuildFactory()
f.addStep(source.SVN(svnurl="http://example.org/svn/Trunk"))
f.addStep(shell.ShellCommand(command="make all"))
f.addStep(shell.ShellCommand(command="make test",
                             locks=[db_lock.access('exclusive')]))

b1 = {'name': 'full1', 'slavename': 'fast',  'builddir': 'f1', 'factory': f,
      'locks': [build_lock.access('counting')] }

b2 = {'name': 'full2', 'slavename': 'new',   'builddir': 'f2', 'factory': f,
      'locks': [build_lock.access('counting')] }

b3 = {'name': 'full3', 'slavename': 'old',   'builddir': 'f3', 'factory': f,
      'locks': [build_lock.access('counting')] }

b4 = {'name': 'full4', 'slavename': 'other', 'builddir': 'f4', 'factory': f,
      'locks': [build_lock.access('counting')] }

c['builders'] = [b1, b2, b3, b4]
Here we have four builders b1, b2, b3, and b4, each running on a different slave. Each builder performs the same checkout, make, and test build step sequence. We want to enforce that at most one test step is executed across all slaves, due to restrictions with the database server. This is done by adding the locks= parameter to the third step. It takes a list of locks with their access mode. In this case only the db_lock is needed. The exclusive access mode is used to ensure there is at most one slave that executes the test step.
In addition to accessing the database exclusively, we also want slaves to stay responsive even under the load of a large number of builds being triggered. For this purpose, the slave lock called build_lock is defined. Since the restraint holds for entire builds, the lock is specified in the builder with 'locks': [build_lock.access('counting')].
Note that you will occasionally see lock.access(mode)
written as
LockAccess(lock, mode)
. The two are equivalent, but the former is
preferred.
The Buildmaster has a variety of ways to present build status to
various users. Each such delivery method is a “Status Target” object
in the configuration's status
list. To add status targets, you
just append more objects to this list:
c['status'] = []

from buildbot.status import html
c['status'].append(html.Waterfall(http_port=8010))

from buildbot.status import mail
m = mail.MailNotifier(fromaddr="buildbot@localhost",
                      extraRecipients=["builds@lists.example.com"],
                      sendToInterestedUsers=False)
c['status'].append(m)

from buildbot.status import words
c['status'].append(words.IRC(host="irc.example.com", nick="bb",
                             channels=["#example"]))
Most status delivery objects take a categories=
argument, which
can contain a list of “category” names: in this case, it will only
show status for Builders that are in one of the named categories.
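For example (the category name is hypothetical), a MailNotifier that only reports on builders in the "release" category:

from buildbot.status.mail import MailNotifier
mn = MailNotifier(fromaddr="buildbot@example.org",
                  sendToInterestedUsers=False,
                  extraRecipients=["release-builds@example.org"],
                  categories=["release"])
c['status'].append(mn)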
Each of these objects should be a service.MultiService which will be attached
to the BuildMaster object when the configuration is processed. They should use
self.parent.getStatus()
to get access to the top-level IStatus object,
either inside startService
or later. They may call
status.subscribe()
in startService
to receive notifications of
builder events, in which case they must define builderAdded
and related
methods. See the docstrings in buildbot/interfaces.py for full details.
The buildbot.status.html.WebStatus
status target runs a small
web server inside the buildmaster. You can point a browser at this web
server and retrieve information about every build the buildbot knows
about, as well as find out what the buildbot is currently working on.
The first page you will see is the “Welcome Page”, which contains links to all the other useful pages. By default, this page is served from the status/web/templates/root.html file in buildbot's library area. If you'd like to override this page or the other templates found there, copy the files you're interested in into a templates/ directory in the buildmaster's base directory.
One of the most complex resources provided by WebStatus
is the
“Waterfall Display”, which shows a time-based chart of events. This
somewhat-busy display provides detailed information about all steps of all
recent builds, and provides hyperlinks to look at individual build logs and
source changes. By simply reloading this page on a regular basis, you will see
a complete description of everything the buildbot is currently working on.
A similar, but more developer-oriented display is the "Grid" display. This arranges builds by SourceStamp (horizontal axis) and builder (vertical axis), and can provide quick information as to which revisions are passing or failing on which builders.
There are also pages with more specialized information. For example, there is a page which shows the last 20 builds performed by the buildbot, one line each. Each line is a link to detailed information about that build. By adding query arguments to the URL used to reach this page, you can narrow the display to builds that involved certain branches, or which ran on certain Builders. These pages are described in great detail below.
Buildbot now uses a templating system for the web interface. The source of these templates can be found in the status/web/templates/ directory in buildbot's library area. You can override these templates by creating alternate versions in a templates/ directory within the buildmaster's base directory.
The first time a buildmaster is created, the public_html/ directory is populated with some sample files, which you will probably want to customize for your own project. These files are all static: the buildbot does not modify them in any way as it serves them to HTTP clients.
Note that templates in templates/ take precedence over static files in public_html/.
from buildbot.status.html import WebStatus
c['status'].append(WebStatus(8080))
Note that the initial robots.txt file has Disallow lines for all of the dynamically-generated buildbot pages, to discourage web spiders and search engines from consuming a lot of CPU time as they crawl through the entire history of your buildbot. If you are running the buildbot behind a reverse proxy, you'll probably need to put the robots.txt file somewhere else (at the top level of the parent web server), and replace the URL prefixes in it with more suitable values.
If you would like to use an alternative root directory, add the
public_html=..
option to the WebStatus
creation:
c['status'].append(WebStatus(8080, public_html="/var/www/buildbot"))
In addition, if you are familiar with twisted.web Resource
Trees, you can write code to add additional pages at places inside
this web space. Just use webstatus.putChild
to place these
resources.
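For example (the directory path is illustrative), extra static files could be attached under /nightlies:

from twisted.web.static import File
from buildbot.status.html import WebStatus

ws = WebStatus(http_port=8080)
ws.putChild("nightlies", File("/var/buildbot/nightlies"))
c['status'].append(ws)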
The following section describes the special URLs and the status views they provide.
Certain URLs are “magic”, and the pages they serve are created by code in various classes in the buildbot.status.web package instead of being read from disk. The most common way to access these pages is for the buildmaster admin to write or modify the index.html page to contain links to them. Of course other project web pages can contain links to these buildbot pages as well.
Many pages can be modified by adding query arguments to the URL. For example, a page which shows the results of the most recent build normally does this for all builders at once. But by appending “?builder=i386” to the end of the URL, the page will show only the results for the “i386” builder. When used in this way, you can add multiple “builder=” arguments to see multiple builders. Remembering that URL query arguments are separated from each other with ampersands, a URL that ends in “?builder=i386&builder=ppc” would show builds for just those two Builders.
The branch=
query argument can be used on some pages. This
filters the information displayed by that page down to only the builds
or changes which involved the given branch. Use branch=trunk
to
reference the trunk: if you aren't intentionally using branches,
you're probably using trunk. Multiple branch=
arguments can be
used to examine multiple branches at once (so appending
?branch=foo&branch=bar
to the URL will show builds involving
either branch). No branch=
arguments means to show builds and
changes for all branches.
Some pages may include the Builder name or the build number in the main part of the URL itself. For example, a page that describes Build #7 of the “i386” builder would live at /builders/i386/builds/7.
The table below lists all of the internal pages and the URLs that can be used to access them.
/waterfall
By adding one or more “builder=” query arguments, the Waterfall is restricted to only showing information about the given Builders. By adding one or more “branch=” query arguments, the display is restricted to showing information about the given branches. In addition, adding one or more “category=” query arguments to the URL will limit the display to Builders that were defined with one of the given categories.
A 'show_events=true' query argument causes the display to include non-Build events, like slaves attaching and detaching, as well as reconfiguration events. 'show_events=false' hides these events. The default is to show them.
By adding the 'failures_only=true' query argument, the Waterfall is restricted to only showing information about the builders that are currently failing. A builder is considered failing if the last finished build was not successful, a step in the current build(s) is failing, or if the builder is offline.
The last_time=
, first_time=
, and show_time=
arguments will control what interval of time is displayed. The default
is to show the latest events, but these can be used to look at earlier
periods in history. The num_events=
argument also provides a
limit on the size of the displayed page.
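Putting these together, a URL such as the following (the host and builder name are illustrative) would show only the runtests builder on trunk, without non-Build events:

http://buildbot.example.org:8010/waterfall?builder=runtests&branch=trunk&show_events=false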
The Waterfall has references to resources in many of the other portions
of the URL space: /builders for access to individual builds,
/changes for access to information about source code changes,
etc.
/grid
By adding one or more “category=” arguments the grid will be restricted to revisions in those categories.
A “width=N” argument will limit the number of revisions shown to N, defaulting to 5.
A “branch=BRANCHNAME” argument will limit the grid to revisions on
branch BRANCHNAME.
/tgrid
This page also has a “rev_order=” query argument that
lets you change in what order revisions are shown.
Valid values are “asc” (ascending, oldest revision first)
and “desc” (descending, newest revision first).
/console
It allows a developer to quickly see the status of each builder for the first build including his or her change. A green box means that the change succeeded for all the steps for a given builder. A red box means that the change introduced a new regression on a builder. An orange box means that at least one of the tests failed, but it was also failing in the previous build, so it is not possible to tell whether there were any regressions from this change. Finally, a yellow box means that the test is in progress.
By adding one or more “builder=” query arguments, the Console view is restricted to only showing information about the given Builders. Adding a “repository=” argument will limit display to a given repository. By adding a “branch=” query argument, the display is restricted to showing information about the given branch. In addition, adding one or more “category=” query arguments to the URL will limit the display to Builders that were defined with one of the given categories.
By adding one or more “name=” query arguments to the URL, the console view is restricted to only showing changes made by the given users.
NOTE: To use this page, your buildbot.css file in public_html must be the one found in buildbot/status/web/extended.css.
The console view is still in development. At this moment by default the view
sorts revisions lexically, which can lead to odd behavior with non-integer
revisions (e.g., git), or with integer revisions of different length (e.g., 999
and 1000). It also has some issues with displaying multiple branches at the
same time. If you do have multiple branches, you should use the “branch=”
query argument. The order_console_by_time
option may help sorting
revisions, although it depends on the date being set correctly in each commit:
w = html.WebStatus(http_port=8080, order_console_by_time=True)
/rss
/atom
/json
/json/help
See this page for detailed interactive documentation of the JSON output
formats.
/buildstatus?builder=$BUILDERNAME&number=$BUILDNUM
/builders/$BUILDERNAME
A numbuilds=
argument will control how many build lines
are displayed (5 by default).
/builders/$BUILDERNAME/builds/$BUILDNUM
/builders/$BUILDERNAME/builds/$BUILDNUM/steps/$STEPNAME
/builders/$BUILDERNAME/builds/$BUILDNUM/steps/$STEPNAME/logs/$LOGNAME
/builders/$BUILDERNAME/builds/$BUILDNUM/steps/$STEPNAME/logs/$LOGNAME/text
/changes
/changes/NN
/buildslaves
A “no_builders=1” URL argument will omit the builders column. This is
useful if each buildslave is assigned to a large number of builders.
/one_line_per_build
One or more builder=
or branch=
arguments can be used to
restrict the list. In addition, a numbuilds=
argument will
control how many lines are displayed (20 by default).
/builders
As with /one_line_per_build
, this page will also honor
builder=
and branch=
arguments.
/about
There are also a set of web-status resources that are intended for use by other programs, rather than humans.
/change_hook
The most common way to run a WebStatus
is on a regular TCP
port. To do this, just pass in the TCP port number when you create the
WebStatus
instance; this is called the http_port
argument:
from buildbot.status.html import WebStatus
c['status'].append(WebStatus(http_port=8080))
The http_port
argument is actually a “strports specification” for the
port that the web server should listen on. This can be a simple port number, or
a string like http_port="tcp:8080:interface=127.0.0.1"
(to limit
connections to the loopback interface, and therefore to clients running on the
same host).
If instead (or in addition) you provide the distrib_port
argument, a twisted.web distributed server will be started either on a
TCP port (if distrib_port
is like "tcp:12345"
) or more
likely on a UNIX socket (if distrib_port
is like
"unix:/path/to/socket"
).
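For example (the socket path is illustrative):

from buildbot.status.html import WebStatus
c['status'].append(WebStatus(http_port=8080,
                             distrib_port="unix:/home/buildbot/.web-pb"))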
The public_html
option gives the path to a regular directory of HTML
files that will be displayed alongside the various built-in URLs buildbot
supplies. This is most often used to supply CSS files (/buildbot.css
)
and a top-level navigational file (/index.html
), but can also serve any
other files required - even build results!
The buildbot web status is, by default, read-only. It displays lots of information, but users are not allowed to affect the operation of the buildmaster. However, there are a number of supported activities that can be enabled, and Buildbot can also perform rudimentary username/password authentication. The actions are:
forceBuild
forceAllBuilds
pingBuilder
gracefulShutdown
stopBuild
stopAllBuilds
cancelPendingBuild
stopChange
cleanShutdown
For each of these actions, you can configure buildbot to never allow the action, always allow the action, allow the action to any authenticated user, or check with a function of your creation to determine whether the action is OK.
This is all configured with the Authz
class:
from buildbot.status.html import WebStatus
from buildbot.status.web.authz import Authz

authz = Authz(forceBuild=True,
              stopBuild=True)
c['status'].append(WebStatus(http_port=8080, authz=authz))
Each of the actions listed above is an option to Authz
. You can specify
False
(the default) to prohibit that action or True
to enable it.
If you do not wish to allow strangers to perform actions, but do want
developers to have such access, you will need to add some authentication
support. Pass an instance of status.web.auth.IAuth
as an auth
keyword argument to Authz
, and specify the action as "auth"
.
from buildbot.status.html import WebStatus
from buildbot.status.web.authz import Authz
from buildbot.status.web.auth import BasicAuth

users = [('bob', 'secret-pass'), ('jill', 'super-pass')]
authz = Authz(auth=BasicAuth(users),
              forceBuild='auth',  # only authenticated users
              pingBuilder=True,   # but anyone can do this
             )
c['status'].append(WebStatus(http_port=8080, authz=authz))

# or
from buildbot.status.web.auth import HTPasswdAuth
auth = HTPasswdAuth('/path/to/htpasswd')
The class BasicAuth
implements a basic authentication mechanism using a
list of user/password tuples provided from the configuration file. The class
HTPasswdAuth
implements an authentication against an .htpasswd
file.
If you need still-more flexibility, pass a function for the authentication action. That function will be called with an authenticated username and some action-specific arguments, and should return true if the action is authorized.
def canForceBuild(username, builder_status):
    if builder_status.getName() == 'smoketest':
        return True     # any authenticated user can run smoketest
    elif username == 'releng':
        return True     # releng can force whatever they want
    else:
        return False    # otherwise, no way.

authz = Authz(auth=BasicAuth(users),
              forceBuild=canForceBuild)
The forceBuild
and pingBuilder
actions both supply a
BuilderStatus object. The stopBuild
action supplies a BuildStatus
object. The cancelPendingBuild
action supplies a BuildRequest. The
remainder do not supply any extra arguments.
The WebStatus uses a separate log file (http.log) to avoid cluttering buildbot's default log (twistd.log) with request/response messages. This log is also, by default, rotated in the same way as the twistd.log file, but you can customize the rotation logic with the following parameters if you need a different behaviour.
logRotateLength
An integer defining the file size at which log files are rotated.
maxRotatedFiles
The maximum number of old log files to keep.
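For example, assuming the keyword arguments named above, a WebStatus with a roughly 10MB rotation threshold that keeps ten old files might be configured as:

from buildbot.status.html import WebStatus
c['status'].append(WebStatus(http_port=8080,
                             logRotateLength=10000000, # bytes per http.log file
                             maxRotatedFiles=10))      # old log files to keep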
These arguments add URL links to various places in the WebStatus, such as revisions, repositories, projects and, optionally, ticket/bug references in change comments.
The revlink
is used to create links from revision IDs in the web status
to a web-view of your source control system. The parameter's value must be a
format string, a dict mapping a string (repository name) to format strings, or a callable.
The format string should use '%s' to insert the revision id in the url. For
example, for Buildbot on github:
revlink='http://github.com/buildbot/buildbot/tree/%s'
(The revision ID
will be URL-encoded before being inserted in the replacement string.)
The callable takes the revision id and the repository as arguments, and should return an URL to the revision.
Note that SourceStamps that are not created from version-control changes (e.g., those created by a Nightly or Periodic scheduler) will have an empty repository string, as the repository is not known.
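As a sketch of the callable form (the URL scheme is hypothetical, and we assume that returning None is acceptable when no link can be made):

def my_revlink(revision, repository):
    # the repository may be empty for non-VC SourceStamps (see the note above)
    if not repository:
        return None
    return "http://git.example.org/%s/commit/%s" % (repository, revision)

w = html.WebStatus(http_port=8080, revlink=my_revlink)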
The changecommentlink
argument can be used to create links to
ticket-ids from change comments (i.e. #123).
The argument can either be a tuple of three strings, a dictionary mapping
strings (project names) to tuples or a callable taking a changetext
(a jinja2.Markup
instance) and a project name, returning the
same change text with additional links/html tags added to it.
If the tuple is used, it should contain three strings where the first
element is a regex that
searches for strings (with match groups), the second is a replace-string
that, when substituted with \1 etc, yields the URL and the third
is the title attribute of the link.
(The <a href="" title=""></a>
is added by the system.)
So, for Trac tickets (#42, etc):
changecommentlink(r"#(\d+)", r"http://buildbot.net/trac/ticket/\1", r"Ticket \g<0>")
.
projects
A dictionary from strings to strings, mapping project names to URLs, or a callable taking a project name and returning an URL.
repositories
Same as the projects arg above: a dict or callable mapping repository names to URLs.
The order_console_by_time
option affects the rendering of the console;
see the description of the console above.
The numbuilds
option determines the number of builds that most status
displays will show. It can usually be overridden in the URL, e.g.,
?numbuilds=13
.
The num_events
option gives the default number of events that the
waterfall will display. The num_events_max
gives the maximum number of
events displayed, even if the web browser requests more.
The /change_hook URL is a magic URL which will accept HTTP requests and translate
them into changes for buildbot. Implementations (such as a trivial json-based endpoint
and a github implementation) can be found in master/buildbot/status/web/hooks.
The format of the URL is /change_hook/DIALECT, where DIALECT is a package within the
hooks directory. change_hook is disabled by default, and each DIALECT has to be enabled
separately, for security reasons.
An example WebStatus configuration line which enables change_hook and two DIALECTS:
c['status'].append(html.WebStatus(http_port=8010,
                                  allowForce=True,
                                  change_hook_dialects={
                                      'base': True,
                                      'github': {'option1': True,
                                                 'option2': False}}))
Within the WebStatus arguments, the change_hook
key enables/disables the module
and change_hook_dialects
whitelists DIALECTs where the keys are the module names
and the values are optional arguments which will be passed to the hooks.
The post_build_request.py script in master/contrib allows for the submission of an arbitrary change request. Run 'post_build_request.py --help' for more information. The 'base' dialect must be enabled for this to work.
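As an illustrative sketch only (the exact parameter names are defined by the base dialect's implementation in master/buildbot/status/web/hooks, so treat them as assumptions), a change might be submitted with an HTTP POST along these lines:

import urllib
# hypothetical change attributes, form-encoded and POSTed to the hook
params = urllib.urlencode({'who': 'developer1',
                           'revision': '1234',
                           'comments': 'fix the frobnicator'})
urllib.urlopen("http://buildbot.example.org:8010/change_hook/base", params)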
The buildbot can also send email when builds finish. The most common use of this is to tell developers when their change has caused the build to fail. It is also quite common to send a message to a mailing list (usually named “builds” or similar) about every build.
The MailNotifier
status target is used to accomplish this. You
configure it by specifying who mail should be sent to, under what
circumstances mail should be sent, and how to deliver the mail. It can
be configured to only send out mail for certain builders, and only
send messages when the build fails, or when the builder transitions
from success to failure. It can also be configured to include various
build logs in each message.
By default, the message will be sent to the Interested Users list (see Doing Things With Users), which includes all developers who made changes in the
build. You can add additional recipients with the extraRecipients argument.
You can also add interested users by setting the owners
build property
to a list of users in the scheduler constructor (see Configuring Schedulers).
Each MailNotifier sends mail to a single set of recipients. To send different kinds of mail to different recipients, use multiple MailNotifiers.
The following simple example will send an email upon the completion of each build, to just those developers whose Changes were included in the build. The email contains a description of the Build, its results, and URLs where more information can be obtained.
from buildbot.status.mail import MailNotifier

mn = MailNotifier(fromaddr="buildbot@example.org",
                  lookup="example.org")
c['status'].append(mn)
To get a simple one-message-per-build (say, for a mailing list), use
the following form instead. This form does not send mail to individual
developers (and thus does not need the lookup=
argument,
explained below), instead it only ever sends mail to the “extra
recipients” named in the arguments:
mn = MailNotifier(fromaddr="buildbot@example.org", sendToInterestedUsers=False, extraRecipients=['listaddr@example.org'])
If your SMTP host requires authentication before it allows you to send emails, this can also be done by specifying “smtpUser” and “smtpPassword”:
mn = MailNotifier(fromaddr="myuser@gmail.com", sendToInterestedUsers=False, extraRecipients=["listaddr@example.org"], relayhost="smtp.gmail.com", smtpPort=587, smtpUser="myuser@gmail.com", smtpPassword="mypassword")
If you want to require Transport Layer Security (TLS), then you can also set “useTls”:
mn = MailNotifier(fromaddr="myuser@gmail.com", sendToInterestedUsers=False, extraRecipients=["listaddr@example.org"], useTls=True, relayhost="smtp.gmail.com", smtpPort=587, smtpUser="myuser@gmail.com", smtpPassword="mypassword")
In some cases it is desirable to have different information than what is
provided in a standard MailNotifier message. For this purpose MailNotifier
provides the argument messageFormatter
(a function) which allows for the
creation of messages with unique content.
For example, if only short emails are desired (e.g., for delivery to phones):
from buildbot.status.builder import Results

def messageFormatter(mode, name, build, results, master_status):
    result = Results[results]

    text = list()
    text.append("STATUS: %s" % result.title())
    return { 'body' : "\n".join(text),
             'type' : 'plain' }

mn = MailNotifier(fromaddr="buildbot@example.org",
                  sendToInterestedUsers=False,
                  mode='problem',
                  extraRecipients=['listaddr@example.org'],
                  messageFormatter=messageFormatter)
Another example of a function delivering a customized html email containing the last 80 log lines of the last build step is given below:
import cgi
import datetime

from buildbot.status.builder import Results

def html_message_formatter(mode, name, build, results, master_status):
    """Provide a customized message to BuildBot's MailNotifier.

    The last 80 lines of the log are provided as well as the changes
    relevant to the build.  Message content is formatted as html.
    """
    result = Results[results]

    limit_lines = 80
    text = list()
    text.append(u'<h4>Build status: %s</h4>' % result.upper())
    text.append(u'<table cellspacing="10"><tr>')
    text.append(u"<td>Buildslave for this Build:</td><td><b>%s</b></td></tr>" %
                build.getSlavename())
    if master_status.getURLForThing(build):
        text.append(u'<tr><td>Complete logs for all build steps:</td><td><a href="%s">%s</a></td></tr>'
                    % (master_status.getURLForThing(build),
                       master_status.getURLForThing(build)))
    text.append(u'<tr><td>Build Reason:</td><td>%s</td></tr>' % build.getReason())
    source = u""
    ss = build.getSourceStamp()
    if ss.branch:
        source += u"[branch %s] " % ss.branch
    if ss.revision:
        source += ss.revision
    else:
        source += u"HEAD"
    if ss.patch:
        source += u" (plus patch)"
    text.append(u"<tr><td>Build Source Stamp:</td><td><b>%s</b></td></tr>" % source)
    text.append(u"<tr><td>Blamelist:</td><td>%s</td></tr>" %
                ",".join(build.getResponsibleUsers()))
    text.append(u'</table>')
    if ss.changes:
        text.append(u'<h4>Recent Changes:</h4>')
        for c in ss.changes:
            cd = c.asDict()
            when = datetime.datetime.fromtimestamp(cd['when']).ctime()
            text.append(u'<table cellspacing="10">')
            text.append(u'<tr><td>Repository:</td><td>%s</td></tr>' % cd['repository'])
            text.append(u'<tr><td>Project:</td><td>%s</td></tr>' % cd['project'])
            text.append(u'<tr><td>Time:</td><td>%s</td></tr>' % when)
            text.append(u'<tr><td>Changed by:</td><td>%s</td></tr>' % cd['who'])
            text.append(u'<tr><td>Comments:</td><td>%s</td></tr>' % cd['comments'])
            text.append(u'</table>')
            files = cd['files']
            if files:
                text.append(u'<table cellspacing="10"><tr><th align="left">Files</th><th>URL</th></tr>')
                for file in files:
                    text.append(u'<tr><td>%s:</td><td>%s</td></tr>' %
                                (file['name'], file['url']))
                text.append(u'</table>')
    text.append(u'<br>')
    # get log for last step
    logs = build.getLogs()
    # logs within a step are in reverse order. Search back until we find stdio
    for log in reversed(logs):
        if log.getName() == 'stdio':
            break
    name = "%s.%s" % (log.getStep().getName(), log.getName())
    status, dummy = log.getStep().getResults()
    content = log.getText().splitlines() # Note: can be VERY LARGE
    url = u'%s/steps/%s/logs/%s' % (master_status.getURLForThing(build),
                                    log.getStep().getName(),
                                    log.getName())

    text.append(u'<i>Detailed log of last build step:</i> <a href="%s">%s</a>'
                % (url, url))
    text.append(u'<br>')
    text.append(u'<h4>Last %d lines of "%s"</h4>' % (limit_lines, name))
    unilist = list()
    for line in content[len(content)-limit_lines:]:
        unilist.append(cgi.escape(unicode(line, 'utf-8')))
    text.append(u'<pre>')
    text.extend(unilist)
    text.append(u'</pre>')
    text.append(u'<br><br>')
    text.append(u'<b>-The BuildBot</b>')
    return { 'body': u"\n".join(text),
             'type': 'html' }

mn = MailNotifier(fromaddr="buildbot@example.org",
                  sendToInterestedUsers=False,
                  mode='failing',
                  extraRecipients=['listaddr@example.org'],
                  messageFormatter=html_message_formatter)
MailNotifier accepts the following constructor arguments:

fromaddr
The email address to be used in the From header.

sendToInterestedUsers
If True (the default), send mail to all of the Interested Users. If False, only send mail to the extraRecipients list.

extraRecipients
A list of email addresses to which messages should be sent, in addition to the Interested Users.

subject
A string to be used as the subject line of the message. %(builder)s
will be replaced with the name of the builder which provoked the message.

mode
Selects which builds trigger mail. One of:
all: send mail about every build.
change: only send mail about builds whose result differs from the previous build.
failing: only send mail about builds which fail.
passing: only send mail about builds which succeed.
problem: only send mail about builds which failed when the previous build passed.

builders
A list of builder names: only send mail about builds from these builders.

categories
A list of category names: only send mail about builders in one of these categories.

addLogs
If True, include the build logs in the messages.

addPatch
If True, include the patch content (if any) in the messages.

relayhost
The host through which outgoing mail is relayed.

smtpPort
The port to use for outbound SMTP connections.

useTls
If True (default is False), MailNotifier sends emails using TLS and authenticates with the relayhost. When using TLS, the arguments smtpUser and smtpPassword must also be specified.

smtpUser
The user name to use when authenticating with the relayhost.

smtpPassword
The password to use when authenticating with the relayhost.

lookup
(implementor of IEmailLookup). Object which provides
IEmailLookup, which is responsible for mapping User names (which come
from the VC system) into valid email addresses. If not provided, the
notifier will only be able to send mail to the addresses in the
extraRecipients list. Most of the time you can use a simple Domain
instance. As a shortcut, you can pass as string: this will be treated
as if you had provided Domain(str). For example,
lookup='twistedmatrix.com' will allow mail to be sent to all
developers whose SVN usernames match their twistedmatrix.com account
names. See buildbot/status/mail.py for more details.
messageFormatter
A messageFormatter
function takes the mail mode (mode
), builder
name (name
), the build status (build
), the result code
(results
), and the BuildMaster status (master_status
). It
returns a dictionary. The body
key gives a string that is the complete
text of the message. The type
key is the message type ('plain' or
'html'). The 'html' type should be used when generating an HTML message. The
subject
key is optional, but gives the subject for the email.
extraHeaders
A dictionary of extra headers to add to the messages sent.
As a help to those writing messageFormatter
functions, the following
table describes how to get some useful pieces of information from the various
status objects:
Name of the builder that generated this event
    name
Title of the buildmaster
    master_status.getProjectName()
MailNotifier mode
    mode
    (one of all, failing, problem, change, passing)
Builder result as a string:

    from buildbot.status.builder import Results
    result_str = Results[results]
    # one of 'success', 'warnings', 'failure', 'skipped', or 'exception'

URL to build page
    master_status.getURLForThing(build)
URL to buildbot main page
    master_status.getBuildbotURL()
Build text
    build.getText()
Mapping of property names to values
    build.getProperties()
    (a Properties instance)
Slave name
    build.getSlavename()
Build reason
    build.getReason()
List of responsible users
    build.getResponsibleUsers()
Source information:

    ss = build.getSourceStamp()
    if ss:
        branch = ss.branch
        revision = ss.revision
        patch = ss.patch
        changes = ss.changes # list
A change object has the following useful attributes: who, revision, branch, when, files, and comments. The Change methods asText and asDict can be used to format this information: asText returns a list of strings, and asDict returns a dictionary suitable for html/mail rendering.
logs = list()
for log in build.getLogs():
    log_name = "%s.%s" % (log.getStep().getName(), log.getName())
    log_status, dummy = log.getStep().getResults()
    log_body = log.getText().splitlines() # Note: can be VERY LARGE
    log_url = '%s/steps/%s/logs/%s' % (master_status.getURLForThing(build),
                                       log.getStep().getName(),
                                       log.getName())
    logs.append((log_name, log_url, log_body, log_status))
The buildbot.status.words.IRC
status target creates an IRC bot
which will attach to certain channels and be available for status
queries. It can also be asked to announce builds as they occur, or be
told to shut up.
from buildbot.status import words
irc = words.IRC("irc.example.org", "botnickname",
                channels=["channel1", "channel2"],
                password="mysecretpassword",
                notify_events={
                  'exception': 1,
                  'successToFailure': 1,
                  'failureToSuccess': 1,
                })
c['status'].append(irc)
Take a look at the docstring for words.IRC
for more details on
configuring this service. Note that the useSSL
option requires
PyOpenSSL
. The password
argument, if provided, will be sent to
Nickserv to claim the nickname: some IRC servers will not allow clients to send
private messages until they have logged in with a password.
To use the service, you address messages at the buildbot, either
normally (botnickname: status
) or with private messages
(/msg botnickname status
). The buildbot will respond in kind.
Some of the commands currently available:
list builders
status BUILDER
status all
watch BUILDER
last BUILDER
notify on|off|list EVENT
(where EVENT is one of started, finished, success, failure, exception, or a transition of the form xToY, such as successToFailure)
help COMMAND
(use help commands to get a list of known commands)
source
version
Additionally, the config file may specify default notification options as shown in the example earlier.
If the allowForce=True
option was used, some additional commands
will be available:
force build BUILDER REASON
stop build BUILDER REASON
import buildbot.status.client
pbl = buildbot.status.client.PBListener(port=int, user=str,
                                        passwd=str)
c['status'].append(pbl)
This sets up a PB listener on the given TCP port, to which a PB-based
status client can connect and retrieve status information.
buildbot statusgui (see statusgui) is an example of such a status client. The port argument can also be a strports specification string.
import buildbot.status.status_push

def Process(self):
    print str(self.queue.popChunk())
    self.queueNextServerPush()

sp = buildbot.status.status_push.StatusPush(serverPushCb=Process,
                                            bufferDelay=0.5,
                                            retryDelay=5)
c['status'].append(sp)
StatusPush batches the events it receives and sends them to the serverPushCb callback every bufferDelay seconds. The callback should pop items from the queue and then queue the next callback. If no items were popped from self.queue, the next push is instead scheduled retryDelay seconds later.
import buildbot.status.status_push
sp = buildbot.status.status_push.HttpStatusPush(
        serverUrl="http://example.com/submit")
c['status'].append(sp)
HttpStatusPush builds on StatusPush and sends HTTP requests to serverUrl, with all the items json-encoded. This is useful for building a status front end outside of buildbot, for better scalability.
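As an illustration, a receiving server could be as simple as the following sketch. The exact encoding of the pushed items is an assumption here; consult the HttpStatusPush source for the precise request format before relying on it:

# A minimal sketch of an external status receiver for HttpStatusPush.
# It assumes the pushed items arrive as a JSON-encoded POST body; check
# the HttpStatusPush source for the exact encoding.
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
import json

class StatusReceiver(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers['Content-Length']))
        for event in json.loads(body):
            print event  # store or forward the event here
        self.send_response(200)
        self.end_headers()

HTTPServer(('', 8080), StatusReceiver).serve_forever()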
TODO: this needs a lot more examples
Each status plugin is an object which provides the twisted.application.service.IService interface, which creates a tree of Services with the buildmaster at the top [not strictly true]. The status plugins are all children of an object which implements buildbot.interfaces.IStatus, the main status object. From this object, the plugin can retrieve anything it wants about current and past builds. It can also subscribe to hear about new and upcoming builds.
Status plugins which only react to human queries (like the Waterfall display) never need to subscribe to anything: they are idle until someone asks a question, then wake up and extract the information they need to answer it, then they go back to sleep. Plugins which need to act spontaneously when builds complete (like the MailNotifier plugin) need to subscribe to hear about new builds.
If the status plugin needs to run network services (like the HTTP server used by the Waterfall plugin), they can be attached as Service children of the plugin itself, using the IServiceCollection interface.
buildbot.status.client.PBListener: PBListener
buildbot.status.mail.MailNotifier: MailNotifier
buildbot.status.status_push.HttpStatusPush: HttpStatusPush
buildbot.status.status_push.StatusPush: StatusPush
buildbot.status.web.baseweb.WebStatus: WebStatus
buildbot.status.words.IRC: IRC Bot
c['buildbotURL']: Project Definitions
c['buildCacheSize']: Data Lifetime
c['builders']: Builder Configuration
c['buildHorizon']: Data Lifetime
c['change_source']: Configuring Change Sources
c['changeCacheSize']: Data Lifetime
c['changeHorizon']: Data Lifetime
c['debugPassword']: Debug Options
c['eventHorizon']: Data Lifetime
c['logCompressionLimit']: Log Handling
c['logCompressionMethod']: Log Handling
c['logHorizon']: Data Lifetime
c['logMaxSize']: Log Handling
c['logMaxTailSize']: Log Handling
c['manhole']: Debug Options
c['mergeRequests']: Merging BuildRequests
c['prioritizeBuilders']: Prioritizing Builders
c['projectName']: Project Definitions
c['projectURL']: Project Definitions
c['properties']: Defining Global Properties
c['schedulers']: Configuring Schedulers
c['slavePortnum']: Setting the PB Port for Slaves
c['slaves']: Buildslaves
c['status']: Status Targets

This section describes the command-line tools available after buildbot installation. Since version 0.8 the one-for-all buildbot command-line tool has been divided into two parts, namely buildbot and buildslave. The latter was separated from the main command-line tool to minimize the dependencies required for running a buildslave, while leaving all other functions to the buildbot tool.
Every command-line tool has a list of global options and a set of commands which have their own options. One can run these tools in the following way:
buildbot [global options] command [command options]
Global options are the same for both tools and perform the following actions:
--help
--verbose
--version
One can also get help on any command by specifying --help as a command option:
buildbot command --help
You can also use manual pages for buildbot and buildslave for quick reference on command-line options.
The buildbot command-line tool can be used to start or stop a buildmaster and to interact with a running buildmaster. Some of its subcommands are intended for buildmaster admins, while some are for developers who are editing the code that the buildbot is monitoring.
The following buildbot sub-commands are intended for buildmaster administrators:
This creates a new directory and populates it with files that allow it to be used as a buildmaster's base directory.
You will usually want to use the -r option to create a relocatable buildbot.tac. This allows you to move the master directory without editing this file.
buildbot create-master -r BASEDIR
This starts a buildmaster which was already created in the given base directory. The daemon is launched in the background, with events logged to a file named twistd.log.
buildbot start BASEDIR
This terminates the buildmaster running in the given directory.
buildbot stop BASEDIR
This sends a SIGHUP to the buildmaster running in the given directory, which causes it to re-read its master.cfg file.
buildbot sighup BASEDIR
These tools are provided for use by the developers who are working on the code that the buildbot is monitoring.
buildbot statuslog --master MASTERHOST:PORT
This command starts a simple text-based status client, one which just prints out a new line each time an event occurs on the buildmaster.
The --master option provides the location of the buildbot.status.client.PBListener status port, used to deliver build information to realtime status clients. The option is always in the form of a string, with hostname and port number separated by a colon (HOSTNAME:PORTNUM). Note that this port is not the same as the slaveport (although a future version may allow the same port number to be used for both purposes). If you get an error message to the effect of “Failure: twisted.cred.error.UnauthorizedLogin:”, this may indicate that you are connecting to the slaveport rather than a PBListener port.
The --master option can also be provided by the masterstatus name in .buildbot/options (see .buildbot config directory).
If you have set up a PBListener (see PBListener), you will be able to monitor your Buildbot using a simple Gtk+ application invoked with the buildbot statusgui command:
buildbot statusgui --master MASTERHOST:PORT
This command starts a simple Gtk+-based status client, which contains a few boxes for each Builder that change color as events occur. It uses the same --master argument and masterstatus option as the buildbot statuslog command (see statuslog).
This lets a developer ask the question “What would happen if I committed this patch right now?”. It runs the unit test suite (across multiple build platforms) on the developer's current code, allowing them to make sure they will not break the tree when they finally commit their changes.
The buildbot try command is meant to be run from within a developer's local tree, and starts by figuring out the base revision of that tree (what revision was current the last time the tree was updated), and a patch that can be applied to that revision of the tree to make it match the developer's copy. This (revision, patch) pair is then sent to the buildmaster, which runs a build with that SourceStamp. If you want, the tool will emit status messages as the builds run, and will not terminate until the first failure has been detected (or the last success).
There is an alternate form which accepts a pre-made patch file (typically the output of a command like svn diff). This --diff form does not require a local tree to run from. See try, concerning the --diff command option.

For this command to work, several pieces must be in place: the Try Schedulers (see Try Schedulers), as well as some client-side configuration.
The try command needs to be told how to connect to the try scheduler, and must know which of the authentication approaches described above is in use by the buildmaster. You specify the approach by using --connect=ssh or --connect=pb (or try_connect = 'ssh' or try_connect = 'pb' in .buildbot/options).
For the PB approach, the command must be given a --master argument (in the form HOST:PORT) that points to the TCP port that you picked in the Try_Userpass scheduler. It also takes a --username and --passwd pair of arguments that match one of the entries in the buildmaster's userpass list. These arguments can also be provided as try_master, try_username, and try_password entries in the .buildbot/options file.
For the SSH approach, the command must be given --tryhost, --username, and optionally --password (TODO: really?) to get to the buildmaster host. It must also be given --trydir, which points to the inlet directory configured above. The trydir can be relative to the user's home directory, but most of the time you will use an explicit path like ~buildbot/project/trydir. These arguments can be provided in .buildbot/options as try_host, try_username, try_password, and try_dir.
In addition, the SSH approach needs to connect to a PBListener status
port, so it can retrieve and report the results of the build (the PB
approach uses the existing connection to retrieve status information,
so this step is not necessary). This requires a --master
argument, or a masterstatus
entry in .buildbot/options,
in the form of a HOSTNAME:PORT string.
A trial build is performed on multiple Builders at the same time, and
the developer gets to choose which Builders are used (limited to a set
selected by the buildmaster admin with the TryScheduler's
builderNames=
argument). The set you choose will depend upon
what your goals are: if you are concerned about cross-platform
compatibility, you should use multiple Builders, one from each
platform of interest. You might use just one builder if that platform
has libraries or other facilities that allow better test coverage than
what you can accomplish on your own machine, or faster test runs.
The set of Builders to use can be specified with multiple
--builder arguments on the command line. It can also be
specified with a single try_builders
option in
.buildbot/options that uses a list of strings to specify all
the Builder names:
try_builders = ["full-OSX", "full-win32", "full-linux"]
If you are using the PB approach, you can get the names of the builders
that are configured for the try scheduler using the get-builder-names
argument:
buildbot try --get-builder-names --connect=pb --master=... --username=... --passwd=...
The try command also needs to know how to take the developer's current tree and extract the (revision, patch) source-stamp pair. Each VC system uses a different process, so you start by telling the try command which VC system you are using, with an argument like --vc=cvs or --vc=git. This can also be provided as try_vc in .buildbot/options.
The following names are recognized: cvs, svn, bzr, hg, darcs, git, p4.
Some VC systems (notably CVS and SVN) track each directory more-or-less independently, which means the try command needs to move up to the top of the project tree before it will be able to construct a proper full-tree patch. To accomplish this, the try command will crawl up through the parent directories until it finds a marker file. The default name for this marker file is .buildbot-top, so when you are using CVS or SVN you should touch .buildbot-top from the top of your tree before running buildbot try. Alternatively, you can use a filename like ChangeLog or README, since many projects put one of these files in their top-most directory (and nowhere else). To set this filename, use --try-topfile=ChangeLog, or set it in the options file with try_topfile = 'ChangeLog'.
You can also manually set the top of the tree with --try-topdir=~/trees/mytree, or try_topdir = '~/trees/mytree'. If you use try_topdir in a .buildbot/options file, you will need a separate options file for each tree you use, so it may be more convenient to use the try_topfile approach instead.
Other VC systems which work on full projects instead of individual directories (darcs, mercurial, git) do not require try to know the top directory, so the --try-topfile and --try-topdir arguments will be ignored.
If the try command cannot find the top directory, it will abort with an error message.
Some VC systems record the branch information in a way that “try” can locate it. For the others, if you are using something other than the default branch, you will have to tell the buildbot which branch your tree is using. You can do this with either the --branch argument, or a try_branch entry in the .buildbot/options file.
Each VC system has a separate approach for determining the tree's base revision and computing a patch.
CVS
try computes a -D time specification from the current time, uses it as the base revision, and computes the diff between the upstream tree as of that point in time versus the current contents. This works, more or less, but requires that the local clock be in reasonably good sync with the repository.
SVN
try does an svn status -u to find the latest repository revision number (emitted on the last line in the “Status against revision: NN” message). It then performs an svn diff -rNN to find out how your tree differs from the repository version, and sends the resulting patch to the buildmaster. If your tree is not up to date, this will result in the “try” tree being created with the latest revision, then backwards patches applied to bring it “back” to the version you actually checked out (plus your actual code changes), but this will still result in the correct tree being used for the build.
bzr
try does a bzr revision-info to find the base revision, then a bzr diff -r$base.. to obtain the patch.
Mercurial
hg identify --debug emits the full revision id (as opposed to the common 12-char truncated form), which is a SHA1 hash of the current revision's contents. This is used as the base revision. hg diff then provides the patch relative to that revision. For try to work, your working directory must only have patches that are available from the same remotely-available repository that the build process' source.Mercurial step will use.
Perforce
try does a p4 changes -m1 ... to determine the latest changelist, and implicitly assumes that the local tree is synched to this revision. This is followed by a p4 diff -du to obtain the patch. A p4 patch differs slightly from a normal diff: it contains full depot paths, which must be converted to paths relative to the branch top. To allow this conversion, the following restriction is imposed: the p4base (see P4Source) is assumed to be //depot.
Darcs
try does a darcs changes --context to find the list of all patches back to and including the last tag that was made. This text file (plus the location of a repository that contains all these patches) is sufficient to re-create the tree. Therefore the contents of this “context” file are the revision stamp for a Darcs-controlled source tree. It then does a darcs diff -u to compute the patch relative to that revision.
Git
git branch -v lists all the branches available in the local repository, along with the revision ID each points to and a short summary of the last commit. The line containing the currently checked out branch begins with '* ' (star and space) while all the others start with '  ' (two spaces). try scans for this line and extracts the branch name and revision from it. Then it generates a diff against the base revision.
If you provide the --wait option (or try_wait = True in .buildbot/options), the buildbot try command will wait until your changes have either been proven good or bad before exiting. Unless you use the --quiet option (or try_quiet=True), it will emit a progress message every 60 seconds until the builds have completed.
Sometimes you might have a patch from someone else that you want to submit to the buildbot. For example, a user may have created a patch to fix some specific bug and sent it to you by email. You've inspected the patch and suspect that it might do the job (and have at least confirmed that it doesn't do anything evil). Now you want to test it out.
One approach would be to check out a new local tree, apply the patch, run your local tests, then use “buildbot try” to run the tests on other platforms. An alternate approach is to use the buildbot try --diff form to have the buildbot test the patch without using a local tree.
This form takes a --diff argument which points to a file that contains the patch you want to apply. By default this patch will be applied to the TRUNK revision, but if you give the optional --baserev argument, a tree of the given revision will be used as a starting point instead of TRUNK.
You can also use buildbot try --diff=- to read the patch from stdin.
Each patch has a “patchlevel” associated with it. This indicates the number of slashes (and preceding pathnames) that should be stripped before applying the diff. This exactly corresponds to the -p or --strip argument to the patch utility. By default buildbot try --diff uses a patchlevel of 0, but you can override this with the -p argument.
When you use --diff, you do not need to use any of the other options that relate to a local tree, specifically --vc, --try-topfile, or --try-topdir. These options will be ignored. Of course you must still specify how to get to the buildmaster (with --connect, --tryhost, etc).
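For concreteness, a hypothetical invocation using the PB approach and the --diff form might look like this (the host, credentials, patch file, and builder name are all illustrative):

buildbot try --connect=pb --master=buildmaster.example.org:8031 \
    --username=alice --passwd=sekrit \
    --diff=fix-bug.patch -p 1 \
    --builder=full-linux --wait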
These tools are generally used by buildmaster administrators.
This command is used to tell the buildmaster about source changes. It is intended to be used from within a commit script, installed on the VC server. It requires that you have a PBChangeSource (see PBChangeSource) running in the buildmaster (by being set in c['change_source']).
buildbot sendchange --master MASTERHOST:PORT --username USER FILENAMES..
The master and username arguments can also be given in the options file (see .buildbot config directory). There are other (optional) arguments which can influence the Change that gets submitted:
--branch
    (or options-file name branch) This provides the (string) branch specifier. If omitted, it defaults to None, indicating the “default branch”. All files included in this Change must be on the same branch.
--category
    (or options-file name category) This provides the (string) category specifier. If omitted, it defaults to None, indicating “no category”. The category property can be used by Schedulers to filter what changes they listen to.
--project
    (or options-file name project) This provides the (string) project to which this change applies, and defaults to ''. The project can be used by schedulers to decide which builders should respond to a particular change.
--repository
    (or options-file name repository) This provides the repository from which this change came, and defaults to ''.
--revision
--revision_file
--property
--comments
--logfile
    If you use - as the filename, the tool will read the change comments from stdin.
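For example, a post-commit hook might invoke sendchange like this (the hostname, port, credentials, and file names are illustrative):

buildbot sendchange --master buildmaster.example.org:9989 \
    --username alice --branch trunk --revision 1234 \
    --comments "fix off-by-one in frobnicator" \
    lib/frob.c lib/frob.h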
buildbot debugclient --master MASTERHOST:PORT --passwd DEBUGPW
This launches a small Gtk+/Glade-based debug tool, connecting to the buildmaster's “debug port”. This debug port shares the same port number as the slaveport (see Setting the PB Port for Slaves), but the debugPort is only enabled if you set a debug password in the buildmaster's config file (see Debug Options). The --passwd option must match the c['debugPassword'] value.

--master can also be provided in .buildbot/options by the master key. --passwd can be provided by the debugPassword key. See .buildbot config directory.
The Connect button must be pressed before any of the other buttons will be active. This establishes the connection to the buildmaster. The other sections of the tool are as follows:

Reload .cfg
Rebuild .py
poke IRC
    This locates the words.IRC status target and causes it to emit a message on all the channels to which it is currently connected. This was used to debug a problem in which the buildmaster lost the connection to the IRC server and did not attempt to reconnect.
Commit
Force Build
Currently
Many of the buildbot tools must be told how to contact the buildmaster that they interact with. This specification can be provided as a command-line argument, but most of the time it will be easier to set them in an “options” file. The buildbot command will look for a special directory named .buildbot, starting from the current directory (where the command was run) and crawling upwards, eventually looking in the user's home directory. It will look for a file named options in this directory, and will evaluate it as a python script, looking for certain names to be set. You can just put simple name = 'value' pairs in this file to set the options.
For a description of the names used in this file, please see the documentation for the individual buildbot sub-commands. The following is a brief sample of what this file's contents could be.
# for status-reading tools
masterstatus = 'buildbot.example.org:12345'
# for 'sendchange' or the debug port
master = 'buildbot.example.org:18990'
debugPassword = 'eiv7Po'
Note carefully that the names in the options file usually do not match the command-line option name.
masterstatus
    Equivalent to --master for statuslog and statusgui; this gives the location of the client.PBListener status port.
master
    Equivalent to --master for debugclient and sendchange. This option is used for two purposes: it is the location of the debugPort for debugclient and the location of the pb.PBChangeSource for sendchange. Generally these are the same port.
debugPassword
    Equivalent to --passwd for debugclient. This must match the value of c['debugPassword'], used to protect the debug port, for the debugclient command.
username
    Equivalent to --username for the sendchange command.
branch
    Equivalent to --branch for the sendchange command.
category
    Equivalent to --category for the sendchange command.
try_connect
    Equivalent to --connect; this specifies how the try command should deliver its request to the buildmaster. The currently accepted values are “ssh” and “pb”.
try_builders
    Equivalent to --builders; specifies which builders should be used for the try build.
try_vc
    Equivalent to --vc for try; this specifies the version control system being used.
try_branch
    Equivalent to --branch; this indicates that the current tree is on a non-trunk branch.
try_topdir
try_topfile
    Use try_topdir, equivalent to --try-topdir, to explicitly indicate the top of your working tree, or try_topfile, equivalent to --try-topfile, to name a file that will only be found in that top-most directory.
try_host
try_username
try_dir
    When try_connect is “ssh”, the command will use try_host for --tryhost, try_username for --username, and try_dir for --trydir. Apologies for the confusing presence and absence of 'try'.
try_username
try_password
try_master
    When try_connect is “pb”, the command will pay attention to try_username for --username, try_password for --passwd, and try_master for --master.
try_wait
masterstatus
    try_wait and masterstatus (equivalent to --wait and --master, respectively) are used to ask the try command to wait for the requested build to complete.
The buildslave command-line tool is used for buildslave management only and does not provide any additional functionality. One can create, start, stop and restart the buildslave.
This creates a new directory and populates it with files that let it be used as a buildslave's base directory. You must provide several arguments, which are used to create the initial buildbot.tac file.
The -r option is advisable here, just like for create-master.
buildslave create-slave -r BASEDIR MASTERHOST:PORT SLAVENAME PASSWORD
The create-slave options are described in Buildslave Options.
This starts a buildslave which was already created in the given base directory. The daemon is launched in the background, with events logged to a file named twistd.log.
buildslave start BASEDIR
This terminates the daemon buildslave running in the given directory.
buildslave stop BASEDIR
The Buildbot home page is http://buildbot.net/.
For configuration questions and general discussion, please use the
buildbot-devel
mailing list. The subscription instructions and
archives are available at
http://lists.sourceforge.net/lists/listinfo/buildbot-devel
The #buildbot channel on Freenode's IRC servers hosts development discussion, and often folks are available to answer questions there, as well.
This chapter is the official repository for the collected wisdom of the Buildbot hackers.
It contains some sparse documentation of the inner workings of Buildbot, but of course, the final reference for that is the source itself.
More importantly, this chapter represents the official repository of all agreed-on patterns for use in Buildbot. In this case, the source is a terrible reference, because much of it is old and crusty. But we are trying to do things the new, better way, and those new, better ways are described here.
Buildbot uses Twisted's service hierarchy heavily, with the buildmaster at the top of a tree of services.
Objects from the 'status' configuration key are attached directly to the buildmaster. These classes should inherit from StatusReceiver or StatusReceiverMultiService and include an 'implements(IStatusReceiver)' stanza.
Several small utilities are available at the top-level buildbot.util package. As always, see the API documentation for more information.

naturalSort
    This function sorts strings “naturally”, with embedded numbers compared numerically, so that e.g. winslave1, winslave2, .., winslave10 sort in the expected order.
formatInterval
    This function returns a human-readable string describing a length of time, given a number of seconds.
ComparableMixin
    This mixin class adds comparability to a subclass. To use it, list the attributes that should be compared in compare_attrs:

    class Widget(FactoryProduct, ComparableMixin):
        compare_attrs = [ 'radius', 'thickness' ]
        # ...

Any attributes not in compare_attrs will not be considered when comparing objects. This is particularly useful in implementing buildbot's reconfig logic, where a simple comparison between the new and existing objects can determine whether the new object should replace the existing object.

safeTranslate
    This function filters out characters that are inappropriate for filenames; it is suitable for adapting strings from the configuration for use as filenames.
LRUCache
    A least-recently-used cache. It has get and add methods, and can also be accessed via dictionary syntax (lru['id']).
This package provides a few useful collection objects.
For compatibility, it provides a clone of the Python collections.defaultdict for use in Python-2.4. In later versions, this is simply a reference to the built-in defaultdict, so buildbot code can simply use buildbot.util.collections.defaultdict everywhere.
It also provides a KeyedSets class that can represent any number of sets, keyed by name (or anything hashable, really). The object is specially tuned to contain many different keys over its lifetime without wasting memory. See the docstring for more information.
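A sketch of typical usage follows; the method names here are assumptions based on the description above, so consult the docstring for the real interface:

from buildbot.util import collections

pending = collections.KeyedSets()
pending.add('builder1', 13)   # add element 13 to the set keyed 'builder1'
pending.add('builder1', 14)
if 'builder1' in pending:
    reqs = pending.pop('builder1')  # fetch and remove the whole set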
This package provides a simple way to say "please do this later":
from buildbot.util.eventual import eventually

def do_what_I_say(what, where):
    # ...
    pass

eventually(do_what_I_say, "clean up", "your bedroom")
The package defines "later" as "next time the reactor has control", so this is
a good way to avoid long loops that block other activity in the reactor.
Callables given to eventually are guaranteed to be called in the same order as the calls to eventually. Any errors from the callable are logged, but will not affect other callables.

If you need a deferred that will fire "later", use fireEventually. This function returns a deferred that will not errback.
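For instance, a minimal sketch (assuming fireEventually passes its argument through to the deferred's callback):

import sys
from buildbot.util.eventual import fireEventually

d = fireEventually("some result")  # fires next time the reactor has control
d.addCallback(lambda res: sys.stdout.write("got %s\n" % res))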
This package is just an import of the best available JSON module. Use it instead of a more complex conditional import of simplejson or json.
Buildbot stores its state in a database, using the Python DBAPI to access it.
A database is specified with the buildbot.db.dbspec.DBSpec
class, which
encapsulates all of the parameters necessary to create a DBAPI connection. The
DBSpec class can create either a single synchronous connection, or a twisted
adbapi connection pool.
Most uses of the database in Buildbot are done through the
buildbot.db.connector.DBConnector
class, which wraps the DBAPI to
provide purpose-specific functions.
The database schema is managed by a special class, described in the next section.
The SQL for the database schema is available in buildbot/db/schema/tables.sql. However, note that this file is not used for new installations or upgrades of the Buildbot database.
Instead, the buildbot.db.schema.DBSchemaManager handles this task. The operation of this class centers around a linear sequence of database versions. Versions start at 0, which is the old pickle-file format. The manager has methods to query the version of the database, and the current version from the source code. It also has an upgrade method which will upgrade the database to the latest version. This operation is currently irreversible.
There is no operation to "install" the latest schema. Instead, a fresh install of buildbot begins with an (empty) version-0 database, and upgrades to the current version. This trades a bit of efficiency at install time for assurances that the upgrade code is well-tested.
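A sketch of how these pieces might fit together; the constructor and method names here follow the descriptions above but are assumptions, so check the API documentation before using them:

# Hypothetical sketch: open a spec, then upgrade the schema if needed.
from buildbot.db import dbspec, schema

spec = dbspec.DBSpec.from_url("sqlite:///state.sqlite", basedir=".")
manager = schema.DBSchemaManager(spec, basedir=".")
if manager.get_db_version() < manager.get_current_version():
    manager.upgrade()  # irreversible, as noted above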
To make a change to the database schema, follow these steps:

Increment CURRENT_VERSION in buildbot/db/schema/manager.py by one. This is your new version number.

Create a new class that inherits from buildbot.db.schema.base.Updater, named Updater. The class must define the method upgrade, which takes no arguments. It should upgrade the database from the previous version to your version, including incrementing the number in the VERSION table, probably with an UPDATE query. A sketch of such a class appears after this list.

Consult the API documentation for the base class for information on the attributes that are available.
Use fill_basedir to add some test data. Add code to assertDatabaseOKEmpty to check that your upgrade works on an empty database. Add code to assertDatabaseOKFull to check that your upgrade works on a database with pre-existing data. Do this even if your changes do not move any data from the basedir.
Run the tests to find the bugs you introduced in step 2.
Increment the version number in the test_get_current_version test in the same file. Only do this after you've finished the previous step - a failure of this test is a good reminder that testing isn't done yet.
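As promised above, here is a sketch of a version upgrader. The table, column, and version number are hypothetical, and the conn attribute is an assumption based on the base class description:

# A hypothetical Updater for an imaginary version 7 of the schema.
from buildbot.db.schema import base

class Updater(base.Updater):
    def upgrade(self):
        cursor = self.conn.cursor()  # assumes the base class provides conn
        # hypothetical schema change for this version
        cursor.execute("ALTER TABLE changes ADD COLUMN repository TEXT")
        # bump the stored schema version, as described above
        cursor.execute("UPDATE version SET version = 7")
        self.conn.commit()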
The master currently stores each logfile in a single file, which may have a standard compression applied.
The format is a special case of the netstrings protocol - see http://cr.yp.to/proto/netstrings.txt. The text in each netstring consists of a one-digit channel identifier followed by the data from that channel.
The formatting is implemented in the LogFile class in buildbot/status/builder.py, and in particular by the merge method.
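To make the format concrete, here is a minimal sketch of a reader for an uncompressed logfile in this format (the file name is illustrative):

def read_log_entries(f):
    """Yield (channel, data) pairs from a channeled-netstring logfile."""
    while True:
        ch = f.read(1)
        if not ch:
            return  # end of file
        length = ''
        while ch != ':':
            length += ch
            ch = f.read(1)
        payload = f.read(int(length))
        assert f.read(1) == ','  # netstring terminator
        yield payload[0], payload[1:]  # channel digit, then the data

for channel, data in read_log_entries(open('12-log-compile-stdio')):
    print channel, repr(data[:40])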
Buildbot uses Jinja2 to render its web interface. The authoritative source for this templating engine is its own documentation, of course, but a few notes are in order for those who are making only minor modifications.
Jinja directives are enclosed in {% .. %}, and sometimes also have dashes. These dashes strip whitespace in the output. For example:
{% for entry in entries %}
  <li>{{ entry }}</li>
{% endfor %}
will produce output with too much whitespace:
  <li>pigs</li>

  <li>cows</li>
But adding the dashes will collapse that whitespace completely:
{% for entry in entries -%}
  <li>{{ entry }}</li>
{%- endfor %}
yields
<li>pigs</li><li>cows</li>
Whenever any part of the web framework wants to perform some action on the buildmaster, it should check the user's authorization first.
Always check authorization twice: once to decide whether to show the option to the user (link, button, form, whatever); and once before actually performing the action.
To check whether to display the option, you'll usually want to pass an authz object to the Jinja template in your HtmlResource subclass:
def content(self, req, cxt):
    # ...
    cxt['authz'] = self.getAuthz(req)
    template = ...
    return template.render(**cxt)
and then determine whether to advertise the action in the template:
{% if authz.advertiseAction('myNewTrick') %}
  <form action="{{ myNewTrick_url }}"> ... </form>
{% endif %}
Actions can optionally require authentication, so use needAuthForm to determine whether to require a 'username' and 'passwd' field in the generated form. These fields are usually generated by the auth() form:
{% if authz.needAuthForm('myNewTrick') %}
  {{ auth() }}
{% endif %}
Once the POST request comes in, it's time to check authorization again. This usually looks something like
if not self.getAuthz(req).actionAllowed('myNewTrick', req, someExtraArg):
    return Redirect(path_to_authfail(req))
The someExtraArg is optional (it's handled with *args, so you can have several if you want), and is given to the user's authorization function. For example, a build-related action should pass the build status, so that the user's authorization function could ensure that devs can only operate on their own builds.
The available actions are listed in WebStatus Configuration Parameters.
It's often necessary to pass passwords to commands on the slave, but it's no fun to see those passwords appear for everyone else in the build log. The Obfuscated class can help here. Instantiate it with a real string and a fake string that should appear in logfiles. You can then use the Obfuscated.get_real and Obfuscated.get_fake static methods to convert a list of command words to the real or fake equivalent.

The RunProcess implementation in the buildslave will apply these methods automatically, so just feed it a list of strings and Obfuscated objects.
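A short sketch of the idea (the command line here is hypothetical):

from buildslave.util import Obfuscated

# the real string is given first, the fake placeholder second
command = ['mysql', '-u', 'buildbot',
           Obfuscated('-psekrit', '-pXXXXXXXX')]

Obfuscated.get_real(command)  # ['mysql', '-u', 'buildbot', '-psekrit']
Obfuscated.get_fake(command)  # ['mysql', '-u', 'buildbot', '-pXXXXXXXX']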
This section is a (very incomplete) description of the master-slave interface. The interface is based on Twisted's Perspective Broker.
The slave connects to the master, using the parameters supplied to buildslave create-slave. It uses a reconnecting process with an exponential backoff, and will automatically reconnect on disconnection.

Once connected, the slave authenticates with the Twisted Cred (newcred) mechanism, using the username and password supplied to buildslave create-slave. The "mind" is the slave bot instance (class buildslave.bot.Bot).
On the master side, the realm is implemented by buildbot.master.Dispatcher, which examines the username of incoming avatar requests. There are special cases for change, debug, and statusClient, which are not discussed here. For all other usernames, the botmaster is consulted, and if a slave with that name is configured, its buildbot.buildslave.BuildSlave instance is returned as the perspective.

At this point, the master-side BuildSlave object has a pointer to the remote, slave-side Bot object in self.slave, and the slave-side Bot object has a reference to the master-side BuildSlave object in self.perspective.
The slave-side object has the following remote methods:
remote_getCommands
    Returns a list of (name, version) pairs for all commands the slave recognizes.
remote_setBuilderList
    This method returns a dictionary of SlaveBuilder objects - see below.
remote_print
remote_getSlaveInfo
remote_getVersion
The master-side object has the following method:
perspective_keepalive
Each build slave has a set of builders which can run on it. These are represented by distinct classes on the master and slave, just like the BuildSlave and Bot objects described above.
On the slave side, builders are represented as instances of the buildslave.bot.SlaveBuilder class. On the master side, they are represented by the buildbot.process.builder.SlaveBuilder class. The following will refer to these as the slave-side and master-side SlaveBuilder classes. Each object keeps a reference to its opposite in self.remote.
remote_setMaster
remote_print
remote_startBuild
remote_startCommand
remote_interruptCommand
remote_shutdown
The master side does not have any remotely-callable methods.
After the initial connection and trading of a mind (Bot) for an avatar (BuildSlave), the master calls the Bot's setBuilderList method to set up the proper slave builders on the slave side. This method returns a reference to each of the new slave-side SlaveBuilder objects. Each of these is handed to the corresponding master-side SlaveBuilder object. This immediately calls the remote setMaster method, then the print method.

To ping a remote SlaveBuilder, the master calls the print method.

When a build starts, the master calls the slave's startBuild method.
Each BuildStep instance will subsequently call the startCommand method, passing a reference to itself as the stepRef parameter. The startCommand method returns immediately, and the end of the command is signalled with a call to a method on the master-side BuildStep object.
remote_update
remote_complete
Updates from the slave are a list of individual update elements. Each update element is, in turn, a list of the form [data, 0] where the 0 is present for historical reasons. The data is a dictionary, with keys describing the contents, e.g., header, stdout, or the name of a logfile. If the key is rc, then the value is the exit status of the command. No further updates should be sent after an rc.
Twisted has some useful, but little-known classes. They are listed here with brief descriptions, but you should consult the API documentation or source code for the full details.
twisted.internet.task.LoopingCall
    Calls an asynchronous function repeatedly at set intervals.
twisted.application.internet.TimerService
    Similar to t.i.t.LoopingCall, but implemented as a service that will automatically start and stop the function calls when the service is started and stopped.
In general, we are trying to ensure that new tests are good. So what makes a good test?
Tests that depend on wall time will fail. As a bonus, they run very slowly. Do not use reactor.callLater to wait "long enough" for something to happen.

For testing things that themselves depend on time, consider using twisted.internet.task.Clock. This may mean passing a clock instance to the code under test, and propagating that instance as necessary to ensure that all of the code using callLater uses it. Refactoring code for testability is difficult, but worthwhile.
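For example, a sketch of deterministically testing a periodic poller with task.Clock (the Poller class is hypothetical):

from twisted.internet import task

def test_poller_polls(self):
    clock = task.Clock()
    poller = Poller(reactor=clock)  # hypothetical code under test
    poller.start(interval=10)
    clock.advance(10)  # "wait" ten seconds, instantly and reliably
    self.assertTrue(poller.polled)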
For testing things that do not depend on time, but for which you cannot detect the "end" of an operation: add a way to detect the end of the operation!
Make your tests readable. This is no place to skimp on comments! Others will attempt to learn about the expected behavior of your class by reading the tests. As a side note, if you use a Deferred chain in your test, write the callbacks as nested functions, rather than using object methods with funny names:
def testSomething(self):
    d = doThisFirst()
    def andThisNext(res):
        pass # ...
    d.addCallback(andThisNext)
    return d
This isolates the entire test into one indented block. It is OK to add methods for common functionality, but give them real names and explain in detail what they do.
Your test module should be named after the package or class it tests, replacing . with _ and omitting the leading buildbot_. For example, test_status_web_authz_Authz.py tests the Authz class in buildbot/status/web/authz.py. Modules with only one class, or a few trivial classes, can be tested in a single test module. For more complex situations, prefer to use multiple test modules.
Test method names should follow the pattern test_METHOD_CONDITION where METHOD is the method being tested, and CONDITION is the condition under which it's tested. Since we can't always test a single method, this is not a hard-and-fast rule.
Each test should have a single assertion. This may require a little bit of work to get several related pieces of information into a single Python object for comparison. The problem with multiple assertions is that, if the first assertion fails, the remainder are not tested. The test results then do not tell the entire story.
If you need to make two unrelated assertions, you should be running two tests.
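For instance, rather than asserting three related facts separately, gather them into one comparison (a sketch; the names are illustrative):

def test_parse_entry(self):
    entry = parse_entry("compile stdio 1024")  # hypothetical code under test
    self.assertEqual((entry.step, entry.log, entry.length),
                     ('compile', 'stdio', 1024))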
Mocks assert that they are called correctly. Stubs provide a predictable base on which to run the code under test. See Mock Object and Method Stub.
Mock objects can be constructed easily using the aptly-named mock module, which is a requirement for Buildbot's tests.
One of the difficulties with Buildbot is that interfaces are unstable and poorly documented, which makes it difficult to design stubs. A common repository for stubs, however, will allow any interface changes to be reflected in only one place in the test code.
The shorter each test is, the better. Test as little code as possible in each test.
It is fine, and in fact encouraged, to write the code under test in such a way as to facilitate this. As an illustrative example, if you are testing a new Step subclass, but your tests require instantiating a BuildMaster, you're probably doing something wrong! (Note that this rule is almost universally violated in the existing buildbot tests).
This also applies to test modules. Several short, easily-digested test modules are preferred over a 1000-line monster.
Each test should be maximally independent of other tests. Do not leave files laying around after your test has finished, and do not assume that some other test has run beforehand. It's fine to use caching techniques to avoid repeated, lengthy setup times.
Tests should be as robust as possible, which at a basic level means using the available frameworks correctly. All deferreds should have callbacks and be chained properly. Error conditions should be checked properly. Race conditions should not exist (see "Independent of Time", above).
Note that tests will pass most of the time, but the moment when they are most useful is when they fail.
When the test fails, it should produce output that is helpful to the person chasing it down. This is particularly important when the tests are run remotely, in which case the person chasing down the bug does not have access to the system on which the test fails. A test which fails sporadically with no more information than "AssertionFailed?" is a prime candidate for deletion if the error isn't obvious. Making the error obvious also includes adding comments describing the ways a test might fail.
Do not define setUp and tearDown directly in a mixin. This is the path to madness. Instead, define a myMixinNameSetUp and myMixinNameTearDown, and call them explicitly from the subclass's setUp and tearDown. This makes it perfectly clear what is being set up and torn down from a simple analysis of the test case.
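A brief sketch of the convention (the mixin and directory names are illustrative):

import os, shutil
from twisted.trial import unittest

class DirsMixin(object):
    def setUpDirs(self):
        os.mkdir(self.basedir)
    def tearDownDirs(self):
        shutil.rmtree(self.basedir)

class MyTest(DirsMixin, unittest.TestCase):
    def setUp(self):
        self.basedir = 'test-basedir'
        self.setUpDirs()      # explicit call to the mixin's setup
    def tearDown(self):
        self.tearDownDirs()   # explicit call to the mixin's teardown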
Python does not allow assignment to anything but the innermost local scope or the global scope with the global keyword. This presents a problem when creating nested functions:
def test_localVariable(self):
    cb_called = False
    def cb():
        cb_called = True
    cb()
    self.assertTrue(cb_called) # will fail!
The cb_called = True assigns to a different variable than cb_called = False. In production code, it's usually best to work around such problems, but in tests this is often the clearest way to express the behavior under test.

The solution is to change something in a common mutable object. While a simple list can serve as such a mutable object, this leads to code that is hard to read. Instead, use State:
from buildbot.test.state import State

def test_localVariable(self):
    state = State(cb_called=False)
    def cb():
        state.cb_called = True
    cb()
    self.assertTrue(state.cb_called) # passes
This is almost as readable as the first example, but it actually works.
The module buildbot.test.util.monkeypatches contains a few monkey-patches to Twisted that detect errors a bit better. These patches shouldn't affect correct behavior, so it's worthwhile including something like this in the header of every test file:

from buildbot.test.util.monkeypatches import monkeypatch
monkeypatch()
buildbot.buildslave.BuildSlave: Master-Slave API
buildbot.changes.manager.ChangeManager: Buildmaster Service Hierarchy
buildbot.db.connector.DBConnector: The Database
buildbot.db.dbspec.DBSpec: The Database
buildbot.db.schema.DBSchemaManager: Database Schema
buildbot.master.BotMaster: Buildmaster Service Hierarchy
buildbot.master.BuildMaster: Buildmaster Service Hierarchy
buildbot.master.Dispatcher: Master-Slave API
buildbot.process.builder.Builder: Master-Slave API
buildbot.schedulers.manager.SchedulerManager: Buildmaster Service Hierarchy
buildbot.status.builder.LogFile: Log File Format
buildbot.test.util.monkeypatches: Better Debugging through Monkeypatching
buildslave.bot.Bot: Master-Slave API
buildslave.bot.SlaveBuilder: Master-Slave API
buildslave.util.Obfuscated: Obfuscating Passwords

[1] this @reboot syntax is understood by Vixie cron, which is the flavor usually provided with linux systems. Other unices may have a cron that doesn't understand @reboot
[2] except Darcs, but since the Buildbot never modifies its local source tree we can ignore the fact that Darcs uses a less centralized model
[3] many VC systems provide more complexity than this: in particular the local views that P4 and ClearCase can assemble out of various source directories are more complex than we're prepared to take advantage of here
[4] this checkoutDelay defaults to half the tree-stable timer, but it can be overridden with an argument to the Source Step
[5] Build properties are serialized along with the build results, so they must be serializable. For this reason, the value of any build property should be simple inert data: strings, numbers, lists, tuples, and dictionaries. They should not contain class instances.
[6] framboozle.com is still available. Remember, I get 10% :).
[7] Framboozle gets very excited about running unit tests.
[8] See http://en.wikipedia.org/wiki/Read/write_lock_pattern for more information.
[9] Deadlock is the situation where two or more slaves each hold a lock in exclusive mode, and in addition want to claim the lock held by the other slave exclusively as well. Since locks allow at most one exclusive user, both slaves will wait forever.
[10] Starving is the situation that only a few locks are available, and they are immediately grabbed by another build. As a result, it may take a long time before all locks needed by the starved build are free at the same time.
[11] Apparently this is the same way http://buildd.debian.org displays build status
[12] It may even be possible to provide SSL access by using a specification like "ssl:12345:privateKey=mykey.pem:certKey=cert.pem", but this is completely untested