This is the BuildBot manual.
Copyright (C) 2005 Brian Warner
Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved.
--- The Detailed Node Listing ---
Introduction
Installation
Troubleshooting
Concepts
Version Control Systems
Users
Configuration
Getting Source Code Changes
Change Sources
Build Process
Build Steps
Simple ShellCommand Subclasses
Source Checkout
Build Factories
BuildFactory
Process-Specific build factories
Status Delivery
The BuildBot is a system to automate the compile/test cycle required by most software projects to validate code changes. By automatically rebuilding and testing the tree each time something has changed, build problems are pinpointed quickly, before other developers are inconvenienced by the failure. The guilty developer can be identified and harassed without human intervention. By running the builds on a variety of platforms, developers who do not have the facilities to test their changes everywhere before checkin will at least know shortly afterwards whether they have broken the build or not. Warning counts, lint checks, image size, compile time, and other build parameters can be tracked over time, are more visible, and are therefore easier to improve.
The overall goal is to reduce tree breakage and provide a platform to run tests or code-quality checks that are too annoying or pedantic for any human to waste their time with. Developers get immediate (and potentially public) feedback about their changes, encouraging them to be more careful about testing before checkin.
Features:
The Buildbot was inspired by a similar project built for a development
team writing a cross-platform embedded system. The various components
of the project were supposed to compile and run on several flavors of
unix (linux, solaris, BSD), but individual developers had their own
preferences and tended to stick to a single platform. From time to
time, incompatibilities would sneak in (some unix platforms want to
use string.h, some prefer strings.h), and then the tree
would compile for some developers but not others. The buildbot was
written to automate the human process of walking into the office,
updating a tree, compiling (and discovering the breakage), finding the
developer at fault, and complaining to them about the problem they had
introduced. With multiple platforms it was difficult for developers to
do the right thing (compile their potential change on all platforms);
the buildbot offered a way to help.
Another problem was when programmers would change the behavior of a library without warning its users, or change internal aspects that other code was (unfortunately) depending upon. Adding unit tests to the codebase helps here: if an application's unit tests pass despite changes in the libraries it uses, you can have more confidence that the library changes haven't broken anything. Many developers complained that the unit tests were inconvenient or took too long to run: having the buildbot run them reduces the developer's workload to a minimum.
In general, having more visibility into the project is always good, and automation makes it easier to do the right thing. When everyone can see the status of the project, developers are encouraged to keep the tree in good working order.
The current version of the Buildbot is additionally targeted at distributed free-software projects, where resources and platforms are only available when provided by interested volunteers. The buildslaves are designed to require an absolute minimum of configuration, reducing the effort a potential volunteer needs to expend to be able to contribute a new test environment to the project. The goal is that anyone who wishes a given project would run on their favorite platform should be able to offer that project a buildslave, running on that platform, where they can verify that their portability code works, and keeps working.
The Buildbot consists of a single buildmaster and one or more buildslaves, connected in a star topology. The buildmaster
makes all decisions about what and when to build. It sends commands to
be run on the build slaves, which simply execute the commands and
return the results. (certain steps involve more local decision making,
where the overhead of sending a lot of commands back and forth would
be inappropriate, but in general the buildmaster is responsible for
everything).
The buildmaster is usually fed Changes by some sort of version control system (see Change Sources), which may cause builds to be run. As the builds are performed, various status messages are produced, which are then sent to any registered Status Targets (see Status Delivery).
The buildmaster is configured and maintained by the “buildmaster admin”, who is generally the project team member responsible for build process issues. Each buildslave is maintained by a “buildslave admin”, who does not need to be quite as involved. Generally slaves are run by anyone who has an interest in seeing the project work well on their platform.
A day in the life of the buildbot:
At a bare minimum, you'll need the following (for both the buildmaster and a buildslave):
Buildbot requires python-2.2 or later, and is primarily developed against python-2.3. The buildmaster uses generators, a feature which is not available in python-2.1, and both master and slave require a version of Twisted which only works with python-2.2 or later. Certain features (like the inclusion of build logs in status emails) require python-2.2.2 or later.
Both the buildmaster and the buildslaves require Twisted-1.3.0 or later. They have been briefly tested against Twisted-1.2.0, and might even work with Twisted-1.1.0, but 1.3.0 is the version that has received the most testing.
They run against Twisted-2.0.0 as well, albeit with a number of warnings about the use of deprecated features. If you use Twisted-2.0, you'll need at least "Twisted" (the core package), and you'll also want TwistedMail, TwistedWeb, and TwistedWords (for sending email, serving a web status page, and delivering build status via IRC, respectively).
Certain other packages may be useful on the system running the buildmaster:
If your buildmaster uses FreshCVSSource to receive change notification from a cvstoys daemon, it will require CVSToys be installed (tested with CVSToys-1.0.10). If it doesn't use that source (i.e. if you only use a mail-parsing change source, or the SVN notification script), you will not need CVSToys.
The Buildbot is installed using the standard python distutils module. After unpacking the tarball, the process is:
python setup.py build
python setup.py install
where the install step may need to be done as root. This will put the bulk of the code in somewhere like /usr/lib/python2.3/site-packages/buildbot. It will also install the buildbot command-line tool in /usr/bin/buildbot.
To test this, shift to a different directory (like /tmp), and run:
pydoc buildbot
If it shows you a brief description of the package and its contents, the install went ok. If it says no Python documentation found for 'buildbot', then something went wrong.
Windows users will find these files in other places. You will need to make sure that python can find the libraries, and will probably find it convenient to have buildbot on your PATH.
If you wish, you can run the buildbot unit test suite like this:
PYTHONPATH=. trial -v buildbot.test
This should run up to 109 tests, depending upon what VC tools you have installed. On my desktop machine it takes about two minutes to complete. Nothing should fail, though a few tests might be skipped. If any of the tests fail, you should stop and investigate the cause before continuing the installation process, as it will probably be easier to track down the bug early.
If you want to test the VC checkout process, you'll need to install a tarball of repositories, available from http://buildbot.sf.net/ . Otherwise there are about 8 tests which will be skipped (all with names like testSVN and testArchHTTP). If you unpack this tarball in ~/tmp, it will create ~/tmp/buildbot-test-vc-1, and you can enable the extra tests with:
PYTHONPATH=. BUILDBOT_TEST_VC=~/tmp trial -v buildbot.test
If you cannot or do not wish to install the buildbot into a site-wide location like /usr or /usr/local, you can also install it into the account's home directory. Do the install command like this:
python setup.py install --home=~
That will populate ~/lib/python and create ~/bin/buildbot. Make sure this lib directory is on your PYTHONPATH.
As you learned earlier (see System Architecture), the buildmaster runs on a central host (usually one that is publicly visible, so everybody can check on the status of the project), and controls all aspects of the buildbot system. Let us call this host buildbot.example.org.
You may wish to create a separate user account for the buildmaster, perhaps named buildmaster. This can help keep your personal
configuration distinct from that of the buildmaster and is useful if
you have to use a mail-based notification system (see Change Sources). However, the Buildbot will work just fine with your regular
user account.
You need to choose a directory for the buildmaster, called the basedir. This directory will be owned by the buildmaster, which
will use configuration files therein, and create status files as it
runs. ~/Buildbot is a likely value. If you run multiple
buildmasters in the same account, or if you run both masters and
slaves, you may want a more distinctive name like
~/Buildbot/master/gnomovision or
~/Buildmasters/fooproject.
Once you've picked a directory, use the buildbot master command to create the directory and populate it with startup files:
buildbot master basedir
You will need to create a configuration file (see Configuration) before starting the buildmaster. Most of the rest of this manual is dedicated to explaining how to do this. A sample configuration file is placed in the working directory, named master.cfg.sample, which can be copied to master.cfg and edited to suit your purposes.
Internal details:
This command creates a file named buildbot.tac that contains
all the state necessary to create the buildmaster. Twisted has a tool
called twistd
which can use this .tac file to create and launch
a buildmaster instance. twistd takes care of logging and daemonization
(running the program in the background). /usr/bin/buildbot is a
front end which runs twistd for you.
In addition to buildbot.tac, a small Makefile.sample is installed. This can be used as the basis for customized daemon startup; see Launching the daemons.
Typically, you will be adding a buildslave to an existing buildmaster, to provide additional architecture coverage. The buildbot administrator will give you several pieces of information necessary to connect to the buildmaster. You should also be somewhat familiar with the project being tested, so you can troubleshoot build problems locally.
The buildbot exists to make sure that the project's stated “how to build it” process actually works. To this end, the buildslave should run in an environment just like that of your regular developers. Typically the project build process is documented somewhere (README, INSTALL, etc), in a document that should mention all library dependencies and contain a basic set of build instructions. This document will be useful as you configure the host and account in which the buildslave runs.
Here's a good checklist for setting up a buildslave:
It is recommended (although not mandatory) to set up a separate user
account for the buildslave. This account is frequently named buildbot or buildslave. This serves to isolate your
personal working environment from that of the slave's, and helps to
minimize the security threat posed by letting possibly-unknown
contributors run arbitrary code on your system. The account should
have a minimum of fancy init scripts.
Follow the instructions given earlier (see Installing the code).
If you use a separate buildslave account, and you didn't install the
buildbot code to a shared location, then you will need to install it
with --home=~
for each account that needs it.
Make sure the host can actually reach the buildmaster. Usually the buildmaster is running a status webserver on the same machine, so simply point your web browser at it and see if you can get there. Install whatever additional packages or libraries the project's INSTALL document advises. (or not: if your buildslave is supposed to make sure that building without optional libraries still works, then don't install those libraries).
Again, these libraries don't necessarily have to be installed to a site-wide shared location, but they must be available to your build process. Accomplishing this is usually very specific to the build process, so installing them to /usr or /usr/local is usually the best approach.
Follow the instructions in the INSTALL document, in the buildslave's account. Perform a full CVS (or whatever) checkout, configure, make, run tests, etc. Confirm that the build works without manual fussing. If it doesn't work when you do it by hand, it will be unlikely to work when the buildbot attempts to do it in an automated fashion.
This should be somewhere in the buildslave's account, typically named after the project which is being tested. The buildslave will not touch any file outside of this directory. Something like ~/Buildbot or ~/Buildslaves/fooproject is appropriate.
When the buildbot admin configures the buildmaster to accept and use your buildslave, they will provide you with the following pieces of information:
Now run the 'buildbot' command as follows:
buildbot slave BASEDIR MASTERHOST:PORT SLAVENAME PASSWORD
This will create the base directory and a collection of files inside,
including the buildbot.tac file that contains all the information you passed to the buildbot command.
When it first connects, the buildslave will send a few files up to the buildmaster which describe the host that it is running on. These files are presented on the web status display so that developers have more information to reproduce any test failures that are witnessed by the buildbot. There are sample files in the info subdirectory of the buildbot's base directory. You should edit these to correctly describe you and your host.
BASEDIR/info/admin should contain your name and email address. This is the “buildslave admin address”, and will be visible from the build status page (so you may wish to munge it a bit if address-harvesting spambots are a concern).
BASEDIR/info/host should be filled with a brief description of the host: OS, version, memory size, CPU speed, versions of relevant libraries installed, and finally the version of the buildbot code which is running the buildslave.
If you run many buildslaves, you may want to create ~buildslave/info and share it among all the buildslaves with symlinks.
Both the buildmaster and the buildslave run as daemon programs. To
launch them, pass the working directory to the buildbot command:
buildbot start BASEDIR
This command will start the daemon and then return, so normally it will not produce any output. To verify that the programs are indeed running, look for a pair of files named twistd.log and twistd.pid that should be created in the working directory. twistd.pid contains the process ID of the newly-spawned daemon.
When the buildslave connects to the buildmaster, new directories will start appearing in its base directory. The buildmaster tells the slave to create a directory for each Builder which will be using that slave. All build operations are performed within these directories: CVS checkouts, compiles, and tests.
Once you get everything running, you will want to arrange for the
buildbot daemons to be started at boot time. One way is to use cron, by putting them in a @reboot crontab entry:
@reboot buildbot BASEDIR
There is also experimental support for sysvinit-style /etc/init.d/buildbot startup scripts. debian/buildbot.init and debian/buildbot.default may be useful to look at.
It is important to remember that the environment provided to cron jobs and init scripts can be quite different from your normal runtime. There may be fewer environment variables specified, and the PATH may be shorter than usual. It is a good idea to test out this method of launching the buildslave by using a cron job with a time in the near future, with the same command, and then check twistd.log to make sure the slave actually started correctly. Common problems here are for /usr/local or ~/bin to not be on your PATH, or for PYTHONPATH to not be set correctly. Sometimes HOME is messed up too.
To modify the way the daemons are started (perhaps you want to set some environment variables first, or perform some cleanup each time), you can create a file named Makefile.buildbot in the base directory. When the buildbot front-end tool is told to start the daemon, and it sees this file (and /usr/bin/make exists), it will do make -f Makefile.buildbot start instead of its usual action (which involves running twistd). When the buildmaster or buildslave is installed, a Makefile.sample is created which implements the same behavior as the buildbot tool uses, so if you want to customize the process, just copy Makefile.sample to Makefile.buildbot and edit it as necessary.
While a buildbot daemon runs, it emits text to a logfile, named
twistd.log. A command like tail -f twistd.log is useful to watch the command output as it runs.
The buildmaster will announce any errors with its configuration file in the logfile, so it is a good idea to look at the log at startup time to check for any problems. Most buildmaster activities will cause lines to be added to the log.
To stop a buildmaster or buildslave manually, use:
buildbot stop BASEDIR
This simply looks for the twistd.pid file and kills whatever process is identified within.
At system shutdown, all processes are sent a SIGKILL. The
buildmaster and buildslave will respond to this by shutting down
normally.
The buildmaster will respond to a SIGHUP
by re-reading its
config file. The following shortcut is available:
buildbot sighup BASEDIR
It is a good idea to check the buildmaster's status page every once in a while, to see if your buildslave is still online. Eventually the buildbot will probably be enhanced to send you email (via the info/admin email address) when the slave has been offline for more than a few hours.
If you find you can no longer provide a buildslave to the project, please let the project admins know, so they can put out a call for a replacement.
The Buildbot records status and logs output continually, each time a
build is performed. The status tends to be small, but the build logs
can become quite large. Each build and log is recorded in a separate
file, arranged hierarchically under the buildmaster's base directory.
To prevent these files from growing without bound, you should
periodically delete old build logs. A simple cron job to delete
anything older than, say, two weeks should do the job. The only trick
is to leave the buildbot.tac and other support files alone, for
which find's -mindepth
argument helps skip everything in the
top directory. You can use something like the following:
@weekly cd BASEDIR && find . -mindepth 2 -type f -mtime +14 -exec rm {} \;
@weekly cd BASEDIR && find twistd.log* -mtime +14 -exec rm {} \;
Here are a few hints on diagnosing common problems.
Cron jobs are typically run with a minimal shell (/bin/sh, not
/bin/bash), and tilde expansion is not always performed in such
commands. You may want to use explicit paths, because the PATH
is usually quite short and doesn't include anything set by your
shell's startup scripts (.profile, .bashrc, etc). If
you've installed buildbot (or other python libraries) to an unusual
location, you may need to add a PYTHONPATH specification (note that python will do tilde-expansion on PYTHONPATH elements by itself). Sometimes it is safer to fully-specify everything:
@reboot PYTHONPATH=~/lib/python /usr/local/bin/buildbot start /usr/home/buildbot/basedir
Take the time to get the @reboot job set up. Otherwise, things will work fine for a while, but the first power outage or system reboot you have will stop the buildslave with nothing but the cries of sorrowful developers to remind you that it has gone away.
If the buildslave cannot connect to the buildmaster, the reason should be described in the twistd.log logfile. Some common problems are an incorrect master hostname or port number, or a mistyped bot name or password. If the buildslave loses the connection to the master, it is supposed to attempt to reconnect with an exponentially-increasing backoff. Each attempt (and the time of the next attempt) will be logged. If you get impatient, just manually stop and re-start the buildslave.
When the buildmaster is restarted, all slaves will be disconnected,
and will attempt to reconnect as usual. The reconnect time will depend
upon how long the buildmaster is offline (i.e. how far up the
exponential backoff curve the slaves have travelled). Again, buildbot stop BASEDIR; buildbot start BASEDIR will speed up the process.
From the buildmaster's main status web page, you can force a build to
be run on your build slave. Figure out which column is for a builder
that runs on your slave, click on that builder's name, and the page
that comes up will have a “Force Build” button. Fill in the form,
hit the button, and a moment later you should see your slave's
twistd.log filling with commands being run. Using pstree
or top
should also reveal the cvs/make/gcc/etc processes being
run by the buildslave. Note that the same web page should also show
the admin and host information files that you configured
earlier.
This chapter defines some of the basic concepts that the Buildbot uses. You'll need to understand how the Buildbot sees the world to configure it properly.
These source trees come from a Version Control System of some kind.
CVS and Subversion are two popular ones, but the Buildbot supports
others. All VC systems have some notion of an upstream repository which acts as a server, from which clients can
obtain source trees according to various parameters. The VC repository
provides source trees of various projects, for different branches, and
from various points in time. The first thing we have to do is to
specify which source tree we want to get.
For the purposes of the Buildbot, we will try to generalize all VC systems as having repositories that each provide sources for a variety of projects. Each project is defined as a directory tree with source files. The individual files may each have revisions, but we ignore that and treat the project as a whole as having a set of revisions. Each time someone commits a change to the project, a new revision becomes available. These revisions can be described by a tuple with two items: the first is a branch tag, and the second is some kind of timestamp or revision stamp. Complex projects may have multiple branch tags, but there is always a default branch. The timestamp may be an actual timestamp (such as the -D option to CVS), or it may be a monotonically-increasing transaction number (such as the change number used by SVN and P4, or the revision number used by Arch, or a labeled tag used in CVS). The SHA1 revision ID used by Monotone is also a kind of revision stamp, in that it specifies a unique copy of the source tree.
When we aren't intending to make any changes to the sources we check out (at least not any that need to be committed back upstream), there are two basic ways to use a VC system:
Build personnel or CM staff typically use the first approach: the build that results is (ideally) completely specified by the two parameters given to the VC system: repository and revision tag. This gives QA and end-users something concrete to point at when reporting bugs.
Developers are more likely to use the second approach: each morning
the developer does an update to pull in the changes committed by the
team over the last day. These builds are not easy to fully specify: it
depends upon exactly when you did a checkout, and upon what local
changes the developer has in their tree. Developers do not normally
tag each build they produce, because there is usually significant
overhead involved in creating these tags. Recreating the trees used by
one of these builds can be a challenge. Some VC systems may provide
implicit tags (like a revision number), while others may allow the use
of timestamps to mean the state of the tree at time X as opposed to a tree-state that has been explicitly marked.
The Buildbot is designed to help developers, so it usually works in
terms of the latest
sources as opposed to specific tagged
revisions. However, it would really prefer to build from reproducible
source trees, so implicit revisions are used whenever possible.
So for the Buildbot's purposes we treat each VC system as a server which can take a list of specifications as input and produce a source tree as output. Some of these specifications are static: they are attributes of the builder and do not change over time. Others are more variable: each build will have a different value. The repository is changed over time by a sequence of Changes, each of which represents a single developer making changes to some set of files. These Changes are cumulative.
For normal builds, the Buildbot wants to get well-defined source trees
that contain specific Changes, and exclude others that occurred later.
We assume that the Changes arrive at the buildbot (through one of the mechanisms described in Change Sources) in the same order in
which they are committed to the repository. The Buildbot waits for the
tree to become stable
before initiating a build, for two
reasons. The first is that developers frequently commit several
changes in a group, even when the VC system provides ways to make
atomic transactions involving multiple files at the same time. Running
a build in the middle of these sets of changes would use an
inconsistent set of source files, and is likely to fail. The
tree-stable-timer is intended to avoid these useless builds that
include some of the developer's changes but not all. The second reason
is that some VC systems (i.e. CVS) do not provide repository-wide
transaction numbers, such that timestamps are the only way to refer to
a specific repository state. These timestamps may be somewhat
ambiguous, due to processing and notification delays. By waiting until
the tree has been stable for, say, 10 minutes, we can choose a
timestamp from the middle of that period to use for our source
checkout, and then be reasonably sure that any clock-skew errors will
not cause the build to be performed on an inconsistent set of source
files.
The Builders always use the tree-stable-timer, with a timeout that is configured to reflect a reasonable tradeoff between build latency and change frequency. When the VC system provides coherent repository-wide revision markers (such as Subversion's revision numbers, or in fact anything other than CVS's timestamps), the resulting Build is simply performed against a source tree defined by that revision marker. When the VC system does not provide this, a timestamp from the middle of the tree-stable period is used to generate the source tree.
For CVS, the static specifications are repository, module, and branch tag (which defaults to HEAD). In addition to those, each build uses a timestamp (or omits the timestamp to mean the latest). These parameters collectively specify a set of sources from which a build may be performed.
Subversion combines the
repository, module, and branch into a single Subversion URL
parameter. Within that scope, source checkouts can be specified by a
numeric revision number
(a repository-wide
monotonically-increasing marker, such that each transaction that
changes the repository is indexed by a different revision number), or
a revision timestamp.
Arch specifies a repository by URL,
as well as a version
which is kind of like a branch name. Arch
uses the word archive
to represent the repository. Arch lets
you push changes from one archive to another, removing the strict
centralization required by CVS and SVN. It seems to retain the
distinction between repository and working directory that most other
VC systems use. For complex multi-module directory structures, Arch
has a built-in build config
layer with which the checkout
process has two steps. First, an initial bootstrap checkout is
performed to retrieve a set of build-config files. Second, one of
these files is used to figure out which archives/modules should be
used to populate subdirectories of the initial checkout.
Darcs doesn't really have the
notion of a single master repository. Nor does it really have
branches. In Darcs, each working directory is also a repository, and
there are operations to push and pull patches from one of these
repositories
to another. For the Buildbot's purposes, all you
need to do is specify the URL of a repository that you want to build
from. The build slave will then pull the latest patches from that
repository and build them.
Each Change has a who
attribute, which specifies which developer
is responsible for the change. This is a string which comes from a
namespace controlled by the VC repository. Frequently this means it is
a username on the host which runs the repository, but not all VC
systems require this (Arch, for example, would use a fully-qualified Arch ID, which looks like an email address). Each
StatusNotifier will map the who
attribute into something
appropriate for their particular means of communication: an email
address, an IRC handle, etc.
Each Change can have a revision
attribute, which describes how
to get a tree with a specific state: a tree which includes this Change
(and all that came before it) but none that come after it. If this
information is unavailable, the .revision attribute will be None. These revisions are provided by the ChangeSource, and
consumed by the computeSourceRevision
method in the appropriate
step.Source
class.
The format of the revision depends on the VC system:
CVS: revision is an int, seconds since the epoch.
SVN: revision is an int, the transaction number (r%d).
Arch: revision is a string, ending in --patch-%d.
P4: revision is an int, the transaction number.
The Change might also have a branch
attribute. This is
primarily intended to represent the CVS named branch (since CVS does
not embed the branch
in the pathname like many of the other
systems), however it could be used for other purposes as well (e.g.
some VC systems might allow commits to be marked as cosmetic, or docs-only, or something). The Build, in its
isBranchImportant
method, gets to decide whether the branch is
important or not. This allows you to configure Builds which only fire
on changes to a specific branch. For a change to trigger a build, both
the branch
must be important, and at least one of the files
inside the change must be considered important.
What is a Change?
Buildbot has a somewhat limited awareness of users. It assumes the world consists of a set of developers, each of whom can be described by a couple of simple attributes. These developers make changes to the source code, causing builds which may succeed or fail.
Each developer is primarily known through the source control system. Each
Change object that arrives is tagged with a who
field that
typically gives the account name (on the repository machine) of the user
responsible for that change. This string is the primary key by which the
User is known, and is displayed on the HTML status pages and in each Build's
“blamelist”.
To do more with the User than just refer to them, this username needs to be mapped into an address of some sort. The responsibility for this mapping is left up to the status module which needs the address. The core code knows nothing about email addresses or IRC nicknames, just user names.
Each Change has a single User who is responsible for that Change. Most Builds have a set of Changes: the Build represents the first time these Changes have been built and tested by the BuildBot. The build has a “blamelist” that consists of a simple union of the Users responsible for all the Build's Changes.
The Build provides (through the IBuildStatus interface) a list of Users who are “involved” in the build. For now this is equal to the blamelist, but in the future it will be expanded to include a “build sheriff” (a person who is “on duty” at that time and responsible for watching over all builds that occur during their shift), as well as per-module owners who simply want to keep watch over their domain (chosen by subdirectory or a regexp matched against the filenames pulled out of the Changes). The Involved Users are those who probably have an interest in the results of any given build.
In the future, Buildbot will acquire the concept of “Problems”, which last longer than builds and have beginnings and ends. For example, a test case which passed in one build and then failed in the next is a Problem. The Problem lasts until the test case starts passing again, at which point the Problem is said to be “resolved”.
If a code change appears to have gone into the tree at the same time as the test started failing, that Change is marked as being responsible for the Problem, and the user who made the change is added to the Problem's “Guilty” list. In addition to this user, there may be others who share responsibility for the Problem (module owners, sponsoring developers). In addition to the Responsible Users, there may be a set of Interested Users, who take an interest in the fate of the Problem.
Problems therefore have sets of Users who may want to be kept aware of the condition of the problem as it changes over time. If configured, the Buildbot can pester everyone on the Responsible list with increasing harshness until the problem is resolved, with the most harshness reserved for the Guilty parties themselves. The Interested Users may merely be told when the problem starts and stops, as they are not actually responsible for fixing anything.
The buildbot.status.mail.MailNotifier
class provides a
status target which can send email about the results of each build. It
accepts a static list of email addresses to which each message should be
delivered, but it can also be configured to send mail to the Build's
Interested Users. To do this, it needs a way to convert User names into
email addresses.
For many VC systems, the User Name is actually an account name on the system which hosts the repository. As such, turning the name into an email address is a simple matter of appending “@repositoryhost.com”. Some projects use other kinds of mappings (for example the preferred email address may be at “project.org” despite the repository having an unrelated hostname), and some VC systems have full separation between the concept of a user and that of an account on the repository host (like Perforce). Some systems (like Arch) put a full contact email address in every change.
To convert these names to addresses, the MailNotifier uses an EmailLookup
object. This provides a .getAddress method which accepts a name and
(eventually) returns an address. The default MailNotifier
module provides an EmailLookup which simply appends a static string,
configurable when the notifier is created. To create more complex behaviors
(perhaps using an LDAP lookup, or using “finger” on a central host to
determine a preferred address for the developer), provide a different object
as the lookup
argument.
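As an illustration, a custom lookup might be as simple as the following sketch. The class name and domain are hypothetical, and the only interface assumed here is the .getAddress method described above; depending on the Buildbot version, the object may also need to declare the appropriate lookup interface.
from buildbot.status import mail

class DomainLookup:
    # Map a VC username to an address at a fixed project domain.
    def __init__(self, domain):
        self.domain = domain
    def getAddress(self, name):
        # 'name' is the who-string taken from the Change
        return name + "@" + self.domain

m = mail.MailNotifier(fromaddr="buildbot@localhost",
                      sendToInterestedUsers=True,
                      lookup=DomainLookup("project.org"))
c['status'].append(m)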
In the future, when the Problem mechanism has been set up, the Buildbot will need to send mail to arbitrary Users. It will do this by locating a MailNotifier-like object among all the buildmaster's status targets, and asking it to send messages to various Users. This means the User-to-address mapping only has to be set up once, in your MailNotifier, and every email message the buildbot emits will take advantage of it.
Like MailNotifier, the buildbot.status.words.IRC
class
provides a status target which can announce the results of each build. It
also provides an interactive interface by responding to online queries
posted in the channel or sent as private messages.
The buildbot can be configured to map User names to IRC nicknames, to watch for the recent presence of these nicknames, and to deliver build status messages to the interested parties. Like MailNotifier does for email addresses, the IRC object has an IRCLookup which is responsible for nicknames. The mapping can be set up statically, or it can be updated by online users themselves (by claiming a username with some kind of “buildbot: i am user warner” commands).
Once the mapping is established, the rest of the buildbot can ask the
IRC
object to send messages to various users. It can report on
the likelihood that the user saw the given message (based upon how long the
user has been inactive on the channel), which might prompt the Problem
Hassler logic to send them an email message instead.
The Buildbot also offers a PB-based status client interface which can display real-time build status in a GUI panel on the developer's desktop. This interface is normally anonymous, but it could be configured to let the buildmaster know which developer is using the status client. The status client could then be used as a message-delivery service, providing an alternative way to deliver low-latency high-interruption messages to the developer (like “hey, you broke the build”).
The buildbot's behavior is defined by the “config file”, which normally lives in the master.cfg file in the buildmaster's base directory (but this can be changed with an option to buildbot master). This file completely specifies which Builders are to be run, which slaves they should use, how Changes should be tracked, and where the status information is to be sent. The buildmaster's .tac file names the base directory; everything else comes from the config file.
A sample config file was installed for you when you created the buildmaster, but you will need to edit it before your buildbot will do anything useful.
This chapter gives an overview of the format of this file and the various sections in it. You will need to read the later chapters to understand how to fill in each section properly.
The config file is, fundamentally, just a piece of Python code which defines a dictionary named BuildmasterConfig, with a number of keys that are treated specially. You don't need to know Python to do basic configuration, though; you can just copy the syntax of the sample file. If you are comfortable writing Python code, however, you can use all the power of a full programming language to achieve more complicated configurations.
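For orientation, a stripped-down master.cfg might look like the following sketch. The values are placeholders, and the sample file installed with the buildmaster remains the authoritative starting point; each key is explained later in this chapter.
# master.cfg skeleton (placeholder values)
c = BuildmasterConfig = {}
c['bots'] = [('bot1name', 'bot1passwd')]
c['sources'] = []
c['builders'] = []
c['slavePortnum'] = 10000
c['status'] = []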
The BuildmasterConfig
name is the only one which matters: all
other names defined during the execution of the file are discarded.
When parsing the config file, the Buildmaster generally compares the old configuration with the new one and performs the minimum set of actions necessary to bring the buildbot up to date: Builders which are
not changed are left untouched, and Builders which are modified get to
keep their old event history.
Basic Python syntax: comments start with a hash character (“#”), tuples are defined with (parenthesis, pairs), arrays are defined with [square, brackets], tuples and arrays are mostly interchangeable. Dictionaries (data structures which map “keys” to “values”) are defined with curly braces: {'key1': 'value1', 'key2': 'value2'}. Function calls (and object instantiation) can use named parameters, like w = html.Waterfall(http_port=8010).
The config file starts with a series of import
statements,
which make various kinds of Steps and Status targets available for
later use. The main BuildmasterConfig
dictionary is created,
then it is populated with a variety of keys. These keys are broken
roughly into the following sections, each of which is documented in
the rest of this chapter:
The config file can use a few names which are placed into its namespace. The most useful is basedir, the buildmaster's base directory, which can be used to build up absolute pathnames; for example, the config file itself lives at os.path.expanduser(os.path.join(basedir, 'master.cfg')).
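For instance, a config file could use basedir to locate an auxiliary file kept next to master.cfg. This is only a sketch: the recipients.txt filename is hypothetical.
import os.path

# 'basedir' is placed into the config file's namespace by the buildmaster
recipients_file = os.path.expanduser(os.path.join(basedir, 'recipients.txt'))
extra_recipients = [line.strip() for line in open(recipients_file)]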
The config file is only read at specific points in time. It is first
read when the buildmaster is launched. Once it is running, there are
various ways to ask it to reload the config file. If you are on the
system hosting the buildmaster, you can send a SIGHUP
signal to
it: the buildbot tool has a shortcut for this:
buildbot sighup BASEDIR
The debug tool (buildbot debugclient --master HOST:PORT) has a
“Reload .cfg” button which will also trigger a reload. In the
future, there will be other ways to accomplish this step (probably a
password-protected button on the web page, as well as a privileged IRC
command).
There are a couple of basic settings that you use to tell the buildbot what project it is working on. This information is used by status reporters to let users find out more about the codebase being exercised by this particular Buildbot installation.
c['projectName'] = "Buildbot"
c['projectURL'] = "http://buildbot.sourceforge.net/"
c['buildbotURL'] = "http://localhost:8010/"
projectName is a short string that will be used to describe the project that this buildbot is working on. For example, it is used as the title of the waterfall HTML page.
projectURL is a string that gives a URL for the project as a whole. HTML status displays will show projectName as a link to projectURL, to provide a link from buildbot HTML pages to your project's home page.
The buildbotURL string should point to the location where the buildbot's internal web server (usually the html.Waterfall page) is visible. This typically uses the port number set when you create the Waterfall object, but the buildbot needs your help to figure out a suitable externally-visible host name.
When status notices are sent to users (either by email or over IRC),
buildbotURL
will be used to create a URL to the specific build
or problem that they are being notified about. It will also be made
available to queriers (over IRC) who want to find out where to get
more information about this buildbot.
The c['sources'] key is a list of ChangeSource instances. This defines how the buildmaster learns about source code changes. More information about what goes here is available in Getting Source Code Changes.
c['sources'] = [buildbot.changes.pb.PBChangeSource()]
The buildmaster will listen on a TCP port of your choosing for connections from buildslaves. It can also use this port for connections from remote Change Sources, status clients, and debug tools. This port should be visible to the outside world, and you'll need to tell your buildslave admins about your choice.
It does not matter which port you pick, as long as it is externally visible; however, you should probably use something larger than 1024, since most operating systems don't allow non-root processes to bind to low-numbered ports.
c['slavePortnum'] = 10000
The c['bots']
key is a list of known buildslaves. Each
buildslave is defined by a tuple of (slavename, slavepassword). These
are the same two values that need to be provided to the buildslave
administrator when they create the buildslave.
c['bots'] = [('bot-solaris', 'solarispasswd'), ('bot-bsd', 'bsdpasswd'), ]
The slavenames must be unique, of course. The password exists to prevent evildoers from interfering with the buildbot by inserting their own (broken) buildslaves into the system.
Buildslaves with an unrecognized slavename or a non-matching password will be rejected when they attempt to connect.
The c['builders'] key is a list of dictionaries which specify the Builders. The Buildmaster runs a collection of Builders, each of which handles a single type of build (e.g. full versus quick), on a single build slave. A Buildbot which makes sure that the latest code (“HEAD”) compiles correctly across four separate architectures will have four Builders, each performing the same build but on different slaves (one per platform). A complete Builder specification is sketched after the list of keys below.
Each Builder gets a separate column in the waterfall display. In general, each Builder runs independently (although various kinds of interlocks can cause one Builder to have an effect on another).
Each Builder specification dictionary has several required keys:
name
slavename
The slavename must appear in the c['bots'] list. Each buildslave can accommodate multiple Builders.
builddir
factory
A buildbot.process.factory.BuildFactory instance which controls both when the build is run and how it is performed. Full details appear in their own chapter (see Build Process). Parameters like the location of the CVS repository and the compile-time options used for the build are generally provided as arguments to the factory's constructor.
Other optional keys may be set on each Builder:
periodicBuildTime
If set, the Builder will initiate a build every periodicBuildTime seconds. This may be useful when you first get started using the buildbot, before you have a Change Source configured. E.g., if you want a buildbot which just recompiles the tree every hour, set this to 60*60.
category
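Putting these keys together, a Builder specification might look something like the following sketch. The builder name, repository location, and commands are placeholders, and the buildbot.process.step module path and the s() helper are assumptions based on the BuildFactory conventions described in the Build Factories chapter; only the required dictionary keys themselves come from the list above.
from buildbot.process import step
from buildbot.process.factory import BuildFactory, s

f = BuildFactory([s(step.CVS,
                    cvsroot=":pserver:anonymous@cvs.example.org:/cvsroot/project",
                    cvsmodule="project",
                    mode="update"),
                  s(step.Compile, command=["make", "all"]),
                  s(step.Test, command=["make", "test"]),
                  ])

c['builders'] = [
    {'name': 'quick-solaris',
     'slavename': 'bot-solaris',   # must appear in c['bots']
     'builddir': 'quick-solaris',
     'factory': f,
     },
    ]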
c['interlocks']
is a list of Interlock specifications. Each one
is a 3-tuple of interlock-name, feeder-list, and watcher-list. The
interlock name is a unique string which distinguishes one interlock
from another. The feeder-list is a list of strings which name the
Builders that this interlock depends upon: the interlock will not “open” for any given Change until all those Builders have completed successfully. The watcher-list is a list of strings which
name the Builders that wait for this interlock to open. Those Builders
will not run until their Interlocks' feeding Builders have passed.
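For example, an interlock that holds back a slow full build until two quick builds have passed could be written like this sketch (the interlock and builder names are placeholders):
c['interlocks'] = [('quick-before-full',
                    ['quick-solaris', 'quick-bsd'],   # feeders: must all pass first
                    ['full-solaris'],                 # watchers: wait for the interlock
                    )]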
This feature is scheduled to be decomposed into a more useful pair: Dependency and Lock. The first indicates that one build depends upon another completing correctly, while the second indicates two builds that may not be run at the same time. Locks are useful when running multiple Builders on the same (slow) buildslave, where running the builds in parallel would cause thrashing.
The Buildmaster has a variety of ways to present build status to
various users. Each such delivery method is a “Status Target” object
in the configuration's status
list. To add status targets, you
just append more objects to this list:
c['status'] = []

from buildbot.status import html
c['status'].append(html.Waterfall(http_port=8010))

from buildbot.status import mail
m = mail.MailNotifier(fromaddr="buildbot@localhost",
                      extraRecipients=["builds@lists.example.com"],
                      sendToInterestedUsers=False)
c['status'].append(m)

from buildbot.status import words
c['status'].append(words.IRC(host="irc.example.com", nick="bb",
                             channels=["#example"]))
Status delivery has its own chapter (see Status Delivery), in which all the built-in status targets are documented.
If you set c['debugPassword'], then you can connect to the buildmaster with the diagnostic tool launched by buildbot debugclient MASTER:PORT. From this tool, you can reload the config file, manually force builds, and inject changes, which may be useful for testing your buildmaster without actually committing changes to your repository (or before you have the Change Sources set up). The debug tool uses the same port number as the slaves do: c['slavePortnum'], and is authenticated with this password.
c['debugPassword'] = "debugpassword"
If you set c['manhole']
to an instance of the
buildbot.master.Manhole
class, you can telnet into the
buildmaster and get an interactive Python shell, which may be useful
for debugging buildbot internals. It is probably only useful for
buildbot developers.
from buildbot.master import Manhole
c['manhole'] = Manhole(9999, "admin", "password")
The most common way to use the Buildbot is centered around the idea of Source Trees: a directory tree filled with source code of some form
which can be compiled and/or tested. Some projects use languages that don't
involve any compilation step: nevertheless there may be a build
phase
where files are copied or rearranged into a form that is suitable for
installation. Some projects do not have unit tests, and the Buildbot is
merely helping to make sure that the sources can compile correctly. But in
all of these cases, the thing-being-tested is a single source tree.
A Version Control System maintains a source tree, and tells the buildmaster when it changes. The first step of each Build is typically to acquire a copy of some version of this tree.
This chapter describes how the Buildbot thinks about source trees and Changes, and how it learns about them.
Each Buildmaster watches a single source tree. Changes can be provided by a variety of ChangeSource types, however any given project will typically have only a single ChangeSource active. This section provides a list of ChangeSource types and descriptions of how to set them up.
Each source tree has a nominal top. Each Change has a list of filenames, which are all relative to this top location. The ChangeSource is responsible for doing whatever is necessary to accomplish this. This generally involves a Prefix: a partial
pathname which is stripped from the front of all filenames provided to
the ChangeSource. Files which are outside this sub-tree are ignored by
the ChangeSource: it does not generate Changes for those files.
The master.cfg configuration file has a dictionary key named BuildmasterConfig['sources'], which holds a list of IChangeSource objects. The config file will typically create an
object from one of the classes described below and stuff it into the
list.
s = FreshCVSSourceNewcred(host="host", port=4519,
                          user="alice", passwd="secret",
                          prefix="Twisted")
BuildmasterConfig['sources'] = [s]
The CVSToys package provides a
server which runs on the machine that hosts the CVS repository it
watches. It has a variety of ways to distribute commit notifications,
and offers a flexible regexp-based way to filter out uninteresting
changes. One of the notification options is named PBService and works by listening on a TCP port for clients. These clients subscribe
to hear about commit notifications.
The buildmaster has a PBService
client built in. There are two
versions of it, one for old versions of CVSToys (1.0.9 and earlier)
which used the oldcred
authentication framework, and one for
newer versions (1.0.10 and later) which use newcred. Both are classes in the buildbot.changes.freshcvs package.
FreshCVSSourceNewcred
objects are created with the following
parameters:
host and port
The host and port on which the freshcvs daemon is listening.
user and passwd
The login information to use when connecting to the freshcvs daemon. These must match the server's values, which are defined in the freshCfg configuration file (which lives in the CVSROOT directory of the repository).
prefix
CVSToys also provides a MailNotification
action which will send
email to a list of recipients for each commit. This tends to work
better than using /bin/mail
from within the CVSROOT/loginfo
file directly, as CVSToys will batch together all files changed during
the same CVS invocation, and can provide more information (like
creating a ViewCVS URL for each file changed).
The Buildbot's FCMaildirSource
is a ChangeSource which knows
how to parse these CVSToys messages and turn them into Change objects.
It watches a Maildir for new messages. The usual installation process looks like this:
Create a mailing list, projectname-commits.
Configure the CVSToys MailNotification action to send commit mail to this mailing list.
Subscribe the buildmaster to the list, deliver the list's messages into a maildir, and create an FCMaildirSource to watch the maildir for commit messages.
The FCMaildirSource
is created with two parameters: the
directory name of the maildir root, and the prefix to strip.
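In the config file this might look like the following sketch. The maildir path and prefix are placeholders, and the module path in the import is an assumption (FCMaildirSource lives in the buildbot.changes package, but the exact module has varied between releases).
from buildbot.changes.freshcvsmail import FCMaildirSource  # module path assumed

fc = FCMaildirSource("/home/buildmaster/Maildir-commits", prefix="Twisted")
c['sources'] = [fc]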
There are other types of maildir-watching ChangeSources, which only differ in the function used to parse the message body.
SyncmailMaildirSource
knows how to parse the message format
used in mail sent by Syncmail.
BonsaiMaildirSource
parses messages sent out by Bonsai.
The last kind of ChangeSource actually listens on a TCP port for
clients to connect and push change notices into the
Buildmaster. This is used by the Subversion notification tool (in
contrib/svn_buildbot.py), which is run by the SVN server and connects
to the buildmaster directly. This is also useful for creating new
kinds of change sources that work on a push
model instead of
some kind of subscription scheme, for example a script which is run
out of a .forward file.
This ChangeSource can be configured to listen on its own TCP port, or
it can share the port that the buildmaster is already using for the
buildslaves to connect. (This is possible because the
PBChangeSource
uses the same protocol as the buildslaves, and
they can be distinguished by the username
attribute used when
the initial connection is established). It might be useful to have it
listen on a different port if, for example, you wanted to establish
different firewall rules for that port. You could allow only the SVN
repository machine access to the PBChangeSource
port, while
allowing only the buildslave machines access to the slave port. Or you
could just expose one port and run everything over it. Note:
this feature is not yet implemented, the PBChangeSource will always
share the slave port and will always have a user name of change, and a passwd of changepw. These limitations will be removed in the future.
The PBChangeSource is created with the following arguments:
port
Which port to listen on. If None (which is the default), it shares the port used for buildslave connections. Not Implemented, always set to None.
user and passwd
The user/passwd account information which the client must use, defaulting to change and changepw. user is currently always set to change, passwd is always set to changepw.
prefix
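A minimal configuration, building on the example shown earlier in this chapter, might look like this sketch (the prefix value is a placeholder):
from buildbot.changes.pb import PBChangeSource

c['sources'] = [PBChangeSource(prefix="trunk")]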
This section lists all the standard BuildStep objects available for use in a Build, and the parameters which can be used to control each.
The standard Build (described in the BuildFactory docs) runs a series of BuildSteps in order, only stopping if one of them requests that the build be halted. It collects status information from each one to create an overall build status (of SUCCESS, WARNINGS, or FAILURE).
All BuildSteps accept some common parameters to control how their individual status affects the overall build.
Arguments common to all BuildStep
subclasses:
name
haltOnFailure
flunkOnWarnings
flunkOnFailure
warnOnWarnings
warnOnFailure
This is a useful base class for just about everything you might want to do during a build (except for the initial source checkout). It runs a single command in a child shell on the build slave. All stdout/stderr is recorded into a LogFile. The step finishes with a status of FAILURE if the command's exit code is non-zero, otherwise it has a status of SUCCESS.
The preferred way to specify the command is with a list of argv strings, since this allows for spaces in filenames and avoids doing any fragile shell-escaping. You can also specify the command with a single string, in which case the string is given to '/bin/sh -c COMMAND' for parsing.
All ShellCommands are run by default in the “workdir”, which defaults to the build subdirectory of the slave builder's base directory. The absolute path of the workdir will thus be the slave's basedir (set as an option to mktap) plus the builder's basedir (set in the builder's ['builddir'] key in master.cfg) plus the workdir itself (a class-level attribute of the BuildFactory, defaults to build).
ShellCommand Arguments:
command
env
want_stdout
want_stderr
Like want_stdout, but for stderr. Note that commands run through a PTY do not have separate stdout/stderr streams: both are merged into stdout.
timeout
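As an illustration, a ShellCommand step might be added to a factory's step list like this sketch. The lint script is hypothetical, and the s() helper and buildbot.process.step module path are the same assumptions used in the Builder example earlier; only the argument names come from the list above.
from buildbot.process import step
from buildbot.process.factory import s

lint = s(step.ShellCommand,
         command=["python", "run_lint.py", "--strict"],  # argv list avoids shell quoting
         env={'PYTHONPATH': "."},
         timeout=1200,        # seconds
         warnOnFailure=True)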
Several subclasses of ShellCommand are provided as starting points for common build steps. These are all very simple: they just override a few parameters so you don't have to specify them yourself, making the master.cfg file less verbose.
This is intended to handle the ./configure
step from
autoconf-style projects, or the perl Makefile.PL
step from perl
MakeMaker.pm-style modules. The default command is ./configure
but you can change this by providing a command=
parameter.
This is meant to handle compiling or building a project written in C. The
default command is make all
. When the compile is finished, the
log file is scanned for GCC error/warning messages and a summary log is
created with any problems that were seen (TODO: the summary is not yet
created).
This is meant to handle unit tests. The default command is make test, and the warnOnFailure flag is set.
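Together, these subclasses keep a typical autoconf-style build description short. The following sketch (the source checkout step is omitted, and the s() helper is the same assumption as before) shows all three in use:
from buildbot.process import step
from buildbot.process.factory import s

steps = [s(step.Configure),                        # runs ./configure by default
         s(step.Compile),                          # runs make all by default
         s(step.Test, command=["make", "check"]),  # override the default make test
         ]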
The first step of any build is typically to acquire the source code from which the build will be performed. There are several classes to handle this, each for a different source control system. For a description of how Buildbot treats source control in general, take a look at the Getting Sources documentation.
All source checkout steps accept some common parameters to control how they get the sources and where they should be placed. The remaining per-system parameters are mostly to specify where exactly the sources are coming from.
mode
A string describing the kind of VC operation desired; it defaults to update. The possible values are:
update
copy
clobber
export
Like clobber, except that the 'cvs export' command is used to create the working directory. This command removes all CVS metadata files (the CVS/ directories) from the tree, which is sometimes useful for creating source tarballs (to avoid including the metadata in the tar file).
workdir
alwaysUseLatest
retry
a tuple of (delay, repeats), which means that when a full VC checkout fails, it should be retried up to repeats times, waiting delay seconds between attempts. If you don't provide this, it defaults to None, which means VC operations should not be retried. This is provided to make life easier for buildslaves which are stuck behind poor network connections.
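For instance, a checkout step specification might combine these parameters like this. The svnurl argument is SVN-specific and described below; the URL and retry values are purely illustrative.

from buildbot.process import step
checkout = (step.SVN,
            {'svnurl': "http://svn.example.com/repos/trunk",  # illustrative repository URL
             'mode': "update",       # in-place incremental update, as discussed below
             'retry': (120, 3)})     # retry up to 3 times, waiting 120 seconds between attempts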
My habit as a developer is to do a cvs update and make each morning. Problems can occur, either because of bad code being checked in, or by incomplete dependencies causing a partial rebuild to fail where a complete from-scratch build might succeed. A quick Builder which emulates this incremental-build behavior would use the mode='update' setting.
On the other hand, other kinds of dependency problems can cause a clean build to fail where a partial build might succeed. This frequently results from a link step that depends upon an object file that was removed from a later version of the tree: in the partial tree, the object file is still around (even though the Makefiles no longer know how to create it).
“Official” builds (traceable builds performed from a known set of source revisions) are always done as clean builds, to make sure they are not influenced by any uncontrolled factors (like leftover files from a previous build). A “full” Builder which behaves this way would want to use the mode='clobber' setting.
The CVS
build step performs a CVS checkout or update. It
takes the following arguments:
cvsroot
the CVSROOT to use, for example :pserver:anonymous@cvs.sourceforge.net:/cvsroot/buildbot
cvsmodule
the CVS module to check out, which is generally a subdirectory of the CVSROOT. The cvsmodule for the Buildbot source code is buildbot.
branch
a string to be used in a -r argument. This is most useful for specifying a branch to work on. Defaults to HEAD.
global_options
checkoutDelay
used to compute the timestamp passed to the -D option. Defaults to half of the parent Build's treeStableTimer.
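A CVS checkout step for the Buildbot's own repository might therefore be specified like this (the mode and branch values are only illustrative):

from buildbot.process import step
cvs_checkout = (step.CVS,
                {'cvsroot': ":pserver:anonymous@cvs.sourceforge.net:/cvsroot/buildbot",
                 'cvsmodule': "buildbot",
                 'branch': "HEAD",      # the default; a branch tag could be given instead
                 'mode': "update"})     # incremental 'cvs update' in the existing workdir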
The SVN
build step performs a Subversion checkout or update.
It takes the following arguments.
svnurl
the URL argument that will be given to the svn checkout command. It dictates both where the repository is located and which sub-tree should be extracted. In this respect, it is like a combination of the CVS cvsroot and cvsmodule arguments. For example, if you are using a remote Subversion repository which is accessible through HTTP at a URL of http://svn.example.com/repos, and you wanted to check out the trunk/calc sub-tree, you would use svnurl="http://svn.example.com/repos/trunk/calc" as an argument to your SVN step.
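Continuing that example, the corresponding step specification would be (the mode value is only illustrative):

from buildbot.process import step
svn_checkout = (step.SVN,
                {'svnurl': "http://svn.example.com/repos/trunk/calc",
                 'mode': "copy"})   # one of the modes described above (update/copy/clobber/export)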
The Darcs
build step performs a
Darcs checkout or update. It
takes the following arguments:
repourl
The Arch
build step performs an Arch checkout or update using the tla
client. It takes the
following arguments:
url
version
archive
see the Bazaar step, below.
Bazaar
is an alternate implementation of the Arch VC system,
which uses a client named baz
. The checkout semantics are just
different enough from tla
that there is a separate BuildStep for
it.
It takes exactly the same arguments as Arch
, except that the
archive=
parameter is required. (baz does not emit the archive
name when you do baz register-archive
, so we must provide it
ourselves).
The P4Sync
build step performs a
Perforce update. It is a temporary
facility: a more complete P4 checkout step (named P4
) will
eventually replace it. This step requires significant manual setup on
each build slave. It takes the following arguments.
p4port
Each Builder is equipped with a “build factory”, which is
responsible for producing the actual Build
objects that perform
each build. This factory is created in the configuration file, and attached
to a Builder through the 'factory' element of its dictionary.
The standard BuildFactory
object creates Build
objects by default. These Builds will each execute a collection of
BuildSteps in a fixed sequence. Each step can affect the results of the
build, but in general there is little intelligence to tie the different
steps together. You can create subclasses of Build
to implement
more sophisticated build processes, and then use a subclass of
BuildFactory
to create instances of your new Build
subclass.
The steps used by these builds are all subclasses of BuildStep
.
The standard ones provided with Buildbot are documented later (see Build Steps). You can also write your own subclasses to use in builds.
The basic behavior for a BuildStep
is to:
More sophisticated steps may produce additional information and provide it to later build steps, or store it in the factory to provide to later builds.
The default BuildFactory
, provided in the
buildbot.process.factory
module, is constructed with a list of
“BuildStep specifications”: a list of (step_class, kwargs)
tuples for each. When asked to create a Build, it loads the list of
steps into the new Build object. When the Build is actually started,
these step specifications are used to create the actual set of
BuildSteps, which are then executed one at a time. For example, a
build which consists of a CVS checkout followed by a make build
would be constructed as follows:
from buildbot.process import step, factory
from buildbot.process.factory import s
# s is a convenience function, defined with:
# def s(steptype, **kwargs): return (steptype, kwargs)

f = factory.BuildFactory([s(step.CVS,
                            cvsroot=CVSROOT, cvsmodule="project", mode="update"),
                          s(step.Compile, command=["make", "build"])])
Each step can affect the build process in the following ways:
haltOnFailure
If this attribute is True, then a failure in the step (i.e. if it completes with a result of FAILURE) will cause the whole build to be terminated immediately: no further steps will be executed. This is useful for setup steps upon which the rest of the build depends: if the CVS checkout or ./configure process fails, there is no point in trying to compile or test the resulting tree.
flunkOnFailure, flunkOnWarnings
If the flunkOnFailure or flunkOnWarnings flag is set, then a result of FAILURE or WARNINGS will mark the build as a whole as FAILED. However, the remaining steps will still be executed. This is appropriate for things like multiple testing steps: a failure in any one of them will indicate that the build has failed, however it is still useful to run them all to completion.
warnOnFailure, warnOnWarnings
If the warnOnFailure or warnOnWarnings flag is set, then a result of FAILURE or WARNINGS will mark the build as having WARNINGS, and the remaining steps will still be executed. This may be appropriate for certain kinds of optional build or test steps. For example, a failure experienced while building documentation files should be made visible with a WARNINGS result but not be serious enough to warrant marking the whole build with a FAILURE.
In addition, each Step produces its own results, may create logfiles, etc. However only the flags described above have any effect on the build as a whole.
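For example, an optional documentation-building step could be marked so that its failure only produces a warning. This is a sketch; the command is illustrative, and the flags are the ones described above.

from buildbot.process import step
docs = (step.ShellCommand,
        {'command': ["make", "docs"],   # illustrative command
         'flunkOnFailure': False,       # a docs failure should not fail the whole build...
         'warnOnFailure': True})        # ...but it should mark the build with WARNINGS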
The pre-defined BuildSteps like CVS
and Compile
have
reasonably appropriate flags set on them already. For example, without
a source tree there is no point in continuing the build, so the
CVS
class has the haltOnFailure
flag set to True. Look
in buildbot/process/step.py to see how the other Steps are
marked.
Each Step is created with an additional workdir
argument that
indicates where its actions should take place. This is specified as a
subdirectory of the slave builder's base directory, with a default
value of build
. This is only implemented as a step argument (as
opposed to simply being a part of the base directory) because the
CVS/SVN steps need to perform their checkouts from the parent
directory.
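As a brief sketch (the command and subdirectory name are illustrative), an individual step can be pointed at a different workdir:

from buildbot.process import step
api_docs = (step.ShellCommand,
            {'command': ["make", "apidocs"],   # illustrative command
             'workdir': "build/doc"})          # a subdirectory of the slave builder's base directory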
Several attributes from the BuildFactory are copied into each Build.
treeStableTimer
useProgress
The difference between a “full build” and a “quick build” is that
quick builds are generally done incrementally, starting with the tree
where the previous build was performed. That simply means that the
source-checkout step should be given a mode='update'
flag, to
do the source update in-place.
In addition to that, the useProgress flag should be set to False. Incremental builds will (or at least they ought to) compile as few files as necessary, so they will take an unpredictable amount of time to run. Therefore it would be misleading to claim to predict how long the build will take.
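A minimal sketch of such a quick Builder's factory, assuming useProgress can simply be set as an attribute on the BuildFactory after it is constructed (the svnurl is illustrative):

from buildbot.process import step, factory
from buildbot.process.factory import s

quick = factory.BuildFactory([s(step.SVN,
                                svnurl="http://svn.example.com/repos/trunk",  # illustrative URL
                                mode="update"),       # in-place incremental update
                              s(step.Compile)])
quick.useProgress = False   # don't try to predict how long incremental builds will take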
Many projects use one of a few popular build frameworks to simplify the creation and maintenance of Makefiles or other compilation structures. Buildbot provides several pre-configured BuildFactory subclasses which let you build these projects with a minimum of fuss.
GNU Autoconf is a software portability tool, intended to make it possible to write programs in C (and other languages) which will run on a variety of UNIX-like systems. Most GNU software is built using autoconf. It is frequently used in combination with GNU automake. These tools both encourage a build process which usually looks like this:
% CONFIG_ENV=foo ./configure --with-flags
% make all
% make check
# make install
(except of course the Buildbot always skips the make install
part).
The Buildbot's buildbot.process.factory.GNUAutoconf factory is designed to build projects which use GNU autoconf and/or automake. The configuration environment variables, the configure flags, and the command lines used for the compile and test are all configurable; in general the default values will be suitable.
Example:
# use the s() convenience function defined earlier
f = factory.GNUAutoconf(source=s(step.SVN, svnurl=URL, mode="copy"),
                        configureFlags=["--disable-nls"])
Required Arguments:
source
Optional Arguments:
configure
the command used to run the configure step. Defaults to ./configure. Accepts either a string or a list of shell argv elements.
configureEnv
a dictionary of environment variables to use for the configure step, commonly something like CFLAGS="-O2 -g" (to set the optimization level and debug symbols used during the compile). Defaults to an empty dictionary.
configureFlags
a list of flags to append to the configure command, such as ["--without-x"] to disable windowing support. Defaults to an empty list.
compile
the command used for the compile step. Defaults to make all. If set to None, the compile step is skipped.
test
the command used for the test step. Defaults to make check. If set to None, the test step is skipped.
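Putting several of these arguments together, a more customized factory might look like the following sketch. The environment, flag, and command values are illustrative, not recommendations.

from buildbot.process import step, factory
from buildbot.process.factory import s

URL = "http://svn.example.com/repos/trunk"   # illustrative repository URL
f = factory.GNUAutoconf(source=s(step.SVN, svnurl=URL, mode="copy"),
                        configureEnv={"CFLAGS": "-O2"},   # illustrative environment override
                        configureFlags=["--without-x"],   # from the example above
                        compile=["make", "-j2", "all"],   # illustrative compile command
                        test=None)                        # skip the 'make check' step entirely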
Most Perl modules available from the CPAN
archive use the MakeMaker
module to provide configuration,
build, and test services. The standard build routine for these modules
looks like:
% perl Makefile.PL
% make
% make test
# make install
(except again the Buildbot skips the install step).
Buildbot provides a CPAN
factory to compile and test these
projects.
Arguments:
source
perl
which perl executable to use. Defaults to just perl.
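A sketch of a CPAN factory, with an illustrative repository URL and perl name:

from buildbot.process import step, factory
from buildbot.process.factory import s

f = factory.CPAN(source=s(step.SVN,
                          svnurl="http://svn.example.com/MyModule/trunk",  # illustrative URL
                          mode="copy"),
                 perl="perl5.8")   # use a specific perl executable instead of the default 'perl'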
Most Python modules use the distutils
package to provide
configuration and build services. The standard build process looks
like:
% python ./setup.py build
% python ./setup.py install
Unfortunately, although Python provides a standard unit-test framework
named unittest
, to the best of my knowledge distutils
does not provide a standardized target to run such unit tests. (please
let me know if I'm wrong, and I will update this factory).
The Distutils
factory provides support for running the build
part of this process. It accepts the same source=
parameter as
the other build factories.
Arguments:
source
python
which python executable to use. Defaults to just python.
test
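A sketch of a Distutils factory, with an illustrative repository URL and python name:

from buildbot.process import step, factory
from buildbot.process.factory import s

f = factory.Distutils(source=s(step.SVN,
                               svnurl="http://svn.example.com/mypkg/trunk",  # illustrative URL
                               mode="copy"),
                      python="python2.3")   # a specific interpreter instead of the default 'python'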
Twisted provides a unit test tool named trial
which provides a
few improvements over Python's built-in unittest
module. Many
python projects which use Twisted for their networking or application
services also use trial for their unit tests. These modules are
usually built and tested with something like the following:
% python ./setup.py build
% PYTHONPATH=build/lib.linux-i686-2.3 trial -v PROJECTNAME.test
% python ./setup.py install
Unfortunately, the build/lib directory into which the
built/copied .py files are placed is actually architecture-dependent,
and I do not yet know of a simple way to calculate its value. For many
projects it is sufficient to import their libraries “in place” from
the tree's base directory (PYTHONPATH=.
).
In addition, the PROJECTNAME value where the test files are
located is project-dependent: it is usually just the project's
top-level library directory, as common practice suggests the unit test
files are put in the test
sub-module. This value cannot be
guessed, the Trial
class must be told where to find the test
files.
The Trial
class provides support for building and testing
projects which use distutils and trial. If the test module name is
specified, trial will be invoked. The library path used for testing
can also be set.
One advantage of trial is that the Buildbot happens to know how to parse trial output, letting it identify which tests passed and which ones failed. The Buildbot can then provide fine-grained reports about how many tests have failed, when individual tests fail when they had been passing previously, etc.
Another feature of trial is that you can give it a series of source
.py files, and it will search them for special test-case-name
tags that indicate which test cases provide coverage for that file.
Trial can then run just the appropriate tests. This is useful for
quick builds, where you want to only run the test cases that cover the
changed functionality.
Arguments:
source
buildpython
which python executable to use when building the package. Defaults to just ['python']. It may be useful to add flags here, to suppress warnings during compilation of extension modules. This list is extended with ['./setup.py', 'build'] and then executed in a ShellCommand.
testpath
the value to use for PYTHONPATH when running the unit tests, if tests are being run. Defaults to . to include the project files in-place. The generated build library is frequently architecture-dependent, but may simply be build/lib for pure-python modules.
trialpython
the python interpreter (and any flags) used to run the trial command named by the trial argument below. It may be useful to add -W flags here to suppress warnings that occur while tests are being run. Defaults to an empty list, meaning trial will be run without an explicit interpreter, which is generally what you want if you're using /usr/bin/trial instead of, say, the ./bin/trial that lives in the Twisted source tree.
trial
the name of the trial command. It is occasionally useful to use an alternate executable, such as trial2.2, which might run the tests under an older version of Python. Defaults to trial.
tests
a string like PROJECTNAME.test, or a list of strings, naming the test cases to run. Defaults to None, indicating that no tests should be run. You must either set this or useTestCaseNames to do anything useful with the Trial factory.
useTestCaseNames
randomly
if set to True, tells Trial (with the --random=0 argument) to run the test cases in random order, which sometimes catches subtle inter-test dependency bugs. Defaults to False.
recurse
if set to True, tells Trial (with the --recurse argument) to look in all subdirectories for additional test cases. It isn't clear to me how this works, but it may be useful to deal with the unknown-PROJECTNAME problem described above, and is currently used in the Twisted buildbot to accommodate the fact that test cases are now distributed through multiple twisted.SUBPROJECT.test directories.
Unless one of tests or useTestCaseNames is set, no tests will be run.
Some quick examples follow. Most of these examples assume that the target python code (the “code under test”) can be reached directly from the root of the target tree, rather than being in a lib/ subdirectory.
# Trial(source, tests="toplevel.test") does: # python ./setup.py build # PYTHONPATH=. trial -to toplevel.test # Trial(source, tests=["toplevel.test", "other.test"]) does: # python ./setup.py build # PYTHONPATH=. trial -to toplevel.test other.test # Trial(source, useTestCaseNames=True) does: # python ./setup.py build # PYTHONPATH=. trial -to --testmodule=foo/bar.py.. (from Changes) # Trial(source, buildpython=["python2.3", "-Wall"], tests="foo.tests"): # python2.3 -Wall ./setup.py build # PYTHONPATH=. trial -to foo.tests # Trial(source, trialpython="python2.3", trial="/usr/bin/trial", # tests="foo.tests") does: # python2.3 -Wall ./setup.py build # PYTHONPATH=. python2.3 /usr/bin/trial -to foo.tests # For running trial out of the tree being tested (only useful when the # tree being built is Twisted itself): # Trial(source, trialpython=["python2.3", "-Wall"], trial="./bin/trial", # tests="foo.tests") does: # python2.3 -Wall ./setup.py build # PYTHONPATH=. python2.3 -Wall ./bin/trial -to foo.tests
If the output directory of ./setup.py build is known, you can pull the python code from the built location instead of the source directories. This should be able to handle variations in where the source comes from, as well as accommodating binary extension modules:
# Trial(source, tests="toplevel.test", testpath='build/lib.linux-i686-2.3')
# does:
#  python ./setup.py build
#  PYTHONPATH=build/lib.linux-i686-2.3 trial -to toplevel.test
More details are available in the docstrings for each class; use
pydoc buildbot.status.html.Waterfall
to see them. Most status
delivery objects take a categories=
argument, which can contain
a list of “category” names: in this case, it will only show status
for Builders that are in one of the named categories.
(implementor's note: each of these objects should be a
service.MultiService which will be attached to the BuildMaster object
when the configuration is processed. They should use
self.parent.getStatus()
to get access to the top-level IStatus
object.)
from buildbot.status import html
w = html.Waterfall(http_port=8080)
c['status'].append(w)
The buildbot.status.html.Waterfall
status target creates an
HTML “waterfall display”, which shows a time-based chart of events.
This display provides detailed information about all steps of all
recent builds, and provides hyperlinks to look at individual build
logs and source changes. If the http_port
argument is provided,
it represents the TCP port number on which the web server should
listen. If instead distrib_port
is provided, a twisted.web
distributed server will be started either on a TCP port (if
distrib_port
is an int) or more likely on a UNIX socket (if
distrib_port
is a string). The HTML page can have a favicon and
custom CSS: see the docstring for details.
The distrib_port
option means that, on a host with a
suitably-configured twisted-web server, you do not need to consume a
separate TCP port for the buildmaster's status web page. When the web
server is constructed with mktap web --user
, URLs that point to
http://host/~username/
are dispatched to a sub-server that is
listening on a UNIX socket at ~username/.twistd-web-pb
. On
such a system, it is convenient to create a dedicated buildbot
user, then set distrib_port
to
os.path.expanduser("~/.twistd-web-pb")
. This configuration will
make the HTML status page available at http://host/~buildbot/
.
Suitable URL remapping can make it appear at
http://host/buildbot/
, and the right virtual host setup can
even place it at http://buildbot.host/
.
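A sketch of that configuration, assuming the dedicated buildbot account described above:

import os.path
from buildbot.status import html

w = html.Waterfall(distrib_port=os.path.expanduser("~/.twistd-web-pb"))
c['status'].append(w)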
The buildbot.status.words.IRC
status target creates an IRC bot
which will attach to certain channels and be available for status
queries. It can also be asked to announce builds as they occur, or be
told to shut up.
from buildbot.status import words
irc = words.IRC("irc.example.org", "botnickname",
                channels=["channel1", "channel2"])
c['status'].append(irc)
Take a look at the docstring for words.IRC
for more details on
configuring this service.
To use the service, you address messages at the buildbot, either
normally (botnickname: status
) or with private messages
(/msg botnickname status
). The buildbot will respond in kind.
Some of the commands currently available:
list builders
status BUILDER
status all
watch BUILDER
last BUILDER
help COMMAND
(use help commands to get a list of known commands)
source
version
If the allowForce=True
option was used, some additional commands
will be available:
force build BUILDER REASON
stop build BUILDER REASON
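For example, assuming allowForce is accepted by the words.IRC constructor, the bot shown above could be granted these commands like this:

from buildbot.status import words
irc = words.IRC("irc.example.org", "botnickname",
                channels=["channel1", "channel2"],
                allowForce=True)   # enables the 'force build' and 'stop build' commands
c['status'].append(irc)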
buildbot.status.client.PBListener(port=int, user=str, passwd=str)
This sets up a PB listener on the given TCP port, to which a PB-based
status client can connect and retrieve status information.
buildbot statusgui
is an example of such a status client.
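A sketch of its use (the port, user, and passwd values are illustrative):

from buildbot.status import client
pbl = client.PBListener(port=8010, user="statususer", passwd="statuspw")
c['status'].append(pbl)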
The Buildbot's home page is at http://buildbot.sourceforge.net/
For configuration questions and general discussion, please use the
buildbot-devel
mailing list. The subscription instructions and
archives are available at
http://lists.sourceforge.net/lists/listinfo/buildbot-devel
[1] except Darcs, but since the Buildbot never changes its local source tree we can ignore the fact that Darcs uses a less centralized model
[2] many VC systems provide more complexity than this: in particular the local views that P4 and ClearCase can assemble out of various source directories are more complex than we're prepared to take advantage of here
[3] although this checkoutDelay can be overridden
[4] To be precise, it is a list of objects which all implement the buildbot.interfaces.IChangeSource Interface