Caution
Buildbot no longer supports Python 2.7 on the Buildbot master.
This is the Buildbot documentation for Buildbot version 2.4.1.
If you are evaluating Buildbot and would like to get started quickly, start with the Tutorial. Regular users of Buildbot should consult the Manual, and those wishing to modify Buildbot directly will want to be familiar with the Developer’s Documentation.
1. Buildbot Tutorial¶
1.1. First Run¶
1.1.1. Goal¶
This tutorial will take you from zero to running your first buildbot master and worker as quickly as possible, without changing the default configuration.
This tutorial is all about instant gratification and the five minute experience: in five minutes we want to convince you that this project works, and that you should seriously consider spending time learning the system. In this tutorial no configuration or code changes are done.
This tutorial assumes that you are running Unix, but might be adaptable to Windows.
Thanks to virtualenv, installing buildbot in a standalone environment is very easy. For those more familiar with Docker, there also exists a docker version of these instructions.
You should be able to cut and paste each shell block from this tutorial directly into a terminal.
1.1.2. Getting ready¶
There are many ways to get the code on your machine.
We will use the easiest one: via pip in a virtualenv. It has the advantage of not polluting your operating system, as everything will be contained in the virtualenv.
To make this work, you will need the following installed:
- Python and the development packages for it
- virtualenv
Preferably, use your distribution package manager to install these.
You will also need a working Internet connection, as virtualenv and pip will need to download other projects from the Internet. The master and builder daemons will need to be able to connect to github.com via HTTPS to fetch the repo we’re testing; if you need to use a proxy for this, ensure that either the HTTPS_PROXY or ALL_PROXY environment variable is set to your proxy, e.g., by executing export HTTPS_PROXY=http://localhost:9080 in the shell before starting each daemon.
Note
Buildbot does not require root access. Run the commands in this tutorial as a normal, unprivileged user.
1.1.3. Creating a master¶
The first necessary step is to create a virtualenv for our master. We will also use a separate directory to demonstrate the distinction between a master and worker:
mkdir -p ~/tmp/bb-master
cd ~/tmp/bb-master
On Python 3:
python3 -m venv sandbox
source sandbox/bin/activate
Now that we are ready, we need to install buildbot:
pip install --upgrade pip
pip install 'buildbot[bundle]'
Now that buildbot is installed, it’s time to create the master:
buildbot create-master master
Buildbot’s activity is controlled by a configuration file. We will use the sample configuration file unchanged:
mv master/master.cfg.sample master/master.cfg
Finally, start the master:
buildbot start master
You will now see some log information from the master in this terminal. It should end with lines like these:
2014-11-01 15:52:55+0100 [-] BuildMaster is running
The buildmaster appears to have (re)started correctly.
From now on, feel free to visit the web status page running on port 8010: http://localhost:8010/
Our master now needs (at least) a worker to execute its commands. For that, head on to the next section!
1.1.4. Creating a worker¶
The worker will be executing the commands sent by the master. In this tutorial, we are using the buildbot/hello-world project as an example. As a consequence of this, your worker will need access to the git command in order to checkout some code. Be sure that it is installed, or the builds will fail.
Same as we did for our master, we will create a virtualenv for our worker next to the other one. It would however be completely ok to do this on another computer - as long as the worker computer is able to connect to the master one:
mkdir -p ~/tmp/bb-worker
cd ~/tmp/bb-worker
On Python 2:
virtualenv --no-site-packages sandbox
source sandbox/bin/activate
On Python 3:
python3 -m venv sandbox
source sandbox/bin/activate
Install the buildbot-worker command:
pip install --upgrade pip
pip install buildbot-worker
# required for `runtests` build
pip install setuptools-trial
Now, create the worker:
buildbot-worker create-worker worker localhost example-worker pass
Note
If you decided to create this from another computer, you should replace localhost with the name of the computer where your master is running.
The username (example-worker) and password (pass) should be the same as those in master/master.cfg; verify this is the case by looking at the section for c['workers']:
cat ../bb-master/master/master.cfg
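In the sample configuration, the relevant line looks something like this (the worker name and password are the two arguments to the Worker object):
c['workers'] = [worker.Worker("example-worker", "pass")]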
And finally, start the worker:
buildbot-worker start worker
Check the worker’s output. It should end with lines like these:
2014-11-01 15:56:51+0100 [-] Connecting to localhost:9989
2014-11-01 15:56:51+0100 [Broker,client] message from master: attached
The worker appears to have (re)started correctly.
Meanwhile, from the other terminal, in the master log (twisted.log in the master directory), you should see lines like these:
2014-11-01 15:56:51+0100 [Broker,1,127.0.0.1] worker 'example-worker' attaching from IPv4Address(TCP, '127.0.0.1', 54015)
2014-11-01 15:56:51+0100 [Broker,1,127.0.0.1] Got workerinfo from 'example-worker'
2014-11-01 15:56:51+0100 [-] bot attached
You should now be able to go to http://localhost:8010, where you will see Buildbot’s web interface.
Click on “Builds” in the left-hand menu to open the submenu, and then on “Builders”, to see that the worker you just started has connected to the master.
Your master is now quietly waiting for new commits to hello-world. This doesn’t happen very often though. In the next section, we’ll see how to manually start a build.
We just wanted to get you to dip your toes in the water. It’s easy to take your first steps, but this is about as far as we can go without touching the configuration.
You’ve got a taste now, but you’re probably curious for more. Let’s step it up a little in the second tutorial by changing the configuration and doing an actual build. Continue on to A Quick Tour.
1.2. First Buildbot run with Docker¶
Note
Docker can be tricky to get working correctly if you haven’t used it before. If you’re having trouble, first determine whether it is a Buildbot issue or a Docker issue by running:
docker run ubuntu:12.04 apt-get update
If that fails, look for help with your Docker install. On the other hand, if that succeeds, then you may have better luck getting help from members of the Buildbot community.
Docker is a tool that makes building and deploying custom environments a breeze. It uses lightweight Linux containers (LXC) and performs quickly, making it a great instrument for the testing community. The next section includes a Docker pre-flight check. If it takes more than 3 minutes to get the ‘Success’ message, try the Buildbot pip-based first run instead.
1.2.1. Current Docker dependencies¶
- A Linux system, with at least kernel 3.8 and AUFS support (for example, standard Ubuntu, Debian, and Arch systems)
- The lxc, iptables, ca-certificates, and bzip2 packages
- A local clock on time or slightly in the future, for proper SSL communication
- This tutorial uses docker-compose to run a master, a worker, and a postgresql database server
1.2.2. Installation¶
Use the Docker installation instructions for your operating system.
Make sure you install docker-compose. As root or inside a virtualenv, run:
pip install docker-compose
Test docker is happy in your environment:
sudo docker run -i busybox /bin/echo Success
1.2.3. Building and running Buildbot¶
# clone the example repository
git clone --depth 1 https://github.com/buildbot/buildbot-docker-example-config
# Build the Buildbot container (it will take a few minutes to download packages)
cd buildbot-docker-example-config/simple
docker-compose up
You should now be able to go to http://localhost:8010 and see Buildbot’s web interface.
Click on “Builds” in the left-hand menu to open the submenu, and then on “Builders”, to see that the worker you just started has connected to the master.
1.2.4. Overview of the docker-compose configuration¶
This docker-compose configuration is made as a basis for what you would put in production:
- Separated containers for each component
- A solid database backend with postgresql
- A buildbot master that exposes its configuration to the docker host
- A buildbot worker that can be cloned in order to add additional power
- Containers are linked together so that the only port exposed externally is that of the web server
- The default master container is based on Alpine Linux for a minimal footprint
- The default worker container is based on the more widely known Ubuntu distribution, as this is the container you will want to customize
- The config is downloaded from a tarball accessible via a web server
1.2.5. Playing with your Buildbot containers¶
If you’ve come this far, you have a Buildbot environment that you can freely experiment with.
In order to modify the configuration, you need to fork the project on GitHub (https://github.com/buildbot/buildbot-docker-example-config). Then you can clone your own fork and start docker-compose again.
To modify your config, edit the master.cfg file, commit your changes, and push to your fork.
You can use the command buildbot check-config in order to make sure the config is valid before the push.
You will need to change the BUILDBOT_CONFIG_URL variable in docker-compose.yml so that it points to your GitHub fork.
BUILDBOT_CONFIG_URL may point to a .tar.gz file accessible over HTTP; several git servers, such as GitHub, can generate such a tarball automatically from the master branch of a repository. If BUILDBOT_CONFIG_URL does not end with .tar.gz, it is taken to be the URL of a master.cfg file accessible over HTTP.
1.2.6. Customize your Worker container¶
It is advisable to customize your worker container to suit your project’s build dependencies and needs. An example Dockerfile, which the Buildbot community uses for its own CI purposes, is available:
https://github.com/buildbot/metabbotcfg/blob/nine/docker/metaworker/Dockerfile
1.2.7. Multi-master¶
A multi-master environment can be set up using the multimaster/docker-compose.yml file in the example repository:
# Build the Buildbot container (it will take a few minutes to download packages)
cd buildbot-docker-example-config/multimaster
docker-compose up -d
docker-compose scale buildbot=4
1.2.8. Going forward¶
You’ve got a taste now, but you’re probably curious for more. Let’s step it up a little in the second tutorial by changing the configuration and doing an actual build. Continue on to A Quick Tour.
1.3. A Quick Tour¶
1.3.1. Goal¶
This tutorial will expand on the First Run tutorial by taking a quick tour around some of the features of buildbot that are hinted at in the comments in the sample configuration. We will simply change parts of the default configuration and explain the activated features.
As a part of this tutorial, we will make buildbot do a few actual builds.
This section will teach you how to:
- make simple configuration changes and activate them
- deal with configuration errors
- force builds
- enable and control the IRC bot
- enable ssh debugging
- add a ‘try’ scheduler
1.3.2. Setting Project Name and URL¶
Let’s start simple by looking at where you would customize the buildbot’s project name and URL.
We continue where we left off in the First Run tutorial.
Open a new terminal, and first enter the same sandbox you created before (where $EDITOR is your editor of choice like vim, gedit, or emacs):
cd ~/tmp/bb-master
source sandbox/bin/activate
$EDITOR master/master.cfg
Now, look for the section marked PROJECT IDENTITY which reads:
####### PROJECT IDENTITY
# the 'title' string will appear at the top of this buildbot installation's
# home pages (linked to the 'titleURL').
c['title'] = "Hello World CI"
c['titleURL'] = "https://buildbot.github.io/hello-world/"
If you want, you can change either of these links to anything you want to see what happens when you change them.
After making a change, go to the terminal and type:
buildbot reconfig master
You will see a handful of lines of output from the master log, much like this:
2011-12-04 10:11:09-0600 [-] loading configuration from /home/dustin/tmp/buildbot/master/master.cfg
2011-12-04 10:11:09-0600 [-] configuration update started
2011-12-04 10:11:09-0600 [-] builder runtests is unchanged
2011-12-04 10:11:09-0600 [-] removing IStatusReceiver <WebStatus on port tcp:8010 at 0x2aee368>
2011-12-04 10:11:09-0600 [-] (TCP Port 8010 Closed)
2011-12-04 10:11:09-0600 [-] Stopping factory <buildbot.status.web.baseweb.RotateLogSite instance at 0x2e36638>
2011-12-04 10:11:09-0600 [-] adding IStatusReceiver <WebStatus on port tcp:8010 at 0x2c2d950>
2011-12-04 10:11:09-0600 [-] RotateLogSite starting on 8010
2011-12-04 10:11:09-0600 [-] Starting factory <buildbot.status.web.baseweb.RotateLogSite instance at 0x2e36e18>
2011-12-04 10:11:09-0600 [-] Setting up http.log rotating 10 files of 10000000 bytes each
2011-12-04 10:11:09-0600 [-] WebStatus using (/home/dustin/tmp/buildbot/master/public_html)
2011-12-04 10:11:09-0600 [-] removing 0 old schedulers, updating 0, and adding 0
2011-12-04 10:11:09-0600 [-] adding 1 new changesources, removing 1
2011-12-04 10:11:09-0600 [-] gitpoller: using workdir '/home/dustin/tmp/buildbot/master/gitpoller-workdir'
2011-12-04 10:11:09-0600 [-] GitPoller repository already exists
2011-12-04 10:11:09-0600 [-] configuration update complete
Reconfiguration appears to have completed successfully.
The important lines are the ones telling you that it is loading the new configuration at the top, and the one at the bottom saying that the update is complete.
Now, if you go back to the waterfall page, you will see that the project’s name is whatever you may have changed it to and when you click on the URL of the project name at the bottom of the page it should take you to the link you put in the configuration.
1.3.3. Configuration Errors¶
It is very common to make a mistake when configuring buildbot, so you might as well see now what happens in that case and what you can do to fix the error.
Open up the config again and introduce a syntax error by removing the first single quote in the two lines you changed, so they read:
c[title'] = "Hello World CI"
c[titleURL'] = "https://buildbot.github.io/hello-world/"
This creates a Python SyntaxError.
Now go ahead and reconfig the buildmaster:
buildbot reconfig master
This time, the output looks like:
2015-08-14 18:40:46+0000 [-] beginning configuration update
2015-08-14 18:40:46+0000 [-] Loading configuration from '/data/buildbot/master/master.cfg'
2015-08-14 18:40:46+0000 [-] error while parsing config file:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/buildbot/master.py", line 265, in reconfig
d = self.doReconfig()
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1274, in unwindGenerator
return _inlineCallbacks(None, gen, Deferred())
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
result = g.send(result)
File "/usr/local/lib/python2.7/dist-packages/buildbot/master.py", line 289, in doReconfig
self.configFileName)
--- <exception caught here> ---
File "/usr/local/lib/python2.7/dist-packages/buildbot/config.py", line 156, in loadConfig
exec f in localDict
exceptions.SyntaxError: EOL while scanning string literal (master.cfg, line 103)
2015-08-14 18:40:46+0000 [-] error while parsing config file: EOL while scanning string literal (master.cfg, line 103) (traceback in logfile)
2015-08-14 18:40:46+0000 [-] reconfig aborted without making any changes
Reconfiguration failed. Please inspect the master.cfg file for errors,
correct them, then try 'buildbot reconfig' again.
This time, it’s clear that there was a mistake in the configuration. Luckily, the Buildbot master will ignore the wrong configuration and keep running with the previous configuration.
The message is clear enough, so open the configuration again, fix the error, and reconfig the master.
1.3.4. Your First Build¶
By now you’re probably thinking: “All this time spent and still not done a single build? What was the name of this project again?”
On the Builders page, click on the runtests link. You’ll see a builder page with a blue “force” button that brings up a dialog box for starting a build.
Click Start Build - there’s no need to fill in any of the fields in this case. Next, click on view in waterfall; you will see the build progressing on the waterfall page.
1.3.5. Enabling the IRC Bot¶
Buildbot includes an IRC bot that you can tell to join a channel and control in order to report on the status of buildbot.
Note
Security note: any user who has access to your IRC channel, or who can PM the bot, will be able to create or stop builds (bug #3377).
First, start an IRC client of your choice, connect to irc.freenode.net and join an empty channel. In this example we will use #buildbot-test, so go join that channel. (Note: please do not join the main buildbot channel!)
Edit master.cfg and look for the BUILDBOT SERVICES section. At the end of that section add the lines:
c['services'].append(reporters.IRC(host="irc.freenode.net", nick="bbtest",
                                   channels=["#buildbot-test"]))
Reconfigure the build master, then run:
grep -i irc master/twistd.log
The log output should contain lines like these:
2016-11-13 15:53:06+0100 [-] Starting factory <buildbot.reporters.irc.IrcStatusFactory instance at 0x7ff2b4b72710>
2016-11-13 15:53:19+0100 [IrcStatusBot,client] <buildbot.reporters.irc.IrcStatusBot object at 0x7ff2b5075750>: I have joined #buildbot-test
You should see the bot now joining in your IRC client. In your IRC channel, type:
bbtest: commands
to get a list of the commands the bot supports.
Let’s tell the bot to notify us of certain events. To learn which events we can be notified of, type:
bbtest: help notify
Now let’s set some event notifications:
<@lsblakk> bbtest: notify on started finished failure
< bbtest> The following events are being notified: ['started', 'failure', 'finished']
Now, go back to the web interface and force another build. Alternatively, ask the bot to force a build:
<@lsblakk> bbtest: force build --codebase= runtests
< bbtest> build #1 of runtests started
< bbtest> Hey! build runtests #1 is complete: Success [finished]
You can also see the new builds in the web interface.
The full documentation is available at IRC.
1.3.6. Setting Authorized Web Users¶
The default configuration allows everyone to perform any task like creating or stopping builds via the web interface. To restrict this to a user, look for:
c['www'] = dict(port=8010,
                plugins=dict(waterfall_view={}, console_view={}))
and append:
c['www']['authz'] = util.Authz(
    allowRules=[
        util.AnyEndpointMatcher(role="admins")
    ],
    roleMatchers=[
        util.RolesFromUsername(roles=['admins'], usernames=['Alice'])
    ]
)
c['www']['auth'] = util.UserPasswordAuth([('Alice', 'Password1')])
For more details, see Authentication plugins.
1.3.7. Debugging with Manhole¶
You can do some debugging by using manhole, an interactive Python shell. It exposes full access to the buildmaster’s account (including the ability to modify and delete files), so it should not be enabled with a weak or easily guessable password.
To use this you will need to install an additional package or two to your virtualenv:
cd ~/tmp/bb-master
source sandbox/bin/activate
pip install -U pip
pip install cryptography pyasn1
You will also need to generate an SSH host key for the Manhole server.
mkdir -p /data/ssh_host_keys
ckeygen -t rsa -f /data/ssh_host_keys/ssh_host_rsa_key
In your master.cfg find:
c = BuildmasterConfig = {}
Insert the following to enable debugging mode with manhole:
####### DEBUGGING
from buildbot import manhole
c['manhole'] = manhole.PasswordManhole("tcp:1234:interface=127.0.0.1", "admin", "passwd", ssh_hostkey_dir="/data/ssh_host_keys/")
After restarting the master, you can ssh into the master and get an interactive Python shell:
ssh -p1234 admin@127.0.0.1
# enter passwd at prompt
Note
The pyasn1-0.1.1 release has a bug which results in an exception similar to this on startup:
exceptions.TypeError: argument 2 must be long, not int
If you see this, the temporary solution is to install the previous version of pyasn1:
pip install pyasn1==0.0.13b
If you wanted to check which workers are connected and what builders those workers are assigned to you could do:
>>> master.workers.workers
{'example-worker': <Worker 'example-worker', current builders: runtests>}
Objects can be explored in more depth using dir(x) or the helper function show(x).
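For example, still inside the manhole session, you could inspect the master further (dir is the Python builtin; show is the helper mentioned above):
>>> dir(master)
>>> show(master)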
1.3.8. Adding a ‘try’ scheduler¶
Buildbot includes a way for developers to submit patches for testing without committing them to the source code control system. (This is really handy for projects that support several operating systems or architectures.)
To set this up, add the following lines to master.cfg:
from buildbot.scheduler import Try_Userpass
c['schedulers'] = []
c['schedulers'].append(Try_Userpass(
    name='try',
    builderNames=['runtests'],
    port=5555,
    userpass=[('sampleuser', 'samplepass')]))
Then you can submit changes using the try command.
Let’s try this out by making a one-line change to hello-world, say, to make it trace the tree by default:
git clone https://github.com/buildbot/hello-world.git hello-world-git
cd hello-world-git/hello
$EDITOR __init__.py
# change 'return "hello " + who' on line 6 to 'return "greets " + who'
Then run buildbot’s try command as follows:
cd ~/tmp/bb-master
source sandbox/bin/activate
buildbot try --connect=pb --master=127.0.0.1:5555 --username=sampleuser --passwd=samplepass --vc=git
This will do git diff for you and send the resulting patch to the server for build and test against the latest sources from Git.
Now go back to the waterfall page, click on the runtests link, and scroll down. You should see that another build has been started with your change (and stdout for the tests should be chock-full of parse trees as a result). The “Reason” for the job will be listed as “‘try’ job”, and the blamelist will be empty.
To make yourself show up as the author of the change, use the --who=emailaddr option on buildbot try to pass your email address.
To make a description of the change show up, use the --properties=comment="this is a comment" option on buildbot try.
To use ssh instead of a private username/password database, see Try_Jobdir.
1.4. Further Reading¶
See the following user-contributed tutorials for other highlights and ideas:
1.4.1. Buildbot in 5 minutes - a user-contributed tutorial¶
(Ok, maybe 10.)
Buildbot is really an excellent piece of software, however it can be a bit confusing for a newcomer (like me when I first started looking at it). Typically, at first sight it looks like a bunch of complicated concepts that make no sense and whose relationships with each other are unclear. After some time and some rereading, it all slowly starts to be more and more meaningful, until you finally say “oh!” and things start to make sense. Once you get there, you realize that the documentation is great, but only if you already know what it’s about.
This is what happened to me, at least. Here I’m going to (try to) explain things in a way that would have helped me more as a newcomer. The approach I’m taking is more or less the reverse of that used by the documentation, that is, I’m going to start from the components that do the actual work (the builders) and go up the chain from there up to change sources. I hope purists will forgive this unorthodoxy. Here I’m trying to clarify the concepts only, and will not go into the details of each object or property; the documentation explains those quite well.
1.4.1.1. Installation¶
I won’t cover the installation; both Buildbot master and worker are available as packages for the major distributions, and in any case the instructions in the official documentation are fine. This document will refer to Buildbot 0.8.5 which was current at the time of writing, but hopefully the concepts are not too different in other versions. All the code shown is of course Python code, and has to be included in the master.cfg master configuration file.
We won’t cover the basic things such as how to define the workers, project names, or other administrative information that is contained in that file; for that, again the official documentation is fine.
1.4.1.2. Builders: the workhorses¶
Since Buildbot is a tool whose goal is the automation of software builds, it makes sense to me to start from where we tell Buildbot how to build our software: the builder (or builders, since there can be more than one).
Simply put, a builder is an element that is in charge of performing some action or sequence of actions, normally something related to building software (for example, checking out the source, or make all), but it can also run arbitrary commands.
A builder is configured with a list of workers that it can use to carry out its task.
The other fundamental piece of information that a builder needs is, of course, the list of things it has to do (which will normally run on the chosen worker).
In Buildbot, this list of things is represented as a BuildFactory object, which is essentially a sequence of steps, each one defining a certain operation or command.
Enough talk, let’s see an example.
For this example, we are going to assume that our super software project can be built using a simple make all, and there is another target make packages that creates rpm, deb and tgz packages of the binaries. In the real world things are usually more complex (for example there may be a configure step, or multiple targets), but the concepts are the same; it will just be a matter of adding more steps to a builder, or creating multiple builders, although sometimes the resulting builders can be quite complex.
So to perform a manual build of our project we would type this from the command line (assuming we are at the root of the local copy of the repository):
$ make clean # clean remnants of previous builds
...
$ svn update
...
$ make all
...
$ make packages
...
# optional but included in the example: copy packages to some central machine
$ scp packages/*.rpm packages/*.deb packages/*.tgz someuser@somehost:/repository
...
Here we’re assuming the repository is SVN, but again the concepts are the same with git, mercurial or any other VCS.
Now, to automate this, we create a builder where each step is one of the commands we typed above. A step can be a shell command object, or a dedicated object that checks out the source code (there are various types for different repositories, see the docs for more info), or yet something else:
from buildbot.plugins import steps, util

# first, let's create the individual step objects

# step 1: make clean; this fails if the worker has no local copy, but
# is harmless and will only happen the first time
makeclean = steps.ShellCommand(name="make clean",
                               command=["make", "clean"],
                               description="make clean")

# step 2: svn update (here updates trunk, see the docs for more
# on how to update a branch, or make it more generic).
checkout = steps.SVN(baseURL='svn://myrepo/projects/coolproject/trunk',
                     mode="update",
                     username="foo",
                     password="bar",
                     haltOnFailure=True)

# step 3: make all
makeall = steps.ShellCommand(name="make all",
                             command=["make", "all"],
                             haltOnFailure=True,
                             description="make all")

# step 4: make packages
makepackages = steps.ShellCommand(name="make packages",
                                  command=["make", "packages"],
                                  haltOnFailure=True,
                                  description="make packages")

# step 5: upload packages to central server. This needs passwordless ssh
# from the worker to the server (set it up in advance as part of worker setup)
uploadpackages = steps.ShellCommand(name="upload packages",
                                    description="upload packages",
                                    command="scp packages/*.rpm packages/*.deb packages/*.tgz someuser@somehost:/repository",
                                    haltOnFailure=True)

# create the build factory and add the steps to it
f_simplebuild = util.BuildFactory()
f_simplebuild.addStep(makeclean)
f_simplebuild.addStep(checkout)
f_simplebuild.addStep(makeall)
f_simplebuild.addStep(makepackages)
f_simplebuild.addStep(uploadpackages)

# finally, declare the list of builders. In this case, we only have one builder
c['builders'] = [
    util.BuilderConfig(name="simplebuild",
                       workernames=['worker1', 'worker2', 'worker3'],
                       factory=f_simplebuild)
]
So our builder is called simplebuild and can run on any of worker1, worker2 and worker3.
If our repository has other branches besides trunk, we could create one or more additional builders to build them; in the example, only the checkout step would be different, in that it would need to check out the specific branch. Depending on how exactly those branches have to be built, the shell commands may be recycled, or new ones would have to be created if they are different in the branch. You get the idea.
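To make that concrete, here is a minimal sketch of a builder for a hypothetical branches/7.2 branch, reusing the shell steps defined above; only the checkout differs, and the names checkout72 and simplebuild-72 are ours for illustration:
# sketch: a builder for the 7.2 branch; only the checkout step changes
checkout72 = steps.SVN(baseURL='svn://myrepo/projects/coolproject/branches/7.2',
                       mode="update",
                       username="foo",
                       password="bar",
                       haltOnFailure=True)

f_simplebuild72 = util.BuildFactory()
f_simplebuild72.addStep(makeclean)       # the shell steps can be reused as-is
f_simplebuild72.addStep(checkout72)
f_simplebuild72.addStep(makeall)
f_simplebuild72.addStep(makepackages)
f_simplebuild72.addStep(uploadpackages)

c['builders'].append(
    util.BuilderConfig(name="simplebuild-72",
                       workernames=['worker1', 'worker2', 'worker3'],
                       factory=f_simplebuild72))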
The important thing is that all the builders be named differently and all be added to the c['builders'] value (as can be seen above, it is a list of BuilderConfig objects).
Of course the type and number of steps will vary depending on the goal; for example, to just check that a commit doesn’t break the build, we could include just the steps up to make all. Or we could have a builder that performs a more thorough test by also doing make test or other targets. You get the idea.
Note that at each step except the very first we use haltOnFailure=True, because it would not make sense to execute a step if the previous one failed (ok, it wouldn’t be needed for the last step, but it’s harmless and protects us if one day we add another step after it).
1.4.1.3. Schedulers¶
Now this is all nice and dandy, but who tells the builder (or builders) to run, and when? This is the job of the scheduler, which is a fancy name for an element that waits for some event to happen, and when it does, based on that information decides whether and when to run a builder (and which one or ones). There can be more than one scheduler. I’m being purposely vague here because the possibilities are almost endless and highly dependent on the actual setup, build purposes, source repository layout and other elements.
So a scheduler needs to be configured with two main pieces of information: on one hand, which events to react to, and on the other hand, which builder or builders to trigger when those events are detected. (It’s more complex than that, but if you understand this, you can get the rest of the details from the docs).
A simple type of scheduler may be a periodic scheduler: when a configurable amount of time has passed, run a certain builder (or builders). In our example, that’s how we would trigger a build every hour:
from buildbot.plugins import schedulers

# define the periodic scheduler
hourlyscheduler = schedulers.Periodic(name="hourly",
                                      builderNames=["simplebuild"],
                                      periodicBuildTimer=3600)

# define the available schedulers
c['schedulers'] = [hourlyscheduler]
That’s it.
Every hour this hourly scheduler will run the simplebuild builder. If we have more than one builder that we want to run every hour, we can just add them to the builderNames list when defining the scheduler and they will all be run. Or, since multiple schedulers are allowed, other schedulers can be defined and added to c['schedulers'] in the same way.
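For instance, assuming we also had the simplebuild-72 builder sketched earlier, a single periodic scheduler could drive both:
hourlyscheduler = schedulers.Periodic(name="hourly",
                                      builderNames=["simplebuild", "simplebuild-72"],
                                      periodicBuildTimer=3600)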
Other types of schedulers exist; in particular, there are schedulers that can be more dynamic than the periodic one. The typical dynamic scheduler is one that learns about changes in a source repository (generally because some developer checks in some change), and triggers one or more builders in response to those changes. Let’s assume for now that the scheduler “magically” learns about changes in the repository (more about this later); here’s how we would define it:
from buildbot.plugins import schedulers, util

# define the dynamic scheduler
trunkchanged = schedulers.SingleBranchScheduler(name="trunkchanged",
                                                change_filter=util.ChangeFilter(branch=None),
                                                treeStableTimer=300,
                                                builderNames=["simplebuild"])

# define the available schedulers
c['schedulers'] = [trunkchanged]
This scheduler receives changes happening to the repository, and among all of them, pays attention to those happening in “trunk” (that’s what branch=None means). In other words, it filters the changes to react only to those it’s interested in. When such changes are detected, and the tree has been quiet for 5 minutes (300 seconds), it runs the simplebuild builder. The treeStableTimer helps in those situations where commits tend to happen in bursts, which would otherwise result in multiple build requests queuing up.
What if we want to act on two branches (say, trunk and 7.2)? First we create two builders, one for each branch (see the builders paragraph above), then we create two dynamic schedulers:
from buildbot.plugins import schedulers, util

# define the dynamic scheduler for trunk
trunkchanged = schedulers.SingleBranchScheduler(name="trunkchanged",
                                                change_filter=util.ChangeFilter(branch=None),
                                                treeStableTimer=300,
                                                builderNames=["simplebuild-trunk"])

# define the dynamic scheduler for the 7.2 branch
branch72changed = schedulers.SingleBranchScheduler(name="branch72changed",
                                                   change_filter=util.ChangeFilter(branch='branches/7.2'),
                                                   treeStableTimer=300,
                                                   builderNames=["simplebuild-72"])

# define the available schedulers
c['schedulers'] = [trunkchanged, branch72changed]
The syntax of the change filter is VCS-dependent (above is for SVN), but again once the idea is clear, the documentation has all the details.
Another feature of the scheduler is that it can be told which changes, within those it’s paying attention to, are important and which are not. For example, there may be a documentation directory in the branch the scheduler is watching, but changes under that directory should not trigger a build of the binary. This finer filtering is implemented by means of the fileIsImportant argument to the scheduler (full details in the docs and - alas - in the sources).
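As a sketch of the idea (the docs/ path and the function name are made up for illustration), fileIsImportant takes a callable that receives each Change and returns whether it should trigger a build:
# ignore commits that only touch files under docs/
def file_is_important(change):
    # change.files lists the repository paths touched by the commit
    return any(not f.startswith("docs/") for f in change.files)

trunkchanged = schedulers.SingleBranchScheduler(name="trunkchanged",
                                                change_filter=util.ChangeFilter(branch=None),
                                                treeStableTimer=300,
                                                fileIsImportant=file_is_important,
                                                builderNames=["simplebuild"])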
1.4.1.4. Change sources¶
Earlier we said that a dynamic scheduler “magically” learns about changes; the final piece of the puzzle are change sources, which are precisely the elements in Buildbot whose task is to detect changes in the repository and communicate them to the schedulers. Note that periodic schedulers don’t need a change source, since they only depend on elapsed time; dynamic schedulers, on the other hand, do need a change source.
A change source is generally configured with information about a source repository (which is where changes happen); a change source can watch changes at different levels in the hierarchy of the repository, so for example it is possible to watch the whole repository or a subset of it, or just a single branch. This determines the extent of the information that is passed down to the schedulers.
There are many ways a change source can learn about changes; it can periodically poll the repository for changes, or the VCS can be configured (for example through hook scripts triggered by commits) to push changes into the change source. While these two methods are probably the most common, they are not the only possibilities; it is possible for example to have a change source detect changes by parsing some email sent to a mailing list when a commit happens, and yet other methods exist. The manual again has the details.
To complete our example, here’s a change source that polls a SVN repository every 2 minutes:
from buildbot.plugins import changes, util

svnpoller = changes.SVNPoller(repourl="svn://myrepo/projects/coolproject",
                              svnuser="foo",
                              svnpasswd="bar",
                              pollinterval=120,
                              split_file=util.svn.split_file_branches)

c['change_source'] = svnpoller
This poller watches the whole “coolproject” section of the repository, so it will detect changes in all the branches. We could have said:
repourl = "svn://myrepo/projects/coolproject/trunk"
or:
repourl = "svn://myrepo/projects/coolproject/branches/7.2"
to watch only a specific branch.
To watch another project, you need to create another change source – and you need to filter changes by project. For instance, when you add a change source watching project ‘superproject’ to the above example, you need to change:
trunkchanged = schedulers.SingleBranchScheduler(name="trunkchanged",
                                                change_filter=util.ChangeFilter(branch=None),
                                                # ...
                                                )
to e.g.:
trunkchanged = schedulers.SingleBranchScheduler(name="trunkchanged",
                                                change_filter=util.ChangeFilter(project="coolproject", branch=None),
                                                # ...
                                                )
otherwise coolproject will also be built when there’s a change in superproject.
Since we’re watching more than one branch, we need a method to tell in which branch the change occurred when we detect one. This is what the split_file argument does: it takes a callable that Buildbot will call to do the job. The split_file_branches function, which comes with Buildbot, is designed for exactly this purpose, so that’s what the example above uses. And of course this is all SVN-specific, but there are pollers for all the popular VCSs.
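To illustrate the contract that split_file implements, here is a hand-rolled sketch roughly equivalent to split_file_branches (an illustration, not the library’s exact code): given a repository-relative path, it returns a (branch, path-within-branch) tuple, or None to ignore the change:
def my_split_file(path):
    pieces = path.split('/')
    if pieces[0] == 'trunk':
        # None denotes trunk
        return (None, '/'.join(pieces[1:]))
    if pieces[0] == 'branches' and len(pieces) > 2:
        return ('/'.join(pieces[:2]), '/'.join(pieces[2:]))
    # outside trunk and branches: not a change we care about
    return None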
But note: if you have many projects, branches, and builders, it probably pays not to hardcode all the schedulers and builders in the configuration, but to generate them dynamically, starting from a list of all projects, branches, targets, etc. and using loops to generate all possible combinations (or only the needed ones, depending on the specific setup), as explained in the documentation chapter about Customization.
1.4.1.5. Reporters¶
Now that the basics are in place, let’s go back to the builders, which is where the real work happens. Reporters are simply the means Buildbot uses to inform the world about what’s happening, that is, how builders are doing. There are many reporters: a mail notifier, an IRC notifier, and others. They are described fairly well in the manual.
One thing I’ve found useful is the ability to pass a domain name as the lookup argument to a MailNotifier, which allows you to take an unqualified username as it appears in the SVN change and create a valid email address by appending the given domain name to it:
from buildbot.plugins import reporters

# if jsmith commits a change, mail for the build is sent to jsmith@example.org
notifier = reporters.MailNotifier(fromaddr="buildbot@example.org",
                                  sendToInterestedUsers=True,
                                  lookup="example.org")
c['services'].append(notifier)
The mail notifier can be customized at will by means of the messageFormatter argument, which is a class that Buildbot calls to format the body of the email, and to which it makes available lots of information about the build.
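As a hedged sketch (the template text is ours, and buildername and summary are among the variables recent Buildbot versions expose to Jinja2 templates; check the Reporters documentation for the exact context), a custom formatter could look like:
from buildbot.plugins import reporters

formatter = reporters.MessageFormatter(
    template="Build {{ buildername }}: {{ summary }}",
    template_type='plain')

notifier = reporters.MailNotifier(fromaddr="buildbot@example.org",
                                  sendToInterestedUsers=True,
                                  lookup="example.org",
                                  messageFormatter=formatter)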
For more details, look into the Reporters section of the Buildbot manual.
1.4.1.6. Conclusion¶
Please note that this article has just scratched the surface; given the complexity of the task of build automation, the possibilities are almost endless. So there’s much, much more to say about Buildbot. However, hopefully this is a preparation step before reading the official manual. Had I found an explanation like the one above when I was approaching Buildbot, I’d have had to read the manual just once, rather than multiple times. Hope this can help someone else.
(Thanks to Davide Brini for permission to include this tutorial, derived from one he originally posted at http://backreference.org .)
This is the Buildbot manual for Buildbot version 2.4.1.
2. Buildbot Manual¶
2.1. Introduction¶
Buildbot is a system to automate the compile/test cycle required by most software projects to validate code changes. By automatically rebuilding and testing the tree each time something has changed, build problems are pinpointed quickly, before other developers are inconvenienced by the failure. The guilty developer can be identified and harassed without human intervention. By running the builds on a variety of platforms, developers who do not have the facilities to test their changes everywhere before checkin will at least know shortly afterwards whether they have broken the build or not. Warning counts, lint checks, image size, compile time, and other build parameters can be tracked over time, are more visible, and are therefore easier to improve.
The overall goal is to reduce tree breakage and provide a platform to run tests or code-quality checks that are too annoying or pedantic for any human to waste their time with. Developers get immediate (and potentially public) feedback about their changes, encouraging them to be more careful about testing before checkin.
Features:
- run builds on a variety of worker platforms
- arbitrary build process: handles projects using C, Python, whatever
- minimal host requirements: Python and Twisted
- workers can be behind a firewall if they can still do checkout
- status delivery through web page, email, IRC, other protocols
- track builds in progress, provide estimated completion time
- flexible configuration by subclassing generic build process classes
- debug tools to force a new build, submit fake Changes, query worker status
- released under the GPL
2.1.1. History and Philosophy¶
The Buildbot was inspired by a similar project built for a development team writing a cross-platform embedded system.
The various components of the project were supposed to compile and run on several flavors of unix (linux, solaris, BSD), but individual developers had their own preferences and tended to stick to a single platform.
From time to time, incompatibilities would sneak in (some unix platforms want to use string.h, some prefer strings.h), and then the tree would compile for some developers but not others.
The Buildbot was written to automate the human process of walking into the office, updating a tree, compiling (and discovering the breakage), finding the developer at fault, and complaining to them about the problem they had introduced.
With multiple platforms it was difficult for developers to do the right thing (compile their potential change on all platforms); the Buildbot offered a way to help.
Another problem was when programmers would change the behavior of a library without warning its users, or change internal aspects that other code was (unfortunately) depending upon. Adding unit tests to the codebase helps here: if an application’s unit tests pass despite changes in the libraries it uses, you can have more confidence that the library changes haven’t broken anything. Many developers complained that the unit tests were inconvenient or took too long to run: having the Buildbot run them reduces the developer’s workload to a minimum.
In general, having more visibility into the project is always good, and automation makes it easier for developers to do the right thing. When everyone can see the status of the project, developers are encouraged to keep the tree in good working order. Unit tests that aren’t run on a regular basis tend to suffer from bitrot just like code does: exercising them on a regular basis helps to keep them functioning and useful.
The current version of the Buildbot is additionally targeted at distributed free-software projects, where resources and platforms are only available when provided by interested volunteers. The workers are designed to require an absolute minimum of configuration, reducing the effort a potential volunteer needs to expend to be able to contribute a new test environment to the project. The goal is for anyone who wishes that a given project would run on their favorite platform should be able to offer that project a worker, running on that platform, where they can verify that their portability code works, and keeps working.
2.1.2. System Architecture¶
The Buildbot consists of a single buildmaster and one or more workers, connected in a star topology. The buildmaster makes all decisions about what, when, and how to build. It sends commands to be run on the workers, which simply execute the commands and return the results. (Certain steps involve more local decision making, where the overhead of sending a lot of commands back and forth would be inappropriate, but in general the buildmaster is responsible for everything.)
The buildmaster is usually fed Changes by some sort of version control system (Change Sources and Changes), which may cause builds to be run. As the builds are performed, various status messages are produced, which are then sent to any registered Reporters.
The buildmaster is configured and maintained by the buildmaster admin, who is generally the project team member responsible for build process issues. Each worker is maintained by a worker admin, who does not need to be quite as involved. Generally, workers are run by anyone who has an interest in seeing the project work well on their favorite platform.
2.1.2.1. Worker Connections¶
The workers are typically run on a variety of separate machines, at least one per platform of interest. These machines connect to the buildmaster over a TCP connection to a publicly-visible port. As a result, the workers can live behind a NAT box or similar firewalls, as long as they can get to the buildmaster. The TCP connections are initiated by the worker and accepted by the buildmaster, but commands and results travel both ways within this connection. The buildmaster is always in charge, so all commands travel exclusively from the buildmaster to the worker.
To perform builds, the workers must typically obtain source code from a CVS/SVN/etc repository. Therefore they must also be able to reach the repository. The buildmaster provides instructions for performing builds, but does not provide the source code itself.
2.1.2.2. Buildmaster Architecture¶
The buildmaster consists of several pieces:
- Change Sources, which create a Change object each time something is modified in the VC repository. Most ChangeSources listen for messages from a hook script of some sort. Some sources actively poll the repository on a regular basis. All Changes are fed to the schedulers.
- Schedulers, which decide when builds should be performed. They collect Changes into BuildRequests, which are then queued for delivery to Builders until a worker is available.
- Builders, which control exactly how each build is performed (with a series of BuildSteps, configured in a BuildFactory). Each Build is run on a single worker.
- Status plugins, which deliver information about the build results through protocols like HTTP, mail, and IRC.
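In master.cfg, these pieces correspond directly to configuration keys; schematically (values elided):
c['change_source'] = [...]   # Change Sources
c['schedulers'] = [...]      # Schedulers
c['builders'] = [...]        # Builders
c['services'] = [...]        # Reporters / status plugins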
Each Builder is configured with a list of Workers that it will use for its builds. These workers are expected to behave identically: the only reason to use multiple Workers for a single Builder is to provide a measure of load-balancing.
Within a single Worker, each Builder creates its own WorkerForBuilder instance. These WorkerForBuilders operate independently from each other. Each gets its own base directory to work in. It is quite common to have many Builders sharing the same worker.
For example, there might be two workers: one for i386, and a second for PowerPC.
There may then be a pair of Builders that do a full compile/test run, one for each architecture, and a lone Builder that creates snapshot source tarballs if the full builders complete successfully. The full builders would each run on a single worker, whereas the tarball creation step might run on either worker (since the platform doesn’t matter when creating source tarballs).
In this case, the mapping would look like:
Builder(full-i386) -> Workers(worker-i386)
Builder(full-ppc) -> Workers(worker-ppc)
Builder(source-tarball) -> Workers(worker-i386, worker-ppc)
and each Worker would have two WorkerForBuilders inside it, one for a full builder, and a second for the source-tarball builder.
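Expressed as configuration, the mapping above would look roughly like this sketch (the factory objects f_full_i386, f_full_ppc and f_tarball are assumed to be defined elsewhere):
c['builders'] = [
    util.BuilderConfig(name="full-i386", workernames=["worker-i386"], factory=f_full_i386),
    util.BuilderConfig(name="full-ppc", workernames=["worker-ppc"], factory=f_full_ppc),
    util.BuilderConfig(name="source-tarball",
                       workernames=["worker-i386", "worker-ppc"],
                       factory=f_tarball),
]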
Once a WorkerForBuilder is available, the Builder pulls one or more BuildRequests off its incoming queue. (It may pull more than one if it determines that it can merge the requests together; for example, there may be multiple requests to build the current HEAD revision.) These requests are merged into a single Build instance, which includes the SourceStamp that describes what exact version of the source code should be used for the build. The Build is then randomly assigned to a free WorkerForBuilder and the build begins. The behaviour when BuildRequests are merged can be customized; see Collapsing Build Requests.
2.1.2.3. Status Delivery Architecture¶
The buildmaster maintains a central Status object, to which various status plugins are connected. Through this Status object, a full hierarchy of build status objects can be obtained.
The configuration file controls which status plugins are active. Each status plugin gets a reference to the top-level Status object. From there they can request information on each Builder, Build, Step, and LogFile. This query-on-demand interface is used by the html.Waterfall plugin to create the main status page each time a web browser hits the main URL.
The status plugins can also subscribe to hear about new Builds as they occur: this is used by the MailNotifier to create new email messages for each recently-completed Build.
The Status object records the status of old builds on disk in the buildmaster’s base directory. This allows it to return information about historical builds.
There are also status objects that correspond to Schedulers and Workers. These allow status plugins to report information about upcoming builds, and the online/offline status of each worker.
2.1.3. Control Flow¶
A day in the life of the Buildbot:
- A developer commits some source code changes to the repository. A hook script or commit trigger of some sort sends information about this change to the buildmaster through one of its configured Change Sources. This notification might arrive via email, or over a network connection (either initiated by the buildmaster as it subscribes to changes, or by the commit trigger as it pushes Changes towards the buildmaster). The Change contains information about who made the change, what files were modified, which revision contains the change, and any checkin comments.
- The buildmaster distributes this change to all of its configured schedulers. Any important changes cause the tree-stable-timer to be started, and the Change is added to a list of those that will go into a new Build. When the timer expires, a Build is started on each of a set of configured Builders, all compiling/testing the same source code. Unless configured otherwise, all Builds run in parallel on the various workers.
- The Build consists of a series of Steps. Each Step causes some number of commands to be invoked on the remote worker associated with that Builder. The first step is almost always to perform a checkout of the appropriate revision from the same VC system that produced the Change. The rest generally perform a compile and run unit tests. As each Step runs, the worker reports back command output and return status to the buildmaster.
- As the Build runs, status messages like “Build Started”, “Step Started”, “Build Finished”, etc, are published to a collection of Status Targets. One of these targets is usually the HTML Waterfall display, which shows a chronological list of events, and summarizes the results of the most recent build at the top of each column. Developers can periodically check this page to see how their changes have fared. If they see red, they know that they’ve made a mistake and need to fix it. If they see green, they know that they’ve done their duty and don’t need to worry about their change breaking anything.
- If a MailNotifier status target is active, the completion of a build will cause email to be sent to any developers whose Changes were incorporated into this Build. The MailNotifier can be configured to only send mail upon failing builds, or for builds which have just transitioned from passing to failing. Other status targets can provide similar real-time notification via different communication channels, like IRC.
2.2. Installation¶
2.2.1. Buildbot Components¶
Buildbot is shipped in two components: the buildmaster (called buildbot for legacy reasons) and the worker.
The worker component has far fewer requirements, and is more broadly compatible than the buildmaster.
You will need to carefully pick the environment in which to run your buildmaster, but the worker should be able to run just about anywhere.
It is possible to install the buildmaster and worker on the same system, although for anything but the smallest installation this arrangement will not be very efficient.
2.2.2. Requirements¶
2.2.2.1. Common Requirements¶
At a bare minimum, you’ll need the following for both the buildmaster and a worker:
Python: https://www.python.org
Buildbot master works with Python-3.5+. Buildbot worker works with Python 2.7, or Python 3.5+.
Note
This should be a “normal” build of Python. Builds of Python with debugging enabled or other unusual build parameters are likely to cause incorrect behavior.
Twisted: http://twistedmatrix.com
Buildbot requires Twisted-17.9.0 or later on the master and the worker. In upcoming versions of Buildbot, a newer Twisted will also be required on the worker. As always, the most recent version is recommended.
Of course, your project’s build process will impose additional requirements on the workers. These hosts must have all the tools necessary to compile and test your project’s source code.
Windows Support¶
Buildbot - both master and worker - runs well natively on Windows. The worker runs well on Cygwin, but because of problems with SQLite on Cygwin, the master does not.
Buildbot’s Windows testing is limited to the most recent Twisted and Python versions. For best results, use the most recent available versions of these libraries on Windows.
Pywin32: http://sourceforge.net/projects/pywin32/
Twisted requires PyWin32 in order to spawn processes on Windows.
2.2.2.2. Buildmaster Requirements¶
Note that all of these requirements aside from SQLite can easily be installed from the Python package repository, PyPI.
sqlite3: http://www.sqlite.org
Buildbot requires a database to store its state, and by default uses SQLite. Version 3.7.0 or higher is recommended, although Buildbot will run down to 3.6.16 – at the risk of “Database is locked” errors. The minimum version is 3.4.0, below which parallel database queries and schema introspection fail.
Please note that Python ships with sqlite3 by default since Python 2.6.
If you configure a different database engine, then SQLite is not required; however, note that Buildbot’s own unit tests require SQLite.
Jinja2: http://jinja.pocoo.org/
Buildbot requires Jinja version 2.1 or higher.
Jinja2 is a general purpose templating language and is used by Buildbot to generate the HTML output.
SQLAlchemy: http://www.sqlalchemy.org/
Buildbot requires SQLAlchemy version 1.1.0 or higher. SQLAlchemy allows Buildbot to build database schemas and queries for a wide variety of database systems.
SQLAlchemy-Migrate: https://sqlalchemy-migrate.readthedocs.io/en/latest/
Buildbot requires SQLAlchemy-Migrate version 0.9.0 or higher. Buildbot uses SQLAlchemy-Migrate to manage schema upgrades from version to version.
Python-Dateutil: http://labix.org/python-dateutil
Buildbot requires Python-Dateutil in version 1.5 or higher (the last version to support Python-2.x). This is a small, pure-Python library.
Autobahn:
The master requires Autobahn version 0.16.0 or higher.
2.2.3. Installing the code¶
2.2.3.1. The Buildbot Packages¶
Buildbot comes in several parts: buildbot (the buildmaster), buildbot-worker (the worker), buildbot-www, and several web plugins such as buildbot-waterfall-view.
The worker and buildmaster can be installed individually or together. The base web (buildbot.www) and web plugins are required to run a master with a web interface (the common configuration).
2.2.3.2. Installation From PyPI¶
The preferred way to install Buildbot is using pip.
For the master:
pip install buildbot
and for the worker:
pip install buildbot-worker
When using pip rather than a distribution-specific package manager (e.g. apt-get or ports), it is easier to choose exactly which version you want to use. Installing via a distribution-specific package manager may be more convenient, but note that it may provide an earlier version than what is available via pip.
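For example, to pin a specific, known-good version with pip (the version number here is purely illustrative):
pip install 'buildbot==2.4.1'
pip install 'buildbot-worker==2.4.1'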
If you plan to use TLS or SSL in the master configuration (e.g. to fetch resources over HTTPS using twisted.web.client), you need to install Buildbot with the tls extras:
pip install buildbot[tls]
2.2.3.3. Installation From Tarballs¶
Buildbot master and buildbot-worker are installed using the standard Python distutils process.
For either component, after unpacking the tarball, the process is:
python setup.py build
python setup.py install
where the install step may need to be done as root.
This will put the bulk of the code somewhere like /usr/lib/pythonx.y/site-packages/buildbot. It will also install the buildbot command-line tool in /usr/bin/buildbot.
If the environment variable $NO_INSTALL_REQS is set to 1, then setup.py will not try to install Buildbot’s requirements.
This is usually only useful when building a Buildbot package.
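For example, a packaging script might run the install step like this (a sketch; --root is the standard distutils staging option):
NO_INSTALL_REQS=1 python setup.py install --root=/tmp/package-root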
To test this, shift to a different directory (like /tmp), and run:
buildbot --version
# or
buildbot-worker --version
If it shows you the versions of Buildbot and Twisted, the install went ok.
If it says “no such command” or it gets an ImportError when it tries to load the libraries, then something went wrong. pydoc buildbot is another useful diagnostic tool.
Windows users will find these files in other places.
You will need to make sure that Python can find the libraries, and will probably find it convenient to have buildbot on your PATH.
2.2.3.4. Installation in a Virtualenv¶
If you cannot or do not wish to install the buildbot into a site-wide location like /usr or /usr/local, you can also install it into the account’s home directory or any other location using a tool like virtualenv.
2.2.3.5. Running Buildbot’s Tests (optional)¶
If you wish, you can run the buildbot unit test suite.
First, ensure you have the mock Python module installed from PyPI.
You must not be using a Python wheels packaged version of Buildbot, or have specified the bdist_wheel command when building; the test suite is not included with the PyPI packaged version.
The mock module is not required for ordinary Buildbot operation - only to run the tests.
Note that this is not the same as the Fedora mock package!
You can check with
python -mmock
Then, run the tests:
PYTHONPATH=. trial buildbot.test
# or
PYTHONPATH=. trial buildbot_worker.test
Nothing should fail, although a few might be skipped.
If any of the tests fail for reasons other than a missing mock, you should stop and investigate the cause before continuing the installation process, as it will probably be easier to track down the bug early. In most cases, the problem is incorrectly installed Python modules or a badly configured PYTHONPATH.
This may be a good time to contact the Buildbot developers for help.
Caution
Buildbot no longer supports Python 2.7 on the Buildbot master.
2.2.4. Buildmaster Setup¶
2.2.4.1. Creating a buildmaster¶
As you learned earlier (System Architecture), the buildmaster runs on a central host (usually one that is publicly visible, so everybody can check on the status of the project), and controls all aspects of the buildbot system.
You will probably wish to create a separate user account for the buildmaster, perhaps named buildmaster. Do not run the buildmaster as root!
You need to choose a directory for the buildmaster, called the basedir.
This directory will be owned by the buildmaster.
It will contain configuration, the database, and status information - including logfiles.
On a large buildmaster this directory will see a lot of activity, so it should be on a disk with adequate space and speed.
Once you’ve picked a directory, use the buildbot create-master command to create the directory and populate it with startup files:
buildbot create-master -r basedir
You will need to create a configuration file before starting the buildmaster.
Most of the rest of this manual is dedicated to explaining how to do this.
A sample configuration file is placed in the working directory, named master.cfg.sample, which can be copied to master.cfg and edited to suit your purposes.
(Internal details: This command creates a file named buildbot.tac that contains all the state necessary to create the buildmaster. Twisted has a tool called twistd which can use this .tac file to create and launch a buildmaster instance. twistd takes care of logging and daemonization (running the program in the background). /usr/bin/buildbot is a front end which runs twistd for you.)
Your master will need a database to store information about your builds and its configuration. By default, the sqlite3 backend will be used; this requires no extra configuration or software, and all information will be stored in the file state.sqlite. Buildbot, however, supports multiple backends; see Using A Database Server for more options.
Buildmaster Options¶
This section lists options to the create-master command. You can also type buildbot create-master --help for an up-to-the-moment summary.
--force¶
This option allows re-use of an existing directory.
--no-logrotate¶
This disables the internal log management mechanism. With this option the master does not override the default logfile name and its behaviour, making it possible to control those with the command-line options of the twistd daemon.
--relocatable¶
This creates a “relocatable” buildbot.tac, which uses relative paths instead of absolute paths, so that the buildmaster directory can be moved about.
--config¶
The name of the configuration file to use. This configuration file need not reside in the buildmaster directory.
--log-size¶
This is the size, in bytes, at which the Twisted log files will be rotated. The default is 10MiB.
--log-count¶
This is the number of log rotations to keep around. You can either specify a number or None to keep all twistd.log files around. The default is 10.
--db¶
The database that the Buildmaster should use. Note that the same value must be added to the configuration file.
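As a sketch, assuming a MySQL database and account have already been created, the same URL is passed to create-master and then repeated in master.cfg (replace <password> as appropriate):
buildbot create-master --db='mysql://bbuser:<password>@localhost/buildbot' basedir
# master.cfg must then contain the matching line:
# c['db_url'] = 'mysql://bbuser:<password>@localhost/buildbot'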
2.2.4.2. Upgrading an Existing Buildmaster¶
If you have just installed a new version of the Buildbot code, and you have buildmasters that were created using an older version, you’ll need to upgrade these buildmasters before you can use them. The upgrade process adds and modifies files in the buildmaster’s base directory to make it compatible with the new code.
buildbot upgrade-master basedir
This command will also scan your master.cfg file for incompatibilities (by loading it and printing any errors or deprecation warnings that occur).
Each buildbot release tries to be compatible with configurations that worked cleanly (i.e. without deprecation warnings) on the previous release: any functions or classes that are to be removed will first be deprecated in a release, to give you a chance to start using the replacement.
The upgrade-master command is idempotent: it is safe to run it multiple times. After each upgrade of the Buildbot code, you should use upgrade-master on all your buildmasters.
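A typical upgrade might therefore look like this (a sketch, assuming a pip-based install and a master in ~/basedir):
buildbot stop ~/basedir
pip install --upgrade buildbot
buildbot upgrade-master ~/basedir
buildbot start ~/basedir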
Warning
The upgrade-master command may perform database schema modifications. To avoid any data loss or corruption, it should not be interrupted. As a safeguard, it ignores all signals except SIGKILL.
In general, Buildbot workers and masters can be upgraded independently, although some new features will not be available, depending on the master and worker versions.
Beyond this general information, read all of the sections below that apply to versions through which you are upgrading.
Version-specific Notes¶
See Upgrading from Buildbot 0.8.x for a guide to upgrading from 0.8.x to 0.9.x
The 0.7.6 release introduced the public_html/ directory, which contains index.html and other files served by the WebStatus and Waterfall status displays. The upgrade-master command will create these files if they do not already exist. It will not modify existing copies, but it will write a new copy in e.g. index.html.new if the new version differs from the version that already exists.
Buildbot-0.8.0 introduces a database backend, which is SQLite by default.
The upgrade-master command will automatically create and populate this database with the changes the buildmaster has seen. Note that, as of this release, build history is not contained in the database, and is thus not migrated.
If you are not using sqlite, you will need to add an entry into your master.cfg to reflect the database version you are using. The upgrade process does not edit your master.cfg for you.
So something like:
# for using mysql:
c['db_url'] = 'mysql://bbuser:<password>@localhost/buildbot'
Once the parameter has been added, invoke upgrade-master. This will extract the DB url from your configuration file.
buildbot upgrade-master
See Database Specification for more options to specify a database.
2.2.5. Worker Setup¶
2.2.5.1. Creating a worker¶
Typically, you will be adding a worker to an existing buildmaster, to provide additional architecture coverage. The Buildbot administrator will give you several pieces of information necessary to connect to the buildmaster. You should also be somewhat familiar with the project being tested, so you can troubleshoot build problems locally.
The Buildbot exists to make sure that the project’s stated “how to build it” process actually works. To this end, the worker should run in an environment just like that of your regular developers. Typically the project build process is documented somewhere (README, INSTALL, etc.), in a document that should mention all library dependencies and contain a basic set of build instructions. This document will be useful as you configure the host and account in which the worker runs.
This document will be useful as you configure the host and account in which the worker runs.
Here’s a good checklist for setting up a worker:
- Set up the account
It is recommended (although not mandatory) to set up a separate user account for the worker. This account is frequently named buildbot or worker. This serves to isolate your personal working environment from that of the worker’s, and helps to minimize the security threat posed by letting possibly-unknown contributors run arbitrary code on your system. The account should have a minimum of fancy init scripts.
- Install the Buildbot code
Follow the instructions given earlier (Installing the code). If you use a separate worker account, and you didn’t install the Buildbot code to a shared location, then you will need to install it with --home=~ for each account that needs it.
- Set up the host
Make sure the host can actually reach the buildmaster. Usually the buildmaster is running a status webserver on the same machine, so simply point your web browser at it and see if you can get there. Install whatever additional packages or libraries the project’s INSTALL document advises. (or not: if your worker is supposed to make sure that building without optional libraries still works, then don’t install those libraries.)
Again, these libraries don’t necessarily have to be installed to a site-wide shared location, but they must be available to your build process. Accomplishing this is usually very specific to the build process, so installing them to /usr or /usr/local is usually the best approach.
- Test the build process
Follow the instructions in the INSTALL document, in the worker’s account. Perform a full CVS (or whatever) checkout, configure, make, run tests, etc. Confirm that the build works without manual fussing. If it doesn’t work when you do it by hand, it will be unlikely to work when the Buildbot attempts to do it in an automated fashion.
- Choose a base directory
This should be somewhere in the worker’s account, typically named after the project which is being tested. The worker will not touch any file outside of this directory. Something like ~/Buildbot or ~/Workers/fooproject is appropriate.
- Get the buildmaster host/port, botname, and password
When the Buildbot admin configures the buildmaster to accept and use your worker, they will provide you with the following pieces of information:
- your worker’s name
- the password assigned to your worker
- the hostname and port number of the buildmaster, i.e. http://buildbot.example.org:8007
- Create the worker
Now run the ‘worker’ command as follows:
buildbot-worker create-worker BASEDIR MASTERHOST:PORT WORKERNAME PASSWORD
This will create the base directory and a collection of files inside, including the buildbot.tac file that contains all the information you passed to the buildbot command.
- Fill in the hostinfo files
When it first connects, the worker will send a few files up to the buildmaster which describe the host that it is running on. These files are presented on the web status display so that developers have more information to reproduce any test failures that are witnessed by the Buildbot. There are sample files in the info subdirectory of the Buildbot’s base directory. You should edit these to correctly describe you and your host.
BASEDIR/info/admin should contain your name and email address. This is the worker admin address, and will be visible from the build status page (so you may wish to munge it a bit if address-harvesting spambots are a concern).
BASEDIR/info/host should be filled with a brief description of the host: OS, version, memory size, CPU speed, versions of relevant libraries installed, and finally the version of the Buildbot code which is running the worker.
The optional BASEDIR/info/access_uri can specify a URI which will connect a user to the machine. Many systems accept ssh://hostname URIs for this purpose.
If you run many workers, you may want to create a single ~worker/info file and share it among all the workers with symlinks.
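For example, the hostinfo files could be filled in like this (all values are placeholders to adapt):
echo 'Your Name <you@example.com>' > BASEDIR/info/admin
echo 'Debian 10, 8GB RAM, 4-core x86_64, Python 3.7' > BASEDIR/info/host
echo 'ssh://worker1.example.com' > BASEDIR/info/access_uri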
Worker Options¶
There are a handful of options you might want to use when creating the worker with the buildbot-worker create-worker <options> DIR <params> command. You can type buildbot-worker create-worker --help for a summary. To use these, just include them on the buildbot-worker create-worker command line, like this:
buildbot-worker create-worker --umask=0o22 ~/worker buildmaster.example.org:42012 {myworkername} {mypasswd}
--no-logrotate¶
This disables the internal worker log management mechanism. With this option the worker does not override the default logfile name and its behaviour, making it possible to control those with the command-line options of the twistd daemon.
--umask¶
This is a string (generally an octal representation of an integer) which will cause the worker process’ umask value to be set shortly after initialization. The twistd daemonization utility forces the umask to 077 at startup (which means that all files created by the worker or its child processes will be unreadable by any user other than the worker account). If you want build products to be readable by other accounts, you can add --umask=0o22 to tell the worker to fix the umask after twistd clobbers it. If you want build products to be writable by other accounts too, use --umask=0o000, but this is likely to be a security problem.
--keepalive¶
This is a number that indicates how frequently keepalive messages should be sent from the worker to the buildmaster, expressed in seconds. The default (600) causes a message to be sent to the buildmaster at least once every 10 minutes. To set this to a lower value, use e.g. --keepalive=120.
If the worker is behind a NAT box or stateful firewall, these messages may help to keep the connection alive: some NAT boxes tend to forget about a connection if it has not been used in a while. When this happens, the buildmaster will think that the worker has disappeared, and builds will time out. Meanwhile the worker will not realize that anything is wrong.
--maxdelay¶
This is a number that indicates the maximum amount of time the worker will wait between connection attempts, expressed in seconds. The default (300) causes the worker to wait at most 5 minutes before trying to connect to the buildmaster again.
--maxretries¶
This is a number that indicates the maximum number of connection attempts the worker will make. After that number is reached, the worker process will stop. This option is useful for Latent Workers, to avoid consuming resources in case of misconfiguration or master failure.
For VM-based latent workers, the user is responsible for halting the system when the Buildbot worker has exited. This feature is heavily OS-dependent, and cannot be managed by the Buildbot worker. For example, with systemd, one can add ExecStopPost=shutdown now to the Buildbot worker service unit configuration.
--log-size¶
This is the size, in bytes, at which the Twisted log files will be rotated.
--log-count¶
This is the number of log rotations to keep around. You can either specify a number or None to keep all twistd.log files around. The default is 10.
--allow-shutdown¶
Can also be passed directly to the Worker constructor in buildbot.tac. If set, it allows the worker to initiate a graceful shutdown, meaning that it will ask the master to shut down the worker when the current build, if any, is complete.
Setting allow_shutdown to file will cause the worker to watch shutdown.stamp in the basedir for updates to its mtime. When the mtime changes, the worker will request a graceful shutdown from the master. The file does not need to exist prior to starting the worker.
Setting allow_shutdown to signal will set up a SIGHUP handler to start a graceful shutdown. When the signal is received, the worker will request a graceful shutdown from the master.
The default value is None, in which case this feature will be disabled.
Both master and worker must be at least version 0.8.3 for this feature to work.
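For example, with allow_shutdown set to file, a graceful shutdown can be requested from the shell simply by updating the stamp file’s mtime (BASEDIR is your worker’s base directory):
touch BASEDIR/shutdown.stamp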
Other Worker Configuration¶
unicode_encoding
This represents the encoding that Buildbot should use when converting unicode command-line arguments into byte strings in order to pass them to the operating system when spawning new processes. The default value is what Python’s sys.getfilesystemencoding() returns, which on Windows is ‘mbcs’, on Mac OSX is ‘utf-8’, and on Unix depends on your locale settings.
If you need a different encoding, this can be changed in your worker’s buildbot.tac file by adding a unicode_encoding argument to the Worker constructor.
s = Worker(buildmaster_host, port, workername, passwd, basedir,
keepalive, usepty, umask=umask, maxdelay=maxdelay,
unicode_encoding='utf-8', allow_shutdown='signal')
Worker TLS Configuration¶
connection_string
For TLS connections to the master, the connection_string argument must be passed to the Worker.__init__ function; buildmaster_host and port must then be None. connection_string will be used to create a client endpoint with clientFromString. An example of connection_string is "TLS:buildbot-master.com:9989".
See more about how to formulate the connection string in ConnectionStrings.
Example TLS connection string:
s = Worker(None, None, workername, passwd, basedir, keepalive, connection_string='TLS:buildbot-master.com:9989')
Make sure the worker trusts the master’s certificate. If you have a non-authoritative certificate (the CA is self-signed), the trustRoots parameter can be used:
s = Worker(None, None, workername, passwd, basedir, keepalive, connection_string= 'TLS:buildbot-master.com:9989:trustRoots=/dir-with-ca-certs')
It must point to a directory with PEM-encoded certificates in files with the file ending .pem. For example:
$ cat /dir-with-ca-certs/ca.pem
-----BEGIN CERTIFICATE-----
MIIE9DCCA9ygAwIBAgIJALEqLrC/m1w3MA0GCSqGSIb3DQEBCwUAMIGsMQswCQYD
VQQGEwJaWjELMAkGA1UECBMCUUExEDAOBgNVBAcTB05vd2hlcmUxETAPBgNVBAoT
CEJ1aWxkYm90MRkwFwYDVQQLExBEZXZlbG9wbWVudCBUZWFtMRQwEgYDVQQDEwtC
dWlsZGJvdCBDQTEQMA4GA1UEKRMHRWFzeVJTQTEoMCYGCSqGSIb3DQEJARYZYnVp
bGRib3RAaW50ZWdyYXRpb24udGVzdDAeFw0xNjA5MDIxMjA5NTJaFw0yNjA4MzEx
MjA5NTJaMIGsMQswCQYDVQQGEwJaWjELMAkGA1UECBMCUUExEDAOBgNVBAcTB05v
d2hlcmUxETAPBgNVBAoTCEJ1aWxkYm90MRkwFwYDVQQLExBEZXZlbG9wbWVudCBU
ZWFtMRQwEgYDVQQDEwtCdWlsZGJvdCBDQTEQMA4GA1UEKRMHRWFzeVJTQTEoMCYG
CSqGSIb3DQEJARYZYnVpbGRib3RAaW50ZWdyYXRpb24udGVzdDCCASIwDQYJKoZI
hvcNAQEBBQADggEPADCCAQoCggEBALJZcC9j4XYBi1fYT/fibY2FRWn6Qh74b1Pg
I7iIde6Sf3DPdh/ogYvZAT+cIlkZdo4v326d0EkuYKcywDvho8UeET6sIYhuHPDW
lRl1Ret6ylxpbEfxFNvMoEGNhYAP0C6QS2eWEP9LkV2lCuMQtWWzdedjk+efqBjR
Gozaim0lr/5lx7bnVx0oRLAgbI5/9Ukbopansfr+Cp9CpFpbNPGZSmELzC3FPKXK
5tycj8WEqlywlha2/VRnCZfYefB3aAuQqQilLh+QHyhn6hzc26+n5B0l8QvrMkOX
atKdznMLzJWGxS7UwmDKcsolcMAW+82BZ8nUCBPF3U5PkTLO540CAwEAAaOCARUw
ggERMB0GA1UdDgQWBBT7A/I+MZ1sFFJ9jikYkn51Q3wJ+TCB4QYDVR0jBIHZMIHW
gBT7A/I+MZ1sFFJ9jikYkn51Q3wJ+aGBsqSBrzCBrDELMAkGA1UEBhMCWloxCzAJ
BgNVBAgTAlFBMRAwDgYDVQQHEwdOb3doZXJlMREwDwYDVQQKEwhCdWlsZGJvdDEZ
MBcGA1UECxMQRGV2ZWxvcG1lbnQgVGVhbTEUMBIGA1UEAxMLQnVpbGRib3QgQ0Ex
EDAOBgNVBCkTB0Vhc3lSU0ExKDAmBgkqhkiG9w0BCQEWGWJ1aWxkYm90QGludGVn
cmF0aW9uLnRlc3SCCQCxKi6wv5tcNzAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEB
CwUAA4IBAQCJGJVMAmwZRK/mRqm9E0e3s4YGmYT2jwX5IX17XljEy+1cS4huuZW2
33CFpslkT1MN/r8IIZWilxT/lTujHyt4eERGjE1oRVKU8rlTH8WUjFzPIVu7nkte
09abqynAoec8aQukg79NRCY1l/E2/WzfnUt3yTgKPfZmzoiN0K+hH4gVlWtrizPA
LaGwoslYYTA6jHNEeMm8OQLNf17OTmAa7EpeIgVpLRCieI9S3JIG4WYU8fVkeuiU
cB439SdixU4cecVjNfFDpq6JM8N6+DQoYOSNRt9Dy0ioGyx5D4lWoIQ+BmXQENal
gw+XLyejeNTNgLOxf9pbNYMJqxhkTkoE
-----END CERTIFICATE-----
Using TCP in connection_string is equivalent to using the buildmaster_host and port arguments:
s = Worker(None, None, workername, passwd, basedir, keepalive,
           connection_string='TCP:buildbot-master.com:9989')
is equivalent to
s = Worker('buildbot-master.com', 9989, workername, passwd, basedir, keepalive)
2.2.5.2. Upgrading an Existing Worker¶
Version-specific Notes¶
Over the project’s lifetime, the worker has gone through a few packaging transitions:
- Before Buildbot version 0.8.1, the worker was an integral part of the buildbot package distribution.
- Starting from Buildbot version 0.8.1, the worker was extracted from the buildbot package into the buildbot-slave package.
- Starting from Buildbot version 0.9.0, the buildbot-slave package was renamed to buildbot-worker.
Before Buildbot version 0.8.1, the Buildbot master and worker were part of the same distribution. As of version 0.8.1, the worker is a separate distribution.
As of this release, you will need to install buildbot-slave to run a worker.
Any automatic startup scripts that had run buildbot start for previous versions should be changed to run buildslave start instead.
If you are running a version later than 0.8.1, then you can skip the remainder of this section: the upgrade-slave command will take care of this.
If you are upgrading directly to 0.8.1, read on.
The existing buildbot.tac for any workers running older versions will need to be edited or replaced. If the loss of cached worker state (e.g., for Source steps in copy mode) is not problematic, the easiest solution is to simply delete the worker directory and re-run buildslave create-slave.
If deleting the worker directory is problematic, the change to buildbot.tac is simple. On line 3, replace:
from buildbot.slave.bot import BuildSlave
with:
from buildslave.bot import BuildSlave
After this change, the worker should start as usual.
Upgrading a 0.8.* version of buildbot-slave¶
If you have just installed a new version of buildbot-slave, you may need to take some steps to upgrade it. If you are upgrading to version 0.8.2 or later, you can run:
buildslave upgrade-slave /path/to/worker/dir
Upgrading from buildbot-slave to buildbot-worker¶
If the loss of cached worker state (e.g., for Source steps in copy mode) is not problematic, the easiest solution is to simply delete the worker directory and re-run buildbot-worker create-worker.
If deleting the worker directory is problematic, you can change buildbot.tac in the following way:
Replace:
from buildslave.bot import BuildSlave
with:
from buildbot_worker.bot import Worker
Replace:
application = service.Application('buildslave')
with:
application = service.Application('buildbot-worker')
Replace:
s = BuildSlave(buildmaster_host, port, slavename, passwd, basedir, keepalive, usepty, umask=umask, maxdelay=maxdelay, numcpus=numcpus, allow_shutdown=allow_shutdown)
with:
s = Worker(buildmaster_host, port, slavename, passwd, basedir, keepalive, umask=umask, maxdelay=maxdelay, numcpus=numcpus, allow_shutdown=allow_shutdown)
See Transition to “Worker” Terminology for details of the changes in Buildbot version 0.9.0.
2.2.6. Next Steps¶
2.2.6.1. Launching the daemons¶
Both the buildmaster and the worker run as daemon programs. To launch them, pass the working directory to the buildbot and buildbot-worker commands, as appropriate:
# start a master
buildbot start [ BASEDIR ]
# start a worker
buildbot-worker start [ WORKER_BASEDIR ]
The BASEDIR is optional and can be omitted if the current directory contains the buildbot configuration (the buildbot.tac file).
The buildbot start command will start the daemon and then return, so normally it will not produce any output.
To verify that the programs are indeed running, look for a pair of files named twistd.log and twistd.pid that should be created in the working directory. twistd.pid contains the process ID of the newly-spawned daemon.
When the worker connects to the buildmaster, new directories will start appearing in its base directory. The buildmaster tells the worker to create a directory for each Builder which will be using that worker. All build operations are performed within these directories: CVS checkouts, compiles, and tests.
Once you get everything running, you will want to arrange for the buildbot daemons to be started at boot time.
One way is to use cron, by putting them in a @reboot crontab entry [1]:
@reboot buildbot start [ BASEDIR ]
When you run crontab to set this up, remember to do it as the buildmaster or worker account! If you add this to your crontab when running as your regular account (or worse yet, root), then the daemon will run as the wrong user, quite possibly as one with more authority than you intended to provide.
It is important to remember that the environment provided to cron jobs and init scripts can be quite different from your normal runtime. There may be fewer environment variables specified, and the PATH may be shorter than usual.
It is a good idea to test out this method of launching the worker by using a cron job with a time in the near future, with the same command, and then check twistd.log to make sure the worker actually started correctly.
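For example, if it is currently 10:25, a one-off entry like the following (added with crontab -e as the worker account, and removed after checking twistd.log) exercises the same environment that the @reboot entry will see:
30 10 * * * buildbot-worker start BASEDIR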
Common problems here are for /usr/local or ~/bin to not be on your PATH, or for PYTHONPATH to not be set correctly. Sometimes HOME is messed up too.
If using systemd to launch buildbot-worker, it may be a good idea to specify a fixed PATH using the Environment directive; see the systemd unit file example.
Some distributions may include conveniences to make starting buildbot at boot time easy.
For instance, with the default buildbot package in Debian-based distributions, you may only need to modify /etc/default/buildbot (see also /etc/init.d/buildbot, which reads the configuration in /etc/default/buildbot).
Buildbot also comes with its own init scripts that provide support for controlling multi-worker and multi-master setups (mostly because they are based on the init script from the Debian package). With a little modification these scripts can be used both on Debian and RHEL-based distributions and may thus prove helpful to package maintainers who are working on buildbot (or those that haven’t yet split buildbot into master and worker packages).
# install as /etc/default/buildbot-worker
# or /etc/sysconfig/buildbot-worker
worker/contrib/init-scripts/buildbot-worker.default
# install as /etc/default/buildmaster
# or /etc/sysconfig/buildmaster
master/contrib/init-scripts/buildmaster.default
# install as /etc/init.d/buildbot-worker
worker/contrib/init-scripts/buildbot-worker.init.sh
# install as /etc/init.d/buildmaster
master/contrib/init-scripts/buildmaster.init.sh
# ... and tell sysvinit about them
chkconfig buildmaster reset
# ... or
update-rc.d buildmaster defaults
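For example, installing the worker scripts on a Debian-style system might look like this (paths are relative to a Buildbot source checkout; adjust names and locations for your distribution):
cp worker/contrib/init-scripts/buildbot-worker.default /etc/default/buildbot-worker
cp worker/contrib/init-scripts/buildbot-worker.init.sh /etc/init.d/buildbot-worker
chmod +x /etc/init.d/buildbot-worker
update-rc.d buildbot-worker defaults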
2.2.6.2. Launching worker as Windows service¶
You can find information about installing Buildbot as a Windows service here: RunningBuildbotOnWindows. Recent versions of the Buildbot worker have a simplified configuration for running as a Windows service; the command
buildbot_worker_windows_service.exe --user YOURDOMAIN\theusername --password thepassword --startup auto install
automatically adds the user rights required to run Buildbot as a service.
2.2.6.3. Logfiles¶
While a buildbot daemon runs, it emits text to a logfile, named twistd.log. A command like tail -f twistd.log is useful to watch the command output as it runs.
The buildmaster will announce any errors with its configuration file in the logfile, so it is a good idea to look at the log at startup time to check for any problems. Most buildmaster activities will cause lines to be added to the log.
2.2.6.4. Shutdown¶
To stop a buildmaster or worker manually, use:
buildbot stop [ BASEDIR ]
# or
buildbot-worker stop [ WORKER_BASEDIR ]
This simply looks for the twistd.pid file and kills whatever process is identified within.
At system shutdown, all processes are sent a SIGTERM. The buildmaster and worker will respond to this by shutting down normally.
The buildmaster will respond to a SIGHUP by re-reading its config file. Of course, this only works on Unix-like systems with signal support, and won’t work on Windows.
The following shortcut is available:
buildbot reconfig [ BASEDIR ]
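On Unix, the same effect can be achieved by hand by signalling the process recorded in twistd.pid (a sketch; BASEDIR as above):
kill -HUP "$(cat BASEDIR/twistd.pid)"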
When you update the Buildbot code to a new release, you will need to restart the buildmaster and/or worker before they can take advantage of the new code. You can do a buildbot stop BASEDIR and buildbot start BASEDIR in quick succession, or you can use the restart shortcut, which does both steps for you:
buildbot restart [ BASEDIR ]
Workers can similarly be restarted with:
buildbot-worker restart [ BASEDIR ]
There are certain configuration changes that are not handled cleanly by buildbot reconfig. If this occurs, buildbot restart is a more robust tool to fully switch over to the new configuration.
buildbot restart may also be used to start a stopped Buildbot instance. This behaviour is useful when writing scripts that stop, start and restart Buildbot.
A worker may also be gracefully shutdown from the web UI. This is useful to shutdown a worker without interrupting any current builds. The buildmaster will wait until the worker has finished all its current builds, and will then tell the worker to shutdown.
[1] This @reboot syntax is understood by Vixie cron, which is the flavor usually provided with Linux systems. Other unices may have a cron that doesn’t understand @reboot.
2.3. Concepts¶
This chapter defines some of the basic concepts that the Buildbot uses. You’ll need to understand how the Buildbot sees the world to configure it properly.
2.3.1. Source Stamps¶
Buildbot uses the concept of a source stamp set to identify the exact source code that needs to be built for a certain project. A source stamp set is a collection of one or more source stamps.
A source stamp is a collection of information needed to identify a particular version of code on a certain codebase. This information most often is a revision and possibly a branch.
A codebase is a collection of related files and their history tracked as a unit by version control systems.
A single codebase may appear in multiple repositories which themselves are identified by URLs.
For example, https://github.com/mozilla/mozilla-central and http://hg.mozilla.org/mozilla-release both contain the Firefox codebase, although not exactly the same code.
A project corresponds to a set of one or more codebases that together may be built and produce some end artifact. For example, a company may build several applications based on the same core library. The “app” codebase and the “core” codebase are in separate repositories, but are compiled together and constitute a single project. Changes to either codebase should cause a rebuild of the application.
A revision is an identifier used by most version control systems to uniquely specify a particular version of the source code. Sometimes in order to do that a revision may make sense only if used in combination with a branch.
To sum up the above, to build a project, Buildbot needs to know exactly which version of each codebase it should build. It uses a source stamp to do so for each codebase, each of which informs Buildbot that it should use a specific revision from that codebase. Collectively, these source stamps form the project’s source stamp set.
2.3.2. Version Control Systems¶
Buildbot supports a significant number of version control systems, so it treats them abstractly.
For purposes of deciding when to perform builds, Buildbot’s change sources monitor repositories, and represent any updates to those repositories as changes. These change sources fall broadly into two categories: pollers which periodically check the repository for updates; and hooks, where the repository is configured to notify Buildbot whenever an update occurs. For more information see Change Sources and Changes and How Different VC Systems Specify Sources.
When it comes time to actually perform a build, a scheduler prepares a source stamp set, as described above, based on its configuration. When the build begins, one or more source steps use the information in the source stamp set to actually check out the source code, using the normal VCS commands.
2.3.3. Changes¶
A Change is an abstract way Buildbot uses to represent a single change to the source files performed by a developer. In version control systems that support the notion of atomic check-ins a change represents a changeset or commit.
A Change comprises the following information:
- the developer that is responsible for the change
- the list of files that the change added, removed or modified
- the message of the commit
- the repository, the codebase and the project that the change corresponds to
- the revision and the branch of the commit
2.3.4. Scheduling Builds¶
Each Buildmaster has a set of scheduler objects, each of which gets a copy of every incoming Change. The Schedulers are responsible for deciding when Builds should be run.
Some Buildbot installations might have a single scheduler, while others may have several, each for a different purpose.
For example, a quick scheduler might exist to give immediate feedback to developers, hoping to catch obvious problems in the code that can be detected quickly. These typically do not run the full test suite, nor do they run on a wide variety of platforms. They also usually do a VC update rather than performing a brand-new checkout each time.
A separate full scheduler might run more comprehensive tests, to catch more subtle problems.
It might be configured to run after the quick scheduler, to give developers time to commit fixes to bugs caught by the quick scheduler before running the comprehensive tests.
This scheduler would also feed multiple Builders.
Many schedulers can be configured to wait a while after seeing a source-code change - this is the tree stable timer. The timer allows multiple commits to be “batched” together. This is particularly useful in distributed version control systems, where a developer may push a long sequence of changes all at once. To save resources, it’s often desirable only to test the most recent change.
Schedulers can also filter out the changes they are interested in, based on a number of criteria. For example, a scheduler that only builds documentation might skip any changes that do not affect the documentation. Schedulers can also filter on the branch to which a commit was made.
There is some support for configuring dependencies between builds - for example, you may want to build packages only for revisions which pass all of the unit tests. This support is under active development in Buildbot, and is referred to as “build coordination”.
Periodic builds (those which are run every N seconds rather than after new Changes arrive) are triggered by a special Periodic scheduler.
Each scheduler creates and submits BuildSet objects to the BuildMaster, which is then responsible for making sure the individual BuildRequests are delivered to the target Builders.
Scheduler instances are activated by placing them in the schedulers list in the buildmaster config file. Each scheduler must have a unique name.
2.3.5. BuildSets¶
A BuildSet is the name given to a set of Builds that all compile/test the same version of the tree on multiple Builders. In general, all these component Builds will perform the same sequence of Steps, using the same source code, but on different platforms or against a different set of libraries.
The BuildSet is tracked as a single unit, which fails if any of the component Builds have failed, and therefore can succeed only if all of the component Builds have succeeded.
There are two kinds of status notification messages that can be emitted for a BuildSet: the firstFailure type (which fires as soon as we know the BuildSet will fail), and the Finished type (which fires once the BuildSet has completely finished, regardless of whether the overall set passed or failed).
A BuildSet is created with a set of one or more source stamp tuples of (branch, revision, changes, patch), some of which may be None, and a list of Builders on which it is to be run. They are then given to the BuildMaster, which is responsible for creating a separate BuildRequest for each Builder.
There are a couple of different likely values for the SourceStamp:
(revision=None, changes=CHANGES, patch=None)
- This is a SourceStamp used when a series of Changes have triggered a build. The VC step will attempt to check out a tree that contains CHANGES (and any changes that occurred before CHANGES, but not any that occurred after them).
(revision=None, changes=None, patch=None)
- This builds the most recent code on the default branch. This is the sort of SourceStamp that would be used on a Build that was triggered by a user request, or a Periodic scheduler. It is also possible to configure the VC Source Step to always check out the latest sources rather than paying attention to the Changes in the SourceStamp, which will result in the same behavior as this.
(branch=BRANCH, revision=None, changes=None, patch=None)
- This builds the most recent code on the given BRANCH. Again, this is generally triggered by a user request or a Periodic scheduler.
(revision=REV, changes=None, patch=(LEVEL, DIFF, SUBDIR_ROOT))
- This checks out the tree at the given revision REV, then applies a patch (using patch -pLEVEL <DIFF) from inside the relative directory SUBDIR_ROOT. Item SUBDIR_ROOT is optional and defaults to the builder working directory. The try command creates this kind of SourceStamp. If patch is None, the patching step is bypassed.
The buildmaster is responsible for turning the BuildSet into a set of BuildRequest objects and queueing them on the appropriate Builders.
2.3.6. BuildRequests¶
A BuildRequest is a request to build a specific set of source code (specified by one or more source stamps) on a single Builder. Each Builder runs the BuildRequest as soon as it can (i.e. when an associated worker becomes free). BuildRequests are prioritized from oldest to newest, so when a worker becomes free, the Builder with the oldest BuildRequest is run.
The BuildRequest contains one SourceStamp specification per codebase.
The actual process of running the build (the series of Steps that will be executed) is implemented by the Build object. In the future this might be changed, to have the Build define what gets built, and a separate BuildProcess (provided by the Builder) to define how it gets built.
The BuildRequest may be mergeable with other compatible BuildRequests. Builds that are triggered by incoming Changes will generally be mergeable. Builds that are triggered by user requests are generally not, unless they are multiple requests to build the latest sources of the same branch. A merge of build requests is performed per codebase, that is, on changes having the same codebase.
2.3.7. Builders¶
A Builder handles the process of scheduling work to workers. Each Builder is responsible for a certain type of build, which usually consists of an identical or very similar sequence of steps. The class serves as a kind of queue for that particular type of build. In general, each Builder runs independently, but it’s possible to constrain the behavior of Builders using various kinds of interlocks.
Each builder is a long-lived object which controls a sequence of Builds. A Builder is created when the config file is first parsed, and lives forever (or rather until it is removed from the config file). It mediates the connections to the workers that do all the work, and is responsible for creating the Build objects (Builds).
Each builder gets a unique name, and the path name of a directory where it gets to do all its work. This path is used in two ways. On the buildmaster-side a directory is created for keeping status information. On the worker-side a directory is created where the actual checkout, compile and test commands are executed.
2.3.8. Build Factories¶
A builder also has a BuildFactory, which is responsible for creating new Build instances: because the Build instance is what actually performs each build, choosing the BuildFactory is the way to specify what happens each time a build is done (Builds).
2.3.9. Workers¶
A Worker corresponds to an environment where builds are executed. A single physical machine must run at least one Worker in order for Buildbot to be able to utilize it for running builds. Multiple Workers may run on a single machine to provide different environments that can reuse the same hardware by means of containers or virtual machines.
Each builder is associated with one or more Workers. For example, a builder which is used to perform macOS builds (as opposed to Linux or Windows builds) should naturally be associated with a Mac worker.
If multiple workers are available for any given builder, you will have some measure of redundancy: in case one worker goes offline, the others can still keep the Builder working. In addition, multiple workers will allow multiple simultaneous builds for the same Builder, which might be useful if you have a lot of forced or try builds taking place.
Ideally, each Worker that is configured for a builder should be identical. Otherwise build or test failures will depend on which worker the build runs on, and this will complicate the investigation of failures.
2.3.10. Builds¶
A Build is a single compile or test run of a particular version of the source code, and consists of a series of steps.
The steps may be arbitrary. For example, for compiled software a build generally consists of the checkout, configure, make, and make check sequence.
For interpreted projects like Python modules, a build is generally a checkout followed by an invocation of the bundled test suite.
A BuildFactory describes the steps a build will perform.
The builder which starts a build uses its configured build factory to determine the build’s steps.
2.3.11. Users¶
Buildbot has a somewhat limited awareness of users. It assumes the world consists of a set of developers, each of whom can be described by a couple of simple attributes. These developers make changes to the source code, causing builds which may succeed or fail.
Users also may have different levels of authorization when issuing Buildbot commands, such as forcing a build from the web interface or from an IRC channel.
Each developer is primarily known through the source control system.
Each Change object that arrives is tagged with a who field that typically gives the account name (on the repository machine) of the user responsible for that change. This string is displayed on the HTML status pages and in each Build’s blamelist.
To do more with the User than just refer to them, this username needs to be mapped into an address of some sort. The responsibility for this mapping is left up to the status module which needs the address. In the future, the responsibility for managing users will be transferred to User Objects.
The who fields in git Changes are used to create User Objects, which allows for more control and flexibility in how Buildbot manages users.
2.3.11.1. User Objects¶
User Objects allow Buildbot to better manage users throughout its various interactions with users (see Change Sources and Changes and Reporters). The User Objects are stored in the Buildbot database and correlate the various attributes that a user might have: irc, Git, etc.
Changes¶
Incoming Changes all have a who attribute attached to them that specifies which developer is responsible for that Change. When a Change is first rendered, the who attribute is parsed and added to the database if it doesn’t exist, or checked against an existing user.
The who attribute is formatted in different ways depending on the version control system that the Change came from:
- git: who attributes take the form Full Name <Email>.
- svn: who attributes are of the form Username.
- hg: who attributes are free-form strings, but usually adhere to similar conventions as git attributes (Full Name <Email>).
- cvs: who attributes are of the form Username.
- darcs: who attributes contain an Email and may also include a Full Name, like git attributes.
- bzr: who attributes are free-form strings like hg, and can include a Username, Email, and/or Full Name.
Tools¶
For managing users manually, use the buildbot user command, which allows you to add, remove, update, and show various attributes of users in the Buildbot database (see Command-line Tool).
Uses¶
Correlating the various bits and pieces that Buildbot views as users also means that one attribute of a user can be translated into another. This provides a more complete view of users throughout Buildbot.
One such use is being able to find email addresses based on a set of Builds to notify users through the MailNotifier. This process is explained more clearly in Email Addresses.
Another way to utilize User Objects is through UsersAuth for web authentication. To use UsersAuth, you need to set a bb_username and bb_password via the buildbot user command-line tool to check against. The password will be encrypted before it is stored in the database along with other user attributes.
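As a sketch only (the exact flags and their required combinations are best confirmed with buildbot user --help on your version), setting the web credentials for an existing user identifier jdoe might look like:
buildbot user --master=localhost:9989 --username=user --passwd=userpw \
    --op=update --bb_username=jdoe --bb_password=sekrit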
2.3.11.2. Doing Things With Users¶
Each change has a single user who is responsible for it. Most builds have a set of changes: the build generally represents the first time these changes have been built and tested by the Buildbot. The build has a blamelist that is the union of the users responsible for all the build’s changes. If the build was created by a Try Schedulers this list will include the submitter of the try job, if known.
The build provides a list of users who are interested in the build – the interested users. Usually this is equal to the blamelist, but may also be expanded, e.g., to include the current build sheriff or a module’s maintainer.
If desired, the buildbot can notify the interested users until the problem is resolved.
2.3.11.3. Email Addresses¶
The MailNotifier is a status target which can send email about the results of each build. It accepts a static list of email addresses to which each message should be delivered, but it can also be configured to send mail to the Build’s Interested Users. To do this, it needs a way to convert User names into email addresses.
For many VC systems, the User Name is actually an account name on the system which hosts the repository. As such, turning the name into an email address is a simple matter of appending @repositoryhost.com.
Some projects use other kinds of mappings (for example the preferred email address may be at project.org despite the repository host being named cvs.project.org), and some VC systems have full separation between the concept of a user and that of an account on the repository host (like Perforce).
Some systems (like Git) put a full contact email address in every change.
To convert these names to addresses, the MailNotifier uses an EmailLookup object. This provides a getAddress method which accepts a name and (eventually) returns an address. The default MailNotifier module provides an EmailLookup which simply appends a static string, configurable when the notifier is created. To create more complex behaviors (perhaps using an LDAP lookup, or using finger on a central host to determine a preferred address for the developer), provide a different object as the lookup argument.
If an EmailLookup object isn’t given to the MailNotifier, the MailNotifier will try to find emails through User Objects.
This will work the same as if an EmailLookup object was used if every user in the Build’s Interested Users list has an email in the database for them.
If a user whose change led to a Build doesn’t have an email attribute, that user will not receive an email.
If extraRecipients is given, those users are still sent mail when the EmailLookup object is not specified.
In the future, when the Problem mechanism has been set up, the Buildbot will need to send mail to arbitrary Users. It will do this by locating a MailNotifier-like object among all the buildmaster’s status targets, and asking it to send messages to various Users. This means the User-to-address mapping only has to be set up once, in your MailNotifier, and every email message the buildbot emits will take advantage of it.
2.3.11.4. IRC Nicknames¶
Like MailNotifier, the buildbot.status.words.IRC class provides a status target which can announce the results of each build.
It also provides an interactive interface by responding to online queries posted in the channel or sent as private messages.
In the future, the buildbot can be configured to map User names to IRC nicknames, to watch for the recent presence of these nicknames, and to deliver build status messages to the interested parties.
Like MailNotifier does for email addresses, the IRC object will have an IRCLookup which is responsible for nicknames. The mapping can be set up statically, or it can be updated by online users themselves (by claiming a username with some kind of buildbot: i am user warner command). Once the mapping is established, the rest of the buildbot can ask the IRC object to send messages to various users.
It can report on the likelihood that the user saw the given message (based upon how long the user has been inactive on the channel), which might prompt the Problem Hassler logic to send them an email message instead.
These operations and authentication of commands issued by particular nicknames will be implemented in User Objects.
2.3.12. Build Properties¶
Each build has a set of Build Properties, which can be used by its build steps to modify their actions. These properties, in the form of key-value pairs, provide a general framework for dynamically altering the behavior of a build based on its circumstances.
Properties form a simple kind of variable in a build. Some properties are set when the build starts, and properties can be changed as a build progresses – properties set or changed in one step may be accessed in subsequent steps. Property values can be numbers, strings, lists, or dictionaries - basically, anything that can be represented in JSON.
Properties are very flexible, and can be used to implement all manner of functionality. Here are some examples:
Most Source steps record the revision that they checked out in the got_revision property. A later step could use this property to specify the name of a fully-built tarball, dropped in an easily-accessible directory for later testing.
Note
In builds with more than one codebase, the got_revision property is a dictionary, keyed by codebase.
Some projects want to perform nightly builds as well as building in response to committed changes.
Such a project would run two schedulers, both pointing to the same set of builders, but could provide an is_nightly property so that steps can distinguish the nightly builds, perhaps to run more resource-intensive tests.
Some projects have different build processes on different systems. Rather than create a build factory for each worker, the steps can use worker properties to identify the unique aspects of each worker and adapt the build process dynamically.
2.3.13. Multiple-Codebase Builds¶
What if an end-product is composed of code from several codebases? Changes may arrive from different repositories within the tree-stable-timer period. Buildbot will not only use the source-trees that contain changes but also needs the remaining source-trees to build the complete product.
For this reason a Scheduler can be configured to base a build on a set of several source-trees that can (partly) be overridden by the information from incoming Changes.
As described above, the source for each codebase is identified by a source stamp, containing its repository, branch and revision. A full build set will specify a source stamp set describing the source to use for each codebase.
Configuring all of this takes a coordinated approach. A complete multiple repository configuration consists of:
a codebase generator
Every relevant change arriving from a VC must contain a codebase. This is done by a codebaseGenerator that is defined in the configuration. Most generators examine the repository of a change to determine its codebase, using project-specific rules.
some schedulers
Each scheduler has to be configured with a set of all required codebases to build a product. These codebases indicate the set of required source-trees. In order for the scheduler to be able to produce a complete set for each build, the configuration can give a default repository, branch, and revision for each codebase. When a scheduler must generate a source stamp for a codebase that has received no changes, it applies these default values.
multiple source steps - one for each codebase
A Builder’s build factory must include a source step for each codebase. Each of the source steps has a codebase attribute which is used to select an appropriate source stamp from the source stamp set for a build. This information comes from the arrived changes or from the scheduler’s configured default values.
Note
Each source step has to have its own workdir set in order for the checkout to be done for each codebase in its own directory.
Note
Ensure you specify the codebase within your source step’s Interpolate() calls (e.g. http://.../svn/%(src:codebase:branch)s). See Interpolate for details.
Warning
Defining a codebaseGenerator that returns non-empty (not '') codebases will change the behavior of all the schedulers.
2.3.14. Multimaster¶
Warning
Buildbot Multimaster is considered experimental. There are still some companies using it in production. Don’t hesitate to use the mailing lists to share your experience.
Buildbot supports the interconnection of several masters. This has to be done through a multi-master enabled message queue backend. As of now the only one supported is wamp and crossbar.io; see wamp.
There are several strategies for introducing multimaster into your Buildbot infrastructure. A simple way to describe them is by the distinction between symmetric and asymmetric multimaster (much as there are SMP and AMP for multi-core CPUs).
Symmetric multimaster is when each master shares the exact same configuration. They run the same builders, same schedulers, same everything; the only difference is that workers are connected evenly between the masters (by any means, e.g. DNS load balancing). Symmetric multimaster is a good way to scale Buildbot horizontally.
Asymmetric multimaster is when each master has a different configuration. Each master may have a specific responsibility (e.g. schedulers, a set of builders, the UI). This is closer to how things were done in 0.8, partly because of its technical limitations. A nice feature of asymmetric multimaster is that you can have the UI handled by only some of the masters.
Separating the UI from the build coordination will greatly help the performance of the UI, because badly written BuildSteps can stall the reactor for several seconds.
The fanciest configuration would probably be a symmetric configuration for everything but the UI. You would scale the number of UI masters according to your number of UI users, and scale the number of engine masters according to the number of workers.
Depending on your workload and the size of the master host, it is probably a good idea to start thinking about multimaster once around a hundred workers are connected.
Multimaster can also be used for high availability and for seamless upgrades of configuration code. Complex configurations indeed sometimes require a master restart to reload custom steps or code, or just to upgrade the upstream Buildbot version.
In this case, you would implement the following procedure:
- Start new master(s) with new code and configuration.
- Send a graceful shutdown to the old master(s).
- New master(s) will start taking the new jobs, while old master(s) will just finish managing the running builds.
- As an old master finishes its running builds, it will drop the connections from the workers, who will then reconnect automatically and, by means of the load balancer, get connected to a new master to run new jobs.
Although Buildbot nine has been designed to allow such a procedure, it has not, as far as we know, been used in production yet. A new REST API is probably needed to gracefully shut down a master, and the details of gracefully dropping worker connections remain to be sorted out.
2.4. Secret Management¶
2.4.1. Requirements¶
Buildbot steps might need secrets to execute their actions. Secrets are used to execute commands or to create authenticated network connections. A secret may be an SSH key, a password, or file content like a wgetrc file or a public SSH key. To preserve confidentiality, secret values must not be printed or logged in the twisted or step logs. Secrets must not be stored in the Buildbot configuration (master.cfg), as the source code is usually shared in an SCM like git.
2.4.2. How to use Buildbot Secret Management¶
2.4.2.1. Secrets and providers¶
Buildbot implements several providers for secrets retrieval:
- File system based: secrets are written in a file. This is a simple solution, for example, when secrets are managed by a configuration management system like Ansible Vault.
- Third-party backend based: secrets are stored by specialized software. These solutions are usually more secure.
Secrets providers are configured if needed in the master configuration. Multiple providers can be configured at once. The secret manager is a Buildbot service. The secret manager returns the specific provider results related to the providers registered in the configuration.
2.4.2.2. How to use secrets in Buildbot¶
Secrets can be used in Buildbot via the IRenderable mechanism. Two IRenderable implementations actually provide secrets: Interpolate can be used if you need to mix secrets and other interpolation in the same argument, and Secret can be used if your secret is directly used as a component argument.
As argument to steps¶
The following example shows a basic usage of secrets in Buildbot.
from buildbot.plugins import secrets, util
# First we declare that the secrets are stored in a directory of the filesystem
# each file contains one secret, identified by the filename
c['secretsProviders'] = [secrets.SecretInAFile(dirname="/path/toSecretsFiles")]
# then in a buildfactory:
# use a secret on a shell command via Interpolate
f1.addStep(ShellCommand(util.Interpolate("wget -u user -p '%(secret:userpassword)s' '%(prop:urltofetch)s'")))
# .. or non shell form:
f1.addStep(ShellCommand(["wget", "-u", "user", "-p", util.Secret("userpassword"), util.Interpolate("%(prop:urltofetch)s")]))
Secrets are interpolated in the build just like properties, and can be used in a command line, for example.
As argument to services¶
You can use secrets to configure services. Not all service arguments are compatible with secrets; see each service's individual documentation for details.
# First we declare that the secrets are stored in a directory of the filesystem
# each file contains one secret, identified by the filename
c['secretsProviders'] = [secrets.SecretInAFile(dirname="/path/toSecretsFiles")]
# then for a reporter:
c['services'] = [GitHubStatusPush(token=util.Secret("githubToken"))]
2.4.2.3. Secrets storages¶
SecretInAFile¶
c['secretsProviders'] = [secrets.SecretInAFile(dirname="/path/toSecretsFiles")]
In the passed directory, every file contains a secret identified by the filename.
For example, a file named user contains the text pa$$w0rd.
Arguments:
dirname
- (required) Absolute path to the directory containing the secret files.
strip
- (optional) if
True
(the default), trailing newlines are removed from the file contents.
SecretInVault¶
c['secretsProviders'] = [secrets.HashiCorpVaultSecretProvider(
vaultToken=open('VAULT_TOKEN').read().strip(),
vaultServer="http://localhost:8200",
secretsmount="secret",
apiVersion=2
)]
Vault secures, stores, and tightly controls access to secrets. Vault presents a unified API to access multiple backends. At the moment Buildbot supports KV v1 and v2 backends via the apiVersion argument.
Buildbot's Vault authentication/authorization is via a token. The "Initial Root Token", generated on Vault initialization, can be used, but it has 'root' authorization. Vault policies, and the tokens subsequently assigned to them, provide for a more restrictive approach.
In the master configuration, the Vault provider is instantiated through the Buildbot service manager as a secret provider with the Vault server address and the Vault token. The SecretInVault provider allows Buildbot to read secrets from Vault. For more information about Vault, please visit https://www.vaultproject.io/.
SecretInPass¶
c['secretsProviders'] = [secrets.SecretInPass(
gpgPassphrase="passphrase",
dirname="/path/to/password/store"
)]
Passwords can be stored in a unix password store, encrypted using GPG keys.
Buildbot can query secrets via the pass
binary found in the PATH of each worker.
While pass
allows for multiline entries, the secret must be on the first line of each entry.
The only caveat is that all passwords Buildbot needs to access have to be encrypted using the same GPG key.
For more information about pass, please visit https://www.passwordstore.org/.
Arguments:
gpgPassphrase
- (optional) Passphrase for the GPG decryption key, if any
dirname
- (optional) Absolute path to the password store directory, defaults to ~/.password-store
2.4.2.4. How to populate secrets in a build¶
To populate secrets in files during a build, two steps are used to create and delete the files on the worker. The files will be automatically deleted at the end of the build.
f = BuildFactory()
with f.withSecrets(secrets_list):
f.addStep(step_definition)
or
f = BuildFactory()
f.addSteps([list_of_step_definitions], withSecrets=[secrets_list])
In both cases, secrets_list is a list of (secret path, secret value) tuples:
secrets_list = [('/first/path', Interpolate('write something and %(secret:somethingmore)s')),
                ('/second/path', Interpolate('%(secret:othersecret)s'))]
The Interpolate class is used to render the value during the build execution.
2.4.2.5. How to configure a Vault instance¶
Vault is a very generic system and can be complex to install for the first time. Here is a simple tutorial for installing the minimal Vault needed to work with Buildbot.
Use Docker to install Vault¶
A Docker image is available to help users install Vault. Without any arguments, the command launches a Vault developer instance that is easy to use and test. The developer version is already initialized and unsealed. To launch a Vault server, please refer to the Vault Docker documentation:
In a shell:
docker run vault
Starting the vault instance¶
Once the Docker image is created, launch a shell terminal on the Docker image:
docker exec -i -t docker_vault_image_name /bin/sh
Then, export the environment variable VAULT_ADDR needed to init Vault.
export VAULT_ADDR='vault.server.address'
Writing secrets¶
By default, the official Docker instance of Vault is initialized with a mount path of 'secret', a KV v1 secret engine, and a second KV engine (v2) at 'secret/data'. Currently, Buildbot is "hard wired" to expect KV v2 engines to reside within this "data" sub-path. Provision is made to set a top-level path via the secretsmount argument, which defaults to "secret". To add a new secret:
vault kv put secret/new_secret_key value=new_secret_value
2.5. Configuration¶
The following sections describe the configuration of the various Buildbot components. The information available here is sufficient to create basic build and test configurations, and does not assume great familiarity with Python.
In more advanced Buildbot configurations, Buildbot acts as a framework for a continuous-integration application. The next section, Customization, describes this approach, with frequent references into the development documentation.
2.5.1. Configuring Buildbot¶
The Buildbot’s behavior is defined by the config file, which normally lives in the master.cfg
file in the buildmaster’s base directory (but this can be changed with an option to the buildbot create-master command).
This file completely specifies which Builder
s are to be run, which workers they should use, how Change
s should be tracked, and where the status information is to be sent.
The buildmaster’s buildbot.tac
file names the base directory; everything else comes from the config file.
A sample config file was installed for you when you created the buildmaster, but you will need to edit it before your Buildbot will do anything useful.
This chapter gives an overview of the format of this file and the various sections in it. You will need to read the later chapters to understand how to fill in each section properly.
2.5.1.1. Config File Format¶
The config file is, fundamentally, just a piece of Python code which defines a dictionary named BuildmasterConfig
, with a number of keys that are treated specially.
You don't need to know Python to do basic configuration, though; you can just copy the syntax of the sample file.
If you are comfortable writing Python code, however, you can use all the power of a full programming language to achieve more complicated configurations.
The BuildmasterConfig
name is the only one which matters: all other names defined during the execution of the file are discarded.
When parsing the config file, the Buildmaster generally compares the old configuration with the new one and performs the minimum set of actions necessary to bring the Buildbot up to date: Builder
s which are not changed are left untouched, and Builder
s which are modified get to keep their old event history.
The beginning of the master.cfg
file typically starts with something like:
BuildmasterConfig = c = {}
Therefore a config key like change_source
will usually appear in master.cfg
as c['change_source']
.
See Buildmaster Configuration Index for a full list of BuildmasterConfig keys.
Basic Python Syntax¶
The master configuration file is interpreted as Python, allowing the full flexibility of the language. For the configurations described in this section, a detailed knowledge of Python is not required, but the basic syntax is easily described.
Python comments start with a hash character #
, tuples are defined with (parenthesis, pairs)
, and lists (arrays) are defined with [square, brackets]
.
Tuples and lists are mostly interchangeable.
Dictionaries (data structures which map keys to values) are defined with curly braces: {'key1': value1, 'key2': value2}
.
Function calls (and object instantiation) can use named parameters, like steps.ShellCommand(command=["trial", "hello"])
.
The config file starts with a series of import
statements, which make various kinds of Step
s and Status
targets available for later use.
The main BuildmasterConfig
dictionary is created, then it is populated with a variety of keys, described section-by-section in subsequent chapters.
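As an illustrative sketch (not the sample file itself; the worker name, password, and title are hypothetical), the overall shape of a master.cfg is typically:
from buildbot.plugins import worker

BuildmasterConfig = c = {}

# each of these keys is described in its own chapter
c['workers'] = [worker.Worker("example-worker", "pass")]
c['protocols'] = {'pb': {'port': 9989}}
c['change_source'] = []
c['schedulers'] = []
c['builders'] = []
c['title'] = "Example"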
2.5.1.2. Predefined Config File Symbols¶
The following symbols are automatically available for use in the configuration file.
basedir
the base directory for the buildmaster. This string has not been expanded, so it may start with a tilde. It needs to be expanded before use. The config file is located in:
os.path.expanduser(os.path.join(basedir, 'master.cfg'))
__file__
- the absolute path of the config file.
The config file’s directory is located in
os.path.dirname(__file__)
.
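For example, both symbols can be combined in the config file to locate paths next to the configuration (a sketch; the secrets subdirectory is hypothetical):
import os

# basedir may start with '~', so expand it before use
master_basedir = os.path.expanduser(basedir)
secrets_dir = os.path.join(master_basedir, "secrets")

# __file__ is already absolute, so no expansion is needed here
config_dir = os.path.dirname(__file__)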
2.5.1.3. Testing the Config File¶
To verify that the config file is well-formed and contains no deprecated or invalid elements, use the checkconfig
command, passing it either a master directory or a config file.
% buildbot checkconfig master.cfg
Config file is good!
# or
% buildbot checkconfig /tmp/masterdir
Config file is good!
If the config file has deprecated features (perhaps because you’ve upgraded the buildmaster and need to update the config file to match), they will be announced by checkconfig. In this case, the config file will work, but you should really remove the deprecated items and use the recommended replacements instead:
% buildbot checkconfig master.cfg
/usr/lib/python2.4/site-packages/buildbot/master.py:559: DeprecationWarning: c['sources'] is
deprecated as of 0.7.6 and will be removed by 0.8.0 . Please use c['change_source'] instead.
Config file is good!
If you have errors in your configuration file, checkconfig will let you know:
% buildbot checkconfig master.cfg
Configuration Errors:
c['workers'] must be a list of Worker instances
no workers are configured
builder 'smoketest' uses unknown workers 'linux-002'
If the config file is simply broken, that will be caught too:
% buildbot checkconfig master.cfg
error while parsing config file:
Traceback (most recent call last):
File "/home/buildbot/master/bin/buildbot", line 4, in <module>
runner.run()
File "/home/buildbot/master/buildbot/scripts/runner.py", line 1358, in run
if not doCheckConfig(so):
File "/home/buildbot/master/buildbot/scripts/runner.py", line 1079, in doCheckConfig
return cl.load(quiet=quiet)
File "/home/buildbot/master/buildbot/scripts/checkconfig.py", line 29, in load
self.basedir, self.configFileName)
--- <exception caught here> ---
File "/home/buildbot/master/buildbot/config.py", line 147, in loadConfig
exec f in localDict
exceptions.SyntaxError: invalid syntax (master.cfg, line 52)
Configuration Errors:
error while parsing config file: invalid syntax (master.cfg, line 52) (traceback in logfile)
2.5.1.4. Loading the Config File¶
The config file is only read at specific points in time. It is first read when the buildmaster is launched.
Note
If the configuration is invalid, the master will display the errors in the console output, but will not exit.
Reloading the Config File (reconfig)¶
If you are on the system hosting the buildmaster, you can send a SIGHUP
signal to it: the buildbot tool has a shortcut for this:
buildbot reconfig BASEDIR
This command will show you all of the lines from twistd.log
that relate to the reconfiguration.
If there are any problems during the config-file reload, they will be displayed in these lines.
When reloading the config file, the buildmaster will endeavor to change as little as possible about the running system.
For example, although old status targets may be shut down and new ones started up, any status targets that were not changed since the last time the config file was read will be left running and untouched.
Likewise any Builder
s which have not been changed will be left running.
If a Builder
is modified (say, the build command is changed), this change will apply only for new Build
s.
Any existing build that is currently running or was already queued will be allowed to finish using the old configuration.
Note that if any lock is renamed, old and new instances of the lock will be completely unrelated in the eyes of the buildmaster. This means that the buildmaster will be able to start new builds that would otherwise have waited for the old lock to be released.
Warning
Buildbot’s reconfiguration system is fragile for a few difficult-to-fix reasons:
- Any modules imported by the configuration file are not automatically reloaded. Python modules such as http://pypi.python.org/pypi/lazy-reload may help here, but reloading modules is fraught with subtleties and difficult-to-decipher failure cases.
- During the reconfiguration, active internal objects are divorced from the service hierarchy, leading to tracebacks in the web interface and other components. These are ordinarily transient, but with HTTP connection caching (either by the browser or an intervening proxy) they can last for a long time.
- If the new configuration file is invalid, it is possible for Buildbot’s internal state to be corrupted, leading to undefined results. When this occurs, it is best to restart the master.
- For more advanced configurations, it is impossible for Buildbot to tell if the configuration for a
Builder
orScheduler
has changed, and thus theBuilder
orScheduler
will always be reloaded. This occurs most commonly when a callable is passed as a configuration parameter.
The bbproto project (at https://github.com/dabrahams/bbproto) may help to construct large (multi-file) configurations which can be effectively reloaded and reconfigured.
2.5.2. Global Configuration¶
The keys in this section affect the operations of the buildmaster globally.
- Database Specification
- MQ Specification
- Multi-master mode
- Site Definitions
- Log Handling
- Data Lifetime
- Merging Build Requests
- Prioritizing Builders
- Setting the PB Port for Workers
- Defining Global Properties
- Manhole
- Metrics Options
- Statistics Service
- secretsProviders
- BuildbotNetUsageData
- Users Options
- Input Validation
- Revision Links
- Codebase Generator
2.5.2.1. Database Specification¶
Buildbot requires a connection to a database to maintain certain state information, such as tracking pending build requests.
In the default configuration Buildbot uses a file-based SQLite database, stored in the state.sqlite
file of the master’s base directory.
Important
SQLite3 is perfectly suitable for small setups with a few users. However, it does not scale well with large numbers of builders, workers and users. If you expect your Buildbot to grow over time, it is strongly advised to use a real database server (e.g., MySQL or Postgres).
See the Using A Database Server section for more details.
Override this configuration with the db_url
parameter.
Buildbot accepts a database configuration in a dictionary named db
.
All keys are optional:
c['db'] = {
'db_url' : 'sqlite:///state.sqlite',
}
The db_url
key indicates the database engine to use.
The format of this parameter is completely documented at http://www.sqlalchemy.org/docs/dialects/, but is generally of the form:
"driver://[username:password@]host:port/database[?args]"
These parameters can be specified directly in the configuration dictionary, as c['db_url']
and c['db_poll_interval']
, although this method is deprecated.
The following sections give additional information for particular database backends:
SQLite¶
For sqlite databases, since there is no host and port, relative paths are specified with sqlite:///
and absolute paths with sqlite:////
.
Examples:
c['db_url'] = "sqlite:///state.sqlite"
SQLite requires no special configuration.
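Side by side, the two path forms look like this (the absolute path shown is illustrative):
# relative to the master's base directory (three slashes)
c['db_url'] = "sqlite:///state.sqlite"

# absolute path (four slashes)
c['db_url'] = "sqlite:////var/lib/buildbot/state.sqlite"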
MySQL¶
c['db_url'] = "mysql://username:password@example.com/database_name?max_idle=300"
The max_idle
argument for MySQL connections is unique to Buildbot, and should be set to something less than the wait_timeout
configured for your server.
This controls the SQLAlchemy pool_recycle
parameter, which defaults to no timeout.
Setting this parameter ensures that connections are closed and re-opened after the configured amount of idle time.
If you see errors such as _mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away')
, this means your max_idle
setting is probably too high.
show global variables like 'wait_timeout';
will show what the currently configured wait_timeout
is on your MySQL server.
Buildbot requires use_unicode=True and charset=utf8, and will add them automatically, so they do not need to be specified in db_url.
MySQL defaults to the MyISAM storage engine, but this can be overridden with the storage_engine
URL argument.
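For example, to select InnoDB instead (a sketch; the host and credentials are illustrative):
c['db_url'] = "mysql://username:password@example.com/database_name?storage_engine=InnoDB"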
Postgres¶
c['db_url'] = "postgresql://username:password@hostname/dbname"
PostgreSQL requires no special configuration.
2.5.2.2. MQ Specification¶
Buildbot uses a message-queueing system to handle communication within the master. Messages are used to indicate events within the master, and components that are interested in those events arrange to receive them.
The message queueing implementation is configured as a dictionary in the mq
option.
The type
key describes the type of MQ implementation to be used.
Note that the implementation type cannot be changed in a reconfig.
The available implementation types are described in the following sections.
Simple¶
c['mq'] = {
'type' : 'simple',
'debug' : False,
}
This is the default MQ implementation. Similar to SQLite, it has no additional software dependencies, but does not support multi-master mode.
Note that this implementation also does not support message persistence across a restart of the master. For example, if a change is received, but the master shuts down before the schedulers can create build requests for it, then those schedulers will not be notified of the change when the master starts again.
The debug
key, which defaults to False, can be used to enable logging of every message produced on this master.
Wamp¶
Note
At the moment, wamp is the only message queue implementation for multimaster.
It has been chosen because it is the only message queue with very solid support for Twisted. Other, more common message queue systems, like RabbitMQ (using the AMQP protocol), do not have a convincing Twisted driver; they would have to run on threads, which would add a significant performance overhead.
c['mq'] = {
'type' : 'wamp',
'router_url': 'ws://localhost:8080/ws',
'realm': 'realm1',
# valid are: none, critical, error, warn, info, debug, trace
'wamp_debug_level' : 'error'
}
This is an MQ implementation using the wamp protocol. It uses the Python Autobahn wamp client library and is fully asynchronous (no use of threads). To use this implementation, you need a wamp router like Crossbar.
Please refer to Crossbar documentation for more details, but the default Crossbar setup will just work with Buildbot, provided you use the example mq
configuration above, and start Crossbar with:
# of course, you should work in a virtualenv...
pip install crossbar
crossbar init
crossbar start
The implementation does not yet support wamp authentication. This MQ allows Buildbot to run in multi-master mode.
Note that this implementation also does not support message persistence across a restart of the master. For example, if a change is received, but the master shuts down before the schedulers can create build requests for it, then those schedulers will not be notified of the change when the master starts again.
- router_url (mandatory): points to your router's websocket URL. Buildbot only supports wamp over websocket, which is a sub-protocol of HTTP. SSL is supported by using wss:// instead of ws://.
- realm (optional, defaults to buildbot): defines the wamp realm to use for your buildbot messages.
- wamp_debug_level (optional, defaults to error): defines the log level of autobahn.
You must use a router with a very reliable connection to the master. If the wamp connection is lost for some reason, the master will stop and should be restarted via a process manager.
2.5.2.3. Multi-master mode¶
See Multimaster for details on the Multi-master mode in Buildbot Nine.
By default, Buildbot makes coherency checks that prevent typos in your master.cfg. It makes sure schedulers are not referencing unknown builders, and enforces that there is at least one builder.
In the case of an asymmetric multimaster, those coherency checks can be harmful and prevent you from implementing what you want. For example, you might want one master dedicated to the UI, so that a big load generated by builds will not impact page load times.
To enable multi-master mode in this configuration, you will need to set the multiMaster
option so that buildbot doesn’t warn about missing schedulers or builders.
# Enable multiMaster mode; disables warnings about unknown builders and
# schedulers
c['multiMaster'] = True
c['db'] = {
'db_url' : 'mysql://...',
}
c['mq'] = { # Need to enable multimaster aware mq. Wamp is the only option for now.
'type' : 'wamp',
'router_url': 'ws://localhost:8080',
'realm': 'realm1',
# valid are: none, critical, error, warn, info, debug, trace
'wamp_debug_level' : 'error'
}
2.5.2.4. Site Definitions¶
Three basic settings describe the buildmaster in status reports:
c['title'] = "Buildbot"
c['titleURL'] = "http://buildbot.sourceforge.net/"
title
is a short string that will appear at the top of this buildbot installation’s home page (linked to the titleURL
).
titleURL
is a URL string that must end with a slash (/
).
HTML status displays will show title
as a link to titleURL
.
This URL is often used to provide a link from buildbot HTML pages to your project’s home page.
The buildbotURL
string should point to the location where the buildbot’s internal web server is visible.
This URL must end with a slash (/
).
When status notices are sent to users (e.g., by email or over IRC), buildbotURL
will be used to create a URL to the specific build or problem that they are being notified about.
2.5.2.5. Log Handling¶
c['logCompressionMethod'] = 'gz'
c['logMaxSize'] = 1024*1024 # 1M
c['logMaxTailSize'] = 32768
c['logEncoding'] = 'utf-8'
The logCompressionLimit
enables compression of build logs on disk for logs that are bigger than the given size, or disables that completely if set to False
.
The default value is 4096, which should be a reasonable default on most file systems.
This setting has no impact on status plugins, and merely affects the required disk space on the master for build logs.
The logCompressionMethod
controls what type of compression is used for build logs.
The default is 'gz', and the other valid options are 'raw' (no compression), 'bz2', and 'lz4' (requires the lz4 package).
Please find below some stats extracted from 50x “trial Pyflakes” runs (results may differ according to log type).
| compression | raw log size | compressed log size | space saving | compression speed |
|-------------|--------------|---------------------|--------------|-------------------|
| bz2         | 2.981 MB     | 0.603 MB            | 79.77%       | 3.433 MB/s        |
| gz          | 2.981 MB     | 0.568 MB            | 80.95%       | 6.604 MB/s        |
| lz4         | 2.981 MB     | 0.844 MB            | 71.68%       | 77.668 MB/s       |
The logMaxSize
parameter sets an upper limit (in bytes) to how large logs from an individual build step can be.
The default value is None, meaning no upper limit to the log size.
Any output exceeding logMaxSize
will be truncated, and a message to this effect will be added to the log’s HEADER channel.
If logMaxSize
is set, and the output from a step exceeds the maximum, the logMaxTailSize
parameter controls how much of the end of the build log will be kept.
The effect of setting this parameter is that the log will contain the first logMaxSize
bytes and the last logMaxTailSize
bytes of output.
Don't set this value too high, as the tail of the log is kept in memory.
The logEncoding
parameter specifies the character encoding to use to decode bytestrings provided as logs.
It defaults to utf-8
, which should work in most cases, but can be overridden if necessary.
In extreme cases, a callable can be specified for this parameter.
It will be called with byte strings, and should return the corresponding Unicode string.
This setting can be overridden for a single build step with the logEncoding
step parameter.
It can also be overridden for a single log file by passing the logEncoding
parameter to addLog
.
2.5.2.6. Data Lifetime¶
Horizons¶
Previously, Buildbot implemented a global configuration for horizons. This is now implemented as a utility Builder, and should be configured via JanitorConfigurator.
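A sketch of such a configuration, under the assumption that JanitorConfigurator takes the arguments described in its own section (the horizon and schedule shown are illustrative):
import datetime
from buildbot.plugins import util

# expire build logs older than four weeks, every Sunday at noon
c['configurators'] = [util.JanitorConfigurator(
    logHorizon=datetime.timedelta(weeks=4),
    hour=12, dayOfWeek=6)]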
Caches¶
c['caches'] = {
'Changes' : 100, # formerly c['changeCacheSize']
'Builds' : 500, # formerly c['buildCacheSize']
'chdicts' : 100,
'BuildRequests' : 10,
'SourceStamps' : 20,
'ssdicts' : 20,
'objectids' : 10,
'usdicts' : 100,
}
The caches
configuration key contains the configuration for Buildbot’s in-memory caches.
These caches keep frequently-used objects in memory to avoid unnecessary trips to the database.
Caches are divided by object type, and each has a configurable maximum size.
The default size for each cache is 1, except where noted below. A value of 1 allows Buildbot to make a number of optimizations without consuming much memory. Larger, busier installations will likely want to increase these values.
The available caches are:
Changes
- the number of change objects to cache in memory. This should be larger than the number of changes that typically arrive in the span of a few minutes, otherwise your schedulers will be reloading changes from the database every time they run. For distributed version control systems, like Git or Hg, several thousand changes may arrive at once, so setting this parameter to something like 10000 isn't unreasonable. This parameter is the same as the deprecated global parameter changeCacheSize. Its default value is 10.
Builds
- the buildCacheSize parameter gives the number of builds for each builder which are cached in memory. This number should be larger than the number of builds required for commonly-used status displays (the waterfall or grid views), so that those displays do not miss the cache on a refresh. This parameter is the same as the deprecated global parameter buildCacheSize. Its default value is 15.
chdicts
- the number of rows from the changes table to cache in memory. This value should be similar to the value for Changes.
BuildRequests
- the number of BuildRequest objects kept in memory. This number should be higher than the typical number of outstanding build requests. If the master ordinarily finds jobs for BuildRequests immediately, you may set a lower value.
SourceStamps
- the number of SourceStamp objects kept in memory. This number should generally be similar to the number of BuildRequests.
ssdicts
- the number of rows from the sourcestamps table to cache in memory. This value should be similar to the value for SourceStamps.
objectids
- the number of object IDs (a means to correlate an object in the Buildbot configuration with an identity in the database) to cache. In this version, object IDs are not looked up often during runtime, so a relatively low value such as 10 is fine.
usdicts
- the number of rows from the users table to cache in memory. Note that for a given user there will be a row for each attribute that user has.
2.5.2.7. Merging Build Requests¶
c['collapseRequests'] = True
This is a global default value for builders’ collapseRequests
parameter, and controls the merging of build requests.
This parameter can be overridden on a per-builder basis. See Collapsing Build Requests for the allowed values for this parameter.
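For example, merging can stay enabled globally while being disabled for a single builder (a sketch; the builder name, worker name, and factory are illustrative):
from buildbot.plugins import util

c['collapseRequests'] = True
c['builders'] = [
    util.BuilderConfig(name='release', workernames=['w1'],
                       factory=release_factory,  # assumed defined elsewhere
                       collapseRequests=False),
]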
2.5.2.8. Prioritizing Builders¶
def prioritizeBuilders(buildmaster, builders):
...
c['prioritizeBuilders'] = prioritizeBuilders
By default, buildbot will attempt to start builds on builders in order, beginning with the builder with the oldest pending request.
Customize this behavior with the prioritizeBuilders
configuration key, which takes a callable.
See Builder Priority Functions for details on this callable.
This parameter controls the order that the build master can start builds, and is useful in situations where there is resource contention between builders, e.g., for a test database. It does not affect the order in which a builder processes the build requests in its queue. For that purpose, see Prioritizing Builds.
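A sketch of such a callable (the builder names are illustrative; builders not listed fall to the lowest priority):
def prioritizeBuilders(buildmaster, builders):
    priorities = {'hotfix': 0, 'release': 1, 'beta': 2}
    builders.sort(key=lambda b: priorities.get(b.name, 3))
    return builders

c['prioritizeBuilders'] = prioritizeBuilders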
2.5.2.9. Setting the PB Port for Workers¶
c['protocols'] = {"pb": {"port": 10000}}
The buildmaster will listen on a TCP port of your choosing for connections from workers. It can also use this port for connections from remote Change Sources, status clients, and debug tools. This port should be visible to the outside world, and you’ll need to tell your worker admins about your choice.
It does not matter which port you pick, as long it is externally visible; however, you should probably use something larger than 1024, since most operating systems don’t allow non-root processes to bind to low-numbered ports. If your buildmaster is behind a firewall or a NAT box of some sort, you may have to configure your firewall to permit inbound connections to this port.
c['protocols']['pb']['port']
can also be used as a connection string, as defined in the ConnectionStrings guide.
This means that you can have the buildmaster listen on a localhost-only port by doing:
c['protocols'] = {"pb": {"port": "tcp:10000:interface=127.0.0.1"}}
This might be useful if you only run workers on the same machine, and they are all configured to contact the buildmaster at localhost:10000
.
Connection strings can also be used to configure workers connecting over TLS. The syntax is then:
c['protocols'] = {"pb": {"port":
    "ssl:9989:privateKey=master.key:certKey=master.crt"}}
Please note that colons in IPv6 addresses must be escaped with a backslash, as must colons and backslashes in paths. Read more about the connection string format in the ConnectionStrings documentation.
See also Worker TLS Configuration
2.5.2.10. Defining Global Properties¶
The properties
configuration key defines a dictionary of properties that will be available to all builds started by the buildmaster:
c['properties'] = {
'Widget-version' : '1.2',
'release-stage' : 'alpha'
}
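Global properties are rendered like any other property; for example, a step might consume one via Interpolate (a sketch; the factory f and the command are illustrative):
from buildbot.plugins import steps, util

f.addStep(steps.ShellCommand(
    command=["echo", util.Interpolate("building Widget %(prop:Widget-version)s")]))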
2.5.2.11. Manhole¶
If you set manhole
to an instance of one of the classes in buildbot.manhole
, you can telnet or ssh into the buildmaster and get an interactive Python shell, which may be useful for debugging buildbot internals.
It is probably only useful for buildbot developers.
It exposes full access to the buildmaster’s account (including the ability to modify and delete files), so it should not be enabled with a weak or easily guessable password.
There are three separate Manhole
classes.
Two of them use SSH, one uses unencrypted telnet.
Two of them use a username+password combination to grant access, one of them uses an SSH-style authorized_keys
file which contains a list of ssh public keys.
Note
Using any Manhole requires that cryptography
and pyasn1
be installed.
These are not part of the normal Buildbot dependencies.
- manhole.AuthorizedKeysManhole
- You construct this with the name of a file that contains one SSH public key per line, just like
~/.ssh/authorized_keys
. If you provide a non-absolute filename, it will be interpreted relative to the buildmaster’s base directory. You must also specify a directory which contains an SSH host key for the Manhole server. - manhole.PasswordManhole
- This one accepts SSH connections but asks for a username and password when authenticating. It accepts only one such pair. You must also specify a directory which contains an SSH host key for the Manhole server.
- manhole.TelnetManhole
- This accepts regular unencrypted telnet connections, and asks for a username/password pair before providing access. Because this username/password is transmitted in the clear, and because Manhole access to the buildmaster is equivalent to granting full shell privileges to both the buildmaster and all the workers (and to all accounts which then run code produced by the workers), it is highly recommended that you use one of the SSH manholes instead.
# some examples:
from buildbot.plugins import util
c['manhole'] = util.AuthorizedKeysManhole(1234, "authorized_keys",
ssh_hostkey_dir="/data/ssh_host_keys/")
c['manhole'] = util.PasswordManhole(1234, "alice", "mysecretpassword",
ssh_hostkey_dir="/data/ssh_host_keys/")
c['manhole'] = util.TelnetManhole(1234, "bob", "snoop_my_password_please")
The Manhole
instance can be configured to listen on a specific port.
You may wish to have this listening port bind to the loopback interface (sometimes known as lo0, localhost, or 127.0.0.1) to restrict access to clients which are running on the same host.
from buildbot.plugins import util
c['manhole'] = util.PasswordManhole("tcp:9999:interface=127.0.0.1","admin","passwd",
ssh_hostkey_dir="/data/ssh_host_keys/")
To have the Manhole
listen on all interfaces, use "tcp:9999"
or simply 9999.
This port specification uses twisted.application.strports
, so you can make it listen on SSL or even UNIX-domain sockets if you want.
Note that using any Manhole
requires that the TwistedConch package be installed.
The buildmaster’s SSH server will use a different host key than the normal sshd running on a typical unix host.
This will cause the ssh client to complain about a host key mismatch, because it does not realize there are two separate servers running on the same host.
To avoid this, use a clause like the following in your .ssh/config
file:
Host remotehost-buildbot
HostName remotehost
HostKeyAlias remotehost-buildbot
Port 9999
# use 'user' if you use PasswordManhole and your name is not 'admin'.
# if you use AuthorizedKeysManhole, this probably doesn't matter.
User admin
Using Manhole¶
After you have connected to a manhole instance, you will find yourself at a Python prompt.
You have access to two objects: master
(the BuildMaster) and status
(the master’s Status object).
Most interesting objects on the master can be reached from these two objects.
To aid in navigation, the show
method is defined.
It displays the non-method attributes of an object.
A manhole session might look like:
>>> show(master)
data attributes of <buildbot.master.BuildMaster instance at 0x7f7a4ab7df38>
basedir : '/home/dustin/code/buildbot/t/buildbot/'...
botmaster : <type 'instance'>
buildCacheSize : None
buildHorizon : None
buildbotURL : http://localhost:8010/
changeCacheSize : None
change_svc : <type 'instance'>
configFileName : master.cfg
db : <class 'buildbot.db.connector.DBConnector'>
db_url : sqlite:///state.sqlite
...
>>> show(master.botmaster.builders['win32'])
data attributes of <Builder ''builder'' at 48963528>
...
>>> win32 = _
>>> win32.category = 'w32'
2.5.2.12. Metrics Options¶
c['metrics'] = dict(log_interval=10, periodic_interval=10)
metrics
can be a dictionary that configures various aspects of the metrics subsystem.
If metrics
is None
, then metrics collection, logging and reporting will be disabled.
log_interval
determines how often metrics should be logged to twistd.log.
It defaults to 60s.
If set to 0 or None
, then logging of metrics will be disabled.
This value can be changed via a reconfig.
periodic_interval
determines how often various non-event based metrics are collected, such as memory usage, uncollectable garbage, reactor delay.
This defaults to 10s.
If set to 0 or None
, then periodic collection of this data is disabled.
This value can also be changed via a reconfig.
Read more about metrics in the Metrics section in the developer documentation.
2.5.2.13. Statistics Service¶
The Statistics Service (stats service for short) supports collecting arbitrary data from within a running Buildbot instance and exporting it to a number of storage backends.
Currently, only InfluxDB is supported as a storage backend.
Also, InfluxDB (or any other storage backend) is not a mandatory dependency.
Buildbot can run without it although StatsService
will be of no use in such a case.
At present, StatsService
can keep track of build properties, build times (start, end, duration) and arbitrary data produced inside Buildbot (more on this later).
Example usage:
captures = [stats.CaptureProperty('Builder1', 'tree-size-KiB'),
stats.CaptureBuildDuration('Builder2')]
c['services'] = []
c['services'].append(stats.StatsService(
storage_backends=[
stats.InfluxStorageService('localhost', 8086, 'root', 'root', 'test', captures)
], name="StatsService"))
The services
configuration value should be initialized as a list and a StatsService
instance should be appended to it as shown in the example above.
Statistics Service¶
- class buildbot.statistics.stats_service.StatsService
This is the main class for the statistics service. It is initialized in the master configuration as shown in the example above. It takes two arguments:
storage_backends
- A list of storage backends (see Storage Backends).
In the example above, stats.InfluxStorageService is an instance of a storage backend. Each storage backend is an instance of a subclass of statsStorageBase.
name
- The name of this service.
yieldMetricsValue
: This method can be used to send arbitrary data for storage. (See Using StatsService.yieldMetricsValue for more information.)
Capture Classes¶
- class buildbot.statistics.capture.CaptureProperty
Instance of this class declares which properties must be captured and sent to the Storage Backends. It takes the following arguments:
builder_name
- The name of builder in which the property is recorded.
property_name
- The name of property needed to be recorded as a statistic.
callback=None
- (Optional) A custom callback function for this class. This callback function should take in two arguments - build_properties (dict) and property_name (str) and return a string that will be sent for storage in the storage backends.
regex=False
- If this is set to
True
, then the property name can be a regular expression. All properties matching this regular expression will be sent for storage.
- class buildbot.statistics.capture.CapturePropertyAllBuilders
Instance of this class declares which properties must be captured on all builders and sent to the Storage Backends. It takes the following arguments:
property_name
- The name of property needed to be recorded as a statistic.
callback=None
- (Optional) A custom callback function for this class. This callback function should take in two arguments - build_properties (dict) and property_name (str) and return a string that will be sent for storage in the storage backends.
regex=False
- If this is set to
True
, then the property name can be a regular expression. All properties matching this regular expression will be sent for storage.
- class buildbot.statistics.capture.CaptureBuildStartTime
Instance of this class declares which builders’ start times are to be captured and sent to Storage Backends. It takes the following arguments:
builder_name
- The name of builder whose times are to be recorded.
callback=None
- (Optional) A custom callback function for this class. This callback function should take in a Python datetime object and return a string that will be sent for storage in the storage backends.
- class buildbot.statistics.capture.CaptureBuildStartTimeAllBuilders
Instance of this class declares start times of all builders to be captured and sent to Storage Backends. It takes the following arguments:
callback=None
- (Optional) A custom callback function for this class. This callback function should take in a Python datetime object and return a string that will be sent for storage in the storage backends.
- class buildbot.statistics.capture.CaptureBuildEndTime
Exactly like CaptureBuildStartTime, except it declares the builders whose end times are to be recorded. The arguments are the same as for CaptureBuildStartTime.
- class buildbot.statistics.capture.CaptureBuildEndTimeAllBuilders
Exactly like CaptureBuildStartTimeAllBuilders, except it declares all builders' end times to be recorded. The arguments are the same as for CaptureBuildStartTimeAllBuilders.
- class buildbot.statistics.capture.CaptureBuildDuration
Instance of this class declares the builders whose build durations are to be recorded. It takes the following arguments:
builder_name
- The name of builder whose times are to be recorded.
report_in='seconds'
- Can be one of three: 'seconds', 'minutes', or 'hours'. This is the unit in which the build time will be reported.
- (Optional) A custom callback function for this class.
This callback function should take in two Python datetime objects - a
start_time
and anend_time
and return a string that will be sent for storage in the storage backends.
- class buildbot.statistics.capture.CaptureBuildDurationAllBuilders
Instance of this class declares build durations to be recorded for all builders. It takes the following arguments:
report_in='seconds'
- Can be one of three: 'seconds', 'minutes', or 'hours'. This is the unit in which the build time will be reported.
- (Optional) A custom callback function for this class.
This callback function should take in two Python datetime objects - a
start_time
and anend_time
and return a string that will be sent for storage in the storage backends.
- class buildbot.statistics.capture.CaptureData
Instance of this capture class is for capturing arbitrary data that is not stored as build-data. Needs to be used in conjunction with
yieldMetricsValue
(See Using StatsService.yieldMetricsValue). Takes the following arguments:
data_name
- The name of data to be captured.
Same as in
yieldMetricsValue
. builder_name
- The name of builder whose times are to be recorded.
callback=None
- The callback function for this class.
This callback receives the data sent to
yieldMetricsValue
aspost_data
(See Using StatsService.yieldMetricsValue). It must return a string that is to be sent to the storage backends for storage.
- class buildbot.statistics.capture.CaptureDataAllBuilders
Instance of this capture class for capturing arbitrary data that is not stored as build-data on all builders. Needs to be used in conjunction with
yieldMetricsValue
(See Using StatsService.yieldMetricsValue). Takes the following arguments:
data_name
- The name of data to be captured.
Same as in
yieldMetricsValue
. callback=None
- The callback function for this class.
This callback receives the data sent to
yieldMetricsValue
aspost_data
(See Using StatsService.yieldMetricsValue). It must return a string that is to be sent to the storage backends for storage.
Using StatsService.yieldMetricsValue¶
Advanced users can modify BuildSteps
to use StatsService.yieldMetricsValue
which will send arbitrary data for storage to the StatsService
.
It takes the following arguments:
data_name
- The name of the data being sent for storage.
post_data
- A dictionary of key-value pairs that is sent for storage. The keys will act as columns in a database and the value is stored under that column.
buildid
- The integer build id of the current build. Obtainable in all
BuildSteps
.
Along with using yieldMetricsValue
, the user will also need to use the CaptureData
capture class.
As an example, we can add the following to a build step:
yieldMetricsValue('test_data_name', {'some_data': 'some_value'}, buildid)
Then, we can add in the master configuration a capture class like this:
captures = [CaptureData('test_data_name', 'Builder1')]
Pass this captures
list to a storage backend (as shown in the example at the top of this section) for capturing this data.
Storage Backends¶
Storage backends are responsible for storing any statistics data sent to them.
A storage backend will generally be some sort of database server running on a machine.
(Note: This machine may be different from the one running BuildMaster
)
Currently, only InfluxDB is supported as a storage backend.
- class buildbot.statistics.storage_backends.influxdb_client.InfluxStorageService
This class is a Buildbot client to the InfluxDB storage backend. InfluxDB is a distributed, time series database that employs a key-value pair storage system.
It requires the following arguments:
url
- The URL where the service is running.
port
- The port on which the service is listening.
user
- Username of an InfluxDB user.
password
- Password for
user
. db
- The name of database to be used.
captures
- A list of objects of Capture Classes. This tells which statistics are to be stored in this storage backend.
name=None
- (Optional) The name of this storage backend.
2.5.2.14. secretsProviders¶
See Secret Management for details on secret concepts.
Example usage:
c['secretsProviders'] = [ .. ]
secretsProviders
is a list of secret storage providers.
See Secret Management to configure an available secret storage provider.
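For instance, reusing the file-based provider from the Secret Management chapter (the directory path is illustrative):
from buildbot.plugins import secrets

c['secretsProviders'] = [secrets.SecretInAFile(dirname="/path/toSecretsFiles")]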
2.5.2.15. BuildbotNetUsageData¶
Since buildbot 0.9.0, buildbot has a simple feature which sends usage analysis info to buildbot.net. This is very important for buildbot developers to understand how the community is using the tools. It allows them to better prioritize issues and to understand which plugins are actually being used. It is also a tool for deciding whether to keep support for very old tools. For example, buildbot contains support for the venerable CVS, but we have no information on whether it actually works beyond the unit tests. We rely on the community to test and report issues with the old features.
With BuildbotNetUsageData, we can know exactly which combinations of plugins are working together, how much people are customizing plugins, and which versions of the main dependencies people run.
We take your privacy very seriously.
BuildbotNetUsageData will never send information specific to your Code or Intellectual Property: no repository URLs, shell command values, host names, IP addresses or custom class names. If it does, then this is a bug; please report it.
We still need to track a unique number per installation. This is done by taking a SHA1 hash of the master's hostname, installation path and FQDN. Using a secure hash means there is no way of knowing the hostname, path and FQDN given the hash, but there is still a different hash for each master.
You can see exactly what is sent in the master’s twisted.log. Usage data is sent every time the master is started.
BuildbotNetUsageData can be configured with 4 values:
c['buildbotNetUsageData'] = None
disables the feature.
c['buildbotNetUsageData'] = 'basic'
sends the basic information to buildbot, including:
- versions of buildbot, python and twisted
- platform information (CPU, OS, distribution, python flavor (i.e. CPython vs PyPy))
- mq and database type (mysql or sqlite?)
- www plugins usage
- Plugin usage: this counts the number of times each Buildbot class is used in your configuration. This counts workers, builders, steps, schedulers, and change sources. If the plugin is subclassed, it will be prefixed with a >
example of basic report (for the metabuildbot):
{
  'versions': {
    'Python': '2.7.6',
    'Twisted': '15.5.0',
    'Buildbot': '0.9.0rc2-176-g5fa9dbf'
  },
  'platform': {
    'machine': 'x86_64',
    'python_implementation': 'CPython',
    'version': '#140-Ubuntu SMP Mon Jul',
    'processor': 'x86_64',
    'distro:': ('Ubuntu', '14.04', 'trusty')
  },
  'db': 'sqlite',
  'mq': 'simple',
  'plugins': {
    'buildbot.schedulers.forcesched.ForceScheduler': 2,
    'buildbot.schedulers.triggerable.Triggerable': 1,
    'buildbot.config.BuilderConfig': 4,
    'buildbot.schedulers.basic.AnyBranchScheduler': 2,
    'buildbot.steps.source.git.Git': 4,
    '>>buildbot.steps.trigger.Trigger': 2,
    '>>>buildbot.worker.base.Worker': 4,
    'buildbot.reporters.irc.IRC': 1,
    '>>>buildbot.process.buildstep.LoggingBuildStep': 2
  },
  'www_plugins': ['buildbot_travis', 'waterfall_view']
}
c['buildbotNetUsageData'] = 'full'
sends the basic information plus additional information:
- configuration of each builder: how the steps are arranged together. For example:
{
  'builders': [
    ['buildbot.steps.source.git.Git', '>>>buildbot.process.buildstep.LoggingBuildStep'],
    ['buildbot.steps.source.git.Git', '>>buildbot.steps.trigger.Trigger'],
    ['buildbot.steps.source.git.Git', '>>>buildbot.process.buildstep.LoggingBuildStep'],
    ['buildbot.steps.source.git.Git', '>>buildbot.steps.trigger.Trigger']
  ]
}
c['buildbotNetUsageData'] = myCustomFunction
. You can also specify exactly what to send using a callback. The custom function will take the generated data from the full report in the form of a dictionary, and return a customized report as a JSON-able dictionary. You can use this to filter out any information you don't want to disclose, and you can use a custom http_proxy environment variable in order not to send any data while developing your callback.
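A sketch of such a callback (which fields to forward is up to you; this one keeps only the version information):
def myCustomFunction(report):
    # 'report' is the full-report dictionary; return a JSON-able dict
    return {'versions': report['versions']}

c['buildbotNetUsageData'] = myCustomFunction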
2.5.2.16. Users Options¶
from buildbot.plugins import util
c['user_managers'] = []
c['user_managers'].append(util.CommandlineUserManager(username="user",
passwd="userpw",
port=9990))
user_managers
contains a list of ways to manually manage User Objects within Buildbot (see User Objects).
Currently implemented is a commandline tool buildbot user, described at length in user
.
In the future, a web client will also be able to manage User Objects and their attributes.
As shown above, to enable the buildbot user tool, you must initialize a CommandlineUserManager instance in your master.cfg. CommandlineUserManager instances require the following arguments:
username
- This is the username that will be registered on the PB connection and needs to be used when calling buildbot user.
passwd
- This is the password that will be registered on the PB connection and needs to be used when calling buildbot user.
port
- The PB connection port. It must be different from c['protocols']['pb']['port'] and must be specified when calling buildbot user.
2.5.2.17. Input Validation¶
import re
c['validation'] = {
'branch' : re.compile(r'^[\w.+/~-]*$'),
'revision' : re.compile(r'^[ \w\.\-\/]*$'),
'property_name' : re.compile(r'^[\w\.\-\/\~:]*$'),
'property_value' : re.compile(r'^[\w\.\-\/\~:]*$'),
}
This option configures the validation applied to user inputs of various types. This validation is important since these values are often included in command-line arguments executed on workers. Allowing arbitrary input from untrusted users may raise security concerns.
The keys describe the type of input validated; the values are compiled regular expressions against which the input will be matched. The defaults for each type of input are those given in the example, above.
2.5.2.18. Revision Links¶
The revlink
parameter is used to create links from revision IDs in the web status to a web-view of your source control system.
The parameter’s value must be a callable.
By default, Buildbot is configured to generate revlinks for a number of open source hosting platforms.
The callable takes two arguments, the revision id and the repository, and should return a URL pointing to the revision. Note that the revision id may not always be in the form you expect, so code defensively. In particular, a revision of "??" may be supplied when no other information is available.
Note that SourceStamp
s that are not created from version-control changes (e.g., those created by a Nightly
or Periodic
scheduler) may have an empty repository string, if the repository is not known to the scheduler.
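A sketch of such a callable (the hosting URL scheme is illustrative; returning None means no link could be built):
def example_revlink(rev, repo):
    # code defensively: rev may be "??" and repo may be empty
    if repo and repo.startswith('https://example.org/git/') and rev != '??':
        return '%s/commit/%s' % (repo.rstrip('/'), rev)
    return None

c['revlink'] = example_revlink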
Revision Link Helpers¶
Buildbot provides two helpers for generating revision links.
buildbot.revlinks.RevlinkMatch takes a list of regular expressions and replacement text. The regular expressions should all have the same number of capture groups. The replacement text should have sed-style references to those capture groups (i.e. \1 for the first capture group), and a single %s reference for the revision ID. The repository given is tried against each regular expression in turn. The results are then substituted into the replacement text, along with the revision ID, to obtain the revision link.
from buildbot.plugins import util
c['revlink'] = util.RevlinkMatch([r'git://notmuchmail.org/git/(.*)'],
r'http://git.notmuchmail.org/git/\1/commit/%s')
buildbot.revlinks.RevlinkMultiplexer
takes a list of revision link callables, and tries each in turn, returning the first successful match.
2.5.2.19. Codebase Generator¶
all_repositories = {
r'https://hg/hg/mailsuite/mailclient': 'mailexe',
r'https://hg/hg/mailsuite/mapilib': 'mapilib',
r'https://hg/hg/mailsuite/imaplib': 'imaplib',
r'https://github.com/mailinc/mailsuite/mailclient': 'mailexe',
r'https://github.com/mailinc/mailsuite/mapilib': 'mapilib',
r'https://github.com/mailinc/mailsuite/imaplib': 'imaplib',
}
def codebaseGenerator(chdict):
return all_repositories[chdict['repository']]
c['codebaseGenerator'] = codebaseGenerator
For any incoming change, the codebase is set to ''. This codebase value is sufficient if all changes come from the same repository (or clones). If changes come from different repositories, extra processing is needed to determine the codebase for each incoming change. The codebase is then a logical name for the combination of repository, branch, etc.
The codebaseGenerator accepts a change dictionary as produced by the buildbot.db.changes.ChangesConnectorComponent
, with a changeid equal to None.
2.5.3. Change Sources and Changes¶
- How Different VC Systems Specify Sources
- Choosing a Change Source
- Configuring Change Sources
- Mail-parsing ChangeSources
- PBChangeSource
- P4Source
- SVNPoller
- Bzr Poller
- GitPoller
- HgPoller
- GitHubPullrequestPoller
- BitbucketPullrequestPoller
- GerritChangeSource
- GerritEventLogPoller
- GerritChangeFilter
- Change Hooks (HTTP Notifications)
A change source is the mechanism which is used by Buildbot to get information about new changes in a repository maintained by a Version Control System.
These change sources fall broadly into two categories: pollers which periodically check the repository for updates; and hooks, where the repository is configured to notify Buildbot whenever an update occurs.
A Change is an abstract way that Buildbot uses to represent changes in any of the Version Control Systems it supports. It contains just enough information to acquire a specific version of the source tree when needed. This usually happens as one of the first steps in a Build.
This concept does not map perfectly to every version control system. For example, for CVS, Buildbot must guess that version updates made to multiple files within a short time represent a single change.
Changes can be provided by a variety of ChangeSource types, although any given project will typically have only a single ChangeSource active.
2.5.3.1. How Different VC Systems Specify Sources¶
For CVS, the static specifications are repository and module.
In addition to those, each build uses a timestamp (or omits the timestamp to mean the latest) and a branch tag (which defaults to HEAD).
These parameters collectively specify a set of sources from which a build may be performed.
Subversion combines the repository, module, and branch into a single Subversion URL parameter.
Within that scope, source checkouts can be specified by a numeric revision number (a repository-wide monotonically-increasing marker, such that each transaction that changes the repository is indexed by a different revision number), or a revision timestamp.
When branches are used, the repository and module form a static baseURL, while each build has a revision number and a branch (which defaults to a statically-specified defaultBranch).
The baseURL and branch are simply concatenated together to derive the repourl to use for the checkout.
Perforce is similar.
The server is specified through a P4PORT parameter.
Module and branch are specified in a single depot path, and revisions are depot-wide.
When branches are used, the p4base and defaultBranch are concatenated together to produce the depot path.
Bzr (which is a descendant of Arch/Bazaar, and is frequently referred to as “Bazaar”) has the same sort of repository-vs-workspace model as Arch, but the repository data can either be stored inside the working directory or kept elsewhere (either on the same machine or on an entirely different machine). For the purposes of Buildbot (which never commits changes), the repository is specified with a URL and a revision number.
The most common way to obtain read-only access to a bzr tree is via HTTP, simply by making the repository visible through a web server like Apache.
Bzr can also use FTP and SFTP servers, if the worker process has sufficient privileges to access them.
Higher performance can be obtained by running a special Bazaar-specific server.
None of these matter to the buildbot: the repository URL just has to match the kind of server being used.
The repoURL argument provides the location of the repository.
Branches are expressed as subdirectories of the main central repository, which means that if branches are being used, the BZR step is given a baseURL and defaultBranch instead of getting the repoURL argument.
Darcs doesn't really have the notion of a single master repository, nor does it really have branches.
In Darcs, each working directory is also a repository, and there are operations to push and pull patches from one of these repositories to another.
For the Buildbot’s purposes, all you need to do is specify the URL of a repository that you want to build from.
The worker will then pull the latest patches from that repository and build them.
Multiple branches are implemented by using multiple repositories (possibly living on the same server).
Builders which use Darcs therefore have a static repourl which specifies the location of the repository.
If branches are being used, the source Step is instead configured with a baseURL and a defaultBranch, and the two strings are simply concatenated together to obtain the repository's URL.
Each build then has a specific branch which replaces defaultBranch, or just uses the default one.
Instead of a revision number, each build can have a context, which is a string that records all the patches that are present in a given tree (this is the output of darcs changes --context, and is considerably less concise than, e.g., Subversion's revision number, but the patch-reordering flexibility of Darcs makes it impossible to provide a shorter useful specification).
Mercurial follows a decentralized model, and each repository can have several branches and tags.
The source Step is configured with a static repourl which specifies the location of the repository.
Branches are configured with the defaultBranch argument.
The revision is the hash identifier returned by hg identify.
Git also follows a decentralized model, and each repository can have several branches and tags.
The source Step is configured with a static repourl which specifies the location of the repository.
In addition, an optional branch parameter can be specified to check out code from a specific branch instead of the default master branch.
The revision is specified as a SHA1 hash as returned by e.g. git rev-parse.
No attempt is made to ensure that the specified revision is actually a subset of the specified branch.
Monotone is another VC system that follows a decentralized model where each repository can have several branches and tags.
The source Step is configured with static repourl and branch parameters, which specify the location of the repository and the branch to use.
The revision is specified as a SHA1 hash as returned by e.g. mtn automate select w:.
No attempt is made to ensure that the specified revision is actually a subset of the specified branch.
Comparison¶
Name | Change | Revision | Branches |
---|---|---|---|
CVS | patch [1] | timestamp | unnamed |
Subversion | revision | integer | directories |
Git | commit | sha1 hash | named refs |
Mercurial | changeset | sha1 hash | different repos or (permanently) named commits |
Darcs | ? | none [2] | different repos |
Bazaar | ? | ? | ? |
Perforce | ? | ? | ? |
BitKeeper | changeset | ? | different repos |
- [1] note that CVS only tracks patches to individual files. Buildbot tries to recognize coordinated changes to multiple files by correlating change times.
- [2] Darcs does not have a concise way of representing a particular revision of the source.
Tree Stability¶
Changes tend to arrive at a buildmaster in bursts. In many cases, these bursts of changes are meant to be taken together. For example, a developer may have pushed multiple commits to a DVCS that comprise the same new feature or bugfix. To avoid trying to build every change, Buildbot supports the notion of tree stability, by waiting for a burst of changes to finish before starting to schedule builds. This is implemented as a timer, with builds not scheduled until no changes have occurred for the duration of the timer.
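For example, a scheduler configured with a treeStableTimer will wait until the tree has been quiet for that long before starting a build (a sketch; the scheduler and builder names are hypothetical):

from buildbot.plugins import schedulers

c['schedulers'] = [
    schedulers.SingleBranchScheduler(name="quiet-master",
                                     branch="master",
                                     treeStableTimer=5 * 60,  # wait for 5 minutes of quiet
                                     builderNames=["runtests"]),
]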
2.5.3.2. Choosing a Change Source¶
There are a variety of ChangeSource classes available, some of which are meant to be used in conjunction with other tools to deliver Change events from the VC repository to the buildmaster.
As a quick guide, here is a list of VC systems and the ChangeSources that might be useful with them.
Note that some of these modules are in Buildbot’s master/contrib directory, meaning that they have been offered by other users in hopes they may be useful, and might require some additional work to make them functional.
CVS
- CVSMaildirSource (watching mail sent by the master/contrib/buildbot_cvs_mail.py script)
- PBChangeSource (listening for connections from buildbot sendchange run in a loginfo script)
- PBChangeSource (listening for connections from a long-running master/contrib/viewcvspoll.py polling process which examines the ViewCVS database directly)
- Change Hooks in WebStatus
SVN
- PBChangeSource (listening for connections from master/contrib/svn_buildbot.py run in a postcommit script)
- PBChangeSource (listening for connections from a long-running master/contrib/svn_watcher.py or master/contrib/svnpoller.py polling process)
- SVNCommitEmailMaildirSource (watching for email sent by commit-email.pl)
- SVNPoller (polling the SVN repository)
- Change Hooks in WebStatus
Darcs
- PBChangeSource (listening for connections from master/contrib/darcs_buildbot.py in a commit script)
- Change Hooks in WebStatus
Mercurial
- Change Hooks in WebStatus (including master/contrib/hgbuildbot.py, configurable in a changegroup hook)
- BitBucket change hook (specifically designed for BitBucket notifications, but requiring a publicly-accessible WebStatus)
- HgPoller (polling a remote Mercurial repository)
- BitbucketPullrequestPoller (polling Bitbucket for pull requests)
- Mail-parsing ChangeSources, though there are no ready-to-use recipes
Bzr (the newer Bazaar)
- PBChangeSource (listening for connections from master/contrib/bzr_buildbot.py run in a post-change-branch-tip or commit hook)
- BzrPoller (polling the Bzr repository)
- Change Hooks in WebStatus
Git
- PBChangeSource (listening for connections from master/contrib/git_buildbot.py run in the post-receive hook)
- PBChangeSource (listening for connections from master/contrib/github_buildbot.py, which listens for notifications from GitHub)
- Change Hooks in WebStatus
- GitHub change hook (specifically designed for GitHub notifications, but requiring a publicly-accessible WebStatus)
- BitBucket change hook (specifically designed for BitBucket notifications, but requiring a publicly-accessible WebStatus)
- GitPoller (polling a remote Git repository)
- GitHubPullrequestPoller (polling the GitHub API for pull requests)
- BitbucketPullrequestPoller (polling Bitbucket for pull requests)
Repo/Gerrit
- GerritChangeSource (connects to Gerrit via SSH to get a live stream of changes)
- GerritEventLogPoller (connects to Gerrit via HTTP with the help of the events-log plugin)
Monotone
- PBChangeSource (listening for connections from monotone-buildbot.lua, which is available with Monotone)
All VC systems can be driven by a PBChangeSource and the buildbot sendchange tool run from some form of commit script.
If you write an email parsing function, they can also all be driven by a suitable mail-parsing source.
Additionally, handlers for web-based notification (e.g., from GitHub) can be used with WebStatus' change_hook module.
The interface is simple, so adding your own handlers (and sharing!) should be a breeze.
See Change Source Index for a full list of change sources.
2.5.3.3. Configuring Change Sources¶
The change_source configuration key holds all active change sources for the configuration.
Most configurations have a single ChangeSource, watching only a single tree, e.g.,
from buildbot.plugins import changes
c['change_source'] = changes.PBChangeSource()
For more advanced configurations, the parameter can be a list of change sources:
source1 = ...
source2 = ...
c['change_source'] = [
source1, source2
]
Repository and Project¶
ChangeSources will, in general, automatically provide the proper repository attribute for any changes they produce.
For systems which operate on URL-like specifiers, this is a repository URL.
Other ChangeSources adapt the concept as necessary.
Many ChangeSources allow you to specify a project as well.
This attribute is useful when building from several distinct codebases in the same buildmaster: the project string can serve to differentiate the different codebases.
Schedulers can filter on project, so you can configure different builders to run for each project.
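For example, a scheduler can be restricted to changes from a single project via a change filter (a sketch; the project and builder names are hypothetical):

from buildbot.plugins import schedulers, util

c['schedulers'] = [
    schedulers.SingleBranchScheduler(
        name="libfoo-scheduler",
        change_filter=util.ChangeFilter(project='libfoo'),
        builderNames=["libfoo-linux"]),
]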
2.5.3.4. Mail-parsing ChangeSources¶
Many projects publish information about changes to their source tree by sending an email message out to a mailing list, frequently named PROJECT-commits or PROJECT-changes.
Each message usually contains a description of the change (who made the change, which files were affected) and sometimes a copy of the diff.
Humans can subscribe to this list to stay informed about what’s happening to the source tree.
The Buildbot can also be subscribed to a -commits mailing list, and can trigger builds in response to Changes that it hears about. The buildmaster admin needs to arrange for these email messages to arrive in a place where the buildmaster can find them, and configure the buildmaster to parse the messages correctly. Once that is in place, the email parser will create Change objects and deliver them to the schedulers (see Schedulers) just like any other ChangeSource.
There are two components to setting up an email-based ChangeSource. The first is to route the email messages to the buildmaster, which is done by dropping them into a maildir. The second is to actually parse the messages, which is highly dependent upon the tool that was used to create them. Each VC system has a collection of favorite change-emailing tools, and each has a slightly different format, so each has a different parsing function. There is a separate ChangeSource variant for each parsing function.
Once you've chosen a maildir location and a parsing function, create the change source and put it in change_source:
from buildbot.plugins import changes
c['change_source'] = changes.CVSMaildirSource("~/maildir-buildbot",
prefix="/trunk/")
Subscribing the Buildmaster¶
The recommended way to install the Buildbot is to create a dedicated account for the buildmaster. If you do this, the account will probably have a distinct email address (perhaps buildmaster@example.org). Then just arrange for this account’s email to be delivered to a suitable maildir (described in the next section).
If the Buildbot does not have its own account, extension addresses can be used to distinguish between email intended for the buildmaster and email intended for the rest of the account.
In most modern MTAs, an account such as account@example.org has control over every email address at example.org which begins with "account", such that email addressed to account-foo@example.org can be delivered to a different destination than account-bar@example.org.
qmail does this by using separate .qmail files for the two destinations (.qmail-foo and .qmail-bar, with .qmail controlling the base address and .qmail-default controlling all other extensions).
Other MTAs have similar mechanisms.
Thus you can assign an extension address like foo-buildmaster@example.org to the buildmaster, and retain foo@example.org for your own use.
Using Maildirs¶
A maildir is a simple directory structure, originally developed for qmail, that allows safe atomic updates without locking.
Create a base directory with three subdirectories: new, tmp, and cur.
When messages arrive, they are put into a uniquely-named file (using pids, timestamps, and random numbers) in tmp. When the file is complete, it is atomically renamed into new. Eventually the buildmaster notices the file in new, reads and parses the contents, then moves it into cur. A cronjob can be used to delete files in cur at leisure.
Maildirs are frequently created with the maildirmake tool, but a simple mkdir -p ~/MAILDIR/{cur,new,tmp} is pretty much equivalent.
Many modern MTAs can deliver directly to maildirs.
The usual .forward or .procmailrc syntax is to name the base directory with a trailing slash, so something like ~/MAILDIR/.
qmail and postfix are maildir-capable MTAs, and procmail is a maildir-capable MDA (Mail Delivery Agent).
Here is an example procmail config, located in ~/.procmailrc:
# .procmailrc
# routes incoming mail to appropriate mailboxes
PATH=/usr/bin:/usr/local/bin
MAILDIR=$HOME/Mail
LOGFILE=.procmail_log
SHELL=/bin/sh
:0
*
new
If procmail is not set up on a system-wide basis, then the following one-line .forward file will invoke it.
!/usr/bin/procmail
For MTAs which cannot put files into maildirs directly, the safecat tool can be executed from a .forward file to accomplish the same thing.
The buildmaster uses the Linux DNotify facility to receive immediate notification when the maildir's new directory has changed.
When this facility is not available, it polls the directory for new messages, every 10 seconds by default.
Parsing Email Change Messages¶
The second component of setting up an email-based ChangeSource is to parse the actual notices.
This is highly dependent upon the VC system and commit script in use.
A couple of common tools used to create these change emails, along with the Buildbot tools to parse them, are:
- CVS
  - Buildbot CVS MailNotifier: CVSMaildirSource
- SVN
  - svnmailer (http://opensource.perlig.de/en/svnmailer/)
  - commit-email.pl: SVNCommitEmailMaildirSource
- Bzr
  - Launchpad: BzrLaunchpadEmailMaildirSource
- Mercurial
  - NotifyExtension (https://www.mercurial-scm.org/wiki/NotifyExtension)
- Git
The following sections describe the parsers available for each of these tools.
Most of these parsers accept a prefix= argument, which is used to limit the set of files that the buildmaster pays attention to.
This is most useful for systems like CVS and SVN which put multiple projects in a single repository (or use repository names to indicate branches).
Each filename that appears in the email is tested against the prefix: if the filename does not start with the prefix, the file is ignored.
If the filename does start with the prefix, that prefix is stripped from the filename before any further processing is done.
Thus the prefix usually ends with a slash.
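For example (a sketch), with prefix="trunk/", an email mentioning trunk/src/main.c yields the tree-relative file src/main.c, while branches/1.0/src/main.c is ignored entirely:

from buildbot.plugins import changes

c['change_source'] = changes.SVNCommitEmailMaildirSource("~/maildir-buildbot",
                                                         prefix="trunk/")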
CVSMaildirSource¶
class buildbot.changes.mail.CVSMaildirSource¶
This parser works with the master/contrib/buildbot_cvs_mail.py script.
The script sends an email containing all the files submitted in one directory.
It is invoked by using the CVSROOT/loginfo facility.
The Buildbot's CVSMaildirSource knows how to parse these messages and turn them into Change objects.
It takes the directory name of the maildir root.
For example:
from buildbot.plugins import changes
c['change_source'] = changes.CVSMaildirSource("/home/buildbot/Mail")
CVS must be configured to invoke the buildbot_cvs_mail.py script when files are checked in. This is done via the CVS loginfo configuration file.
To update this, first do:
cvs checkout CVSROOT
cd to the CVSROOT directory and edit the file loginfo, adding a line like:
SomeModule /cvsroot/CVSROOT/buildbot_cvs_mail.py --cvsroot :ext:example.com:/cvsroot -e buildbot -P SomeModule %{sVv}
Note
For cvs version 1.12.x, the --path %p option is required.
Versions 1.11.x and 1.12.x report the directory path differently.
In the above example, the buildbot_cvs_mail.py script is placed under /cvsroot/CVSROOT, but it can be anywhere.
Run the script with --help to see all the options.
At the very least, the options -e (email) and -P (project) should be specified.
The line must end with %{sVv}.
This is expanded to the files that were modified.
Additional entries can be added to support more modules.
See buildbot_cvs_mail.py --help for more information on the available options.
SVNCommitEmailMaildirSource¶
class buildbot.changes.mail.SVNCommitEmailMaildirSource¶
SVNCommitEmailMaildirSource parses messages sent out by the commit-email.pl script, which is included in the Subversion distribution.
It does not currently handle branches: all of the Change objects that it creates will be associated with the default (i.e. trunk) branch.
from buildbot.plugins import changes
c['change_source'] = changes.SVNCommitEmailMaildirSource("~/maildir-buildbot")
BzrLaunchpadEmailMaildirSource¶
class buildbot.changes.mail.BzrLaunchpadEmailMaildirSource¶
BzrLaunchpadEmailMaildirSource parses the mails that are sent to addresses that subscribe to branch revision notifications for a bzr branch hosted on Launchpad.
The branch name defaults to lp:Launchpad path, for example lp:~maria-captains/maria/5.1.
If only a single branch is used, the default branch name can be changed by setting defaultBranch.
For multiple branches, pass a dictionary as the value of the branchMap option to map specific repository paths to specific branch names (see the example below). The leading lp: prefix of the path is optional.
The prefix option is not supported (it is silently ignored). Use branchMap and defaultBranch instead to assign changes to branches (and just do not subscribe the Buildbot to branches that are not of interest).
The revision number is obtained from the email text. The bzr revision id is not available in the mails sent by Launchpad. However, it is possible to set the bzr append_revisions_only option for public shared repositories to avoid new pushes of merges changing the meaning of old revision numbers.
from buildbot.plugins import changes
bm = {
'lp:~maria-captains/maria/5.1': '5.1',
'lp:~maria-captains/maria/6.0': '6.0'
}
c['change_source'] = changes.BzrLaunchpadEmailMaildirSource("~/maildir-buildbot",
branchMap=bm)
2.5.3.5. PBChangeSource¶
class buildbot.changes.pb.PBChangeSource¶
PBChangeSource actually listens on a TCP port for clients to connect and push change notices into the buildmaster.
This is used by the built-in buildbot sendchange notification tool, as well as several version-control hook scripts.
This change source is also useful for creating new kinds of change sources that work on a push model instead of some kind of subscription scheme, for example a script which is run out of an email .forward file.
This ChangeSource always runs on the same TCP port as the workers.
It shares the same protocol, and in fact shares the same space of "usernames", so you cannot configure a PBChangeSource with the same name as a worker.
If you have a publicly accessible worker port and are using PBChangeSource, you must establish a secure username and password for the change source.
If your sendchange credentials are known (e.g., the defaults), then your buildmaster is susceptible to injection of arbitrary changes, which (depending on the build factories) could lead to arbitrary code execution on workers.
The PBChangeSource is created with the following arguments.
port
- Which port to listen on. If None (which is the default), it shares the port used for worker connections.
user
- The user account that the client program must use to connect. Defaults to change.
passwd
- The password for the connection; defaults to changepw. Can be a Secret. Do not use this default on a publicly exposed port!
prefix
- The prefix to be found and stripped from filenames delivered over the connection, defaulting to None. Any filenames which do not start with this prefix will be removed. If all the filenames in a given Change are removed, then that whole Change will be dropped. This string should probably end with a directory separator.
  This is useful for changes coming from version control systems that represent branches as parent directories within the repository (like SVN and Perforce). Use a prefix of trunk/ or project/branches/foobranch/ to only follow one branch and to get correct tree-relative filenames. Without a prefix, the PBChangeSource will probably deliver Changes with filenames like trunk/foo.c instead of just foo.c. Of course this also depends upon the tool sending the Changes in (like buildbot sendchange) and what filenames it is delivering: that tool may be filtering and stripping prefixes at the sending end.
For example:
from buildbot.plugins import changes
c['change_source'] = changes.PBChangeSource(port=9999, user='laura', passwd='fpga')
The following hooks are useful for sending changes to a PBChangeSource:
Bzr Hook¶
Bzr is also written in Python, and the Bzr hook depends on Twisted to send the changes.
To install, put master/contrib/bzr_buildbot.py in one of your Bzr plugins locations (e.g., ~/.bazaar/plugins).
Then, in one of your bazaar conf files (e.g., ~/.bazaar/locations.conf), set the location you want to connect with Buildbot with these keys:
buildbot_on
- One of 'commit', 'push', or 'change'. Turns the plugin on, to report changes via commit, changes via push, or any changes to the trunk. 'change' is recommended.
buildbot_server
- (required to send to a Buildbot master) The URL of the Buildbot master to which you will connect (as of this writing, the same server and port to which workers connect).
buildbot_port
- (optional, defaults to 9989) The port of the Buildbot master to which you will connect (as of this writing, the same server and port to which workers connect).
buildbot_pqm
- (optional, defaults to not pqm) Normally, the user that commits the revision is the user that is responsible for the change. When run in a pqm (Patch Queue Manager, see https://launchpad.net/pqm) environment, the user that commits is the Patch Queue Manager, and the user that committed the parent revision is responsible for the change. To turn on the pqm mode, set this value to any of (case-insensitive) "Yes", "Y", "True", or "T".
buildbot_dry_run
- (optional, defaults to not a dry run) Normally, the post-commit hook will attempt to communicate with the configured Buildbot server and port. If this parameter is included and any of (case-insensitive) "Yes", "Y", "True", or "T", then the hook will simply print what it would have sent, but not attempt to contact the Buildbot master.
buildbot_send_branch_name
- (optional, defaults to not sending the branch name) If your Buildbot's bzr source build step uses a repourl, do not turn this on. If your buildbot's bzr build step uses a baseURL, then you may set this value to any of (case-insensitive) "Yes", "Y", "True", or "T" to have the Buildbot master append the branch name to the baseURL.
Note
The bzr smart server (as of version 2.2.2) doesn't know how to resolve bzr:// urls into absolute paths, so any paths in locations.conf won't match, and hence no change notifications will be sent to Buildbot.
Setting configuration parameters globally or in-branch might still work.
When Buildbot no longer has a hardcoded password, it will be a configuration option here as well.
Here's a simple example that you might have in your ~/.bazaar/locations.conf:
[chroot-*:///var/local/myrepo/mybranch]
buildbot_on = change
buildbot_server = localhost
2.5.3.6. P4Source¶
The P4Source periodically polls a Perforce depot for changes.
It accepts the following arguments:
p4port
- The Perforce server to connect to (as host:port).
p4user
- The Perforce user.
p4passwd
- The Perforce password.
p4base
- The base depot path to watch, without the trailing '/...'.
p4bin
- An optional string parameter. Specify the location of the perforce command line binary (p4). You only need to do this if the perforce binary is not in the path of the Buildbot user. Defaults to p4.
split_file
- A function that maps a pathname, without the leading p4base, to a (branch, filename) tuple. The default just returns (None, branchfile), which effectively disables branch support. You should supply a function which understands your repository structure.
pollInterval
- How often to poll, in seconds. Defaults to 600 (10 minutes).
project
- Set the name of the project to be used for the P4Source. This will then be set in any changes generated by the P4Source, and can be used in a Change Filter for triggering particular builders.
pollAtLaunch
- Determines when the first poll occurs. True = immediately on launch, False = wait for one pollInterval (default).
histmax
- The maximum number of changes to inspect at a time. If more than this number occur since the last poll, older changes will be silently ignored.
encoding
- The character encoding of p4's output. This defaults to "utf8", but if your commit messages are in another encoding, specify that here. For example, if you're using Perforce on Windows, you may need to use "cp437" as the encoding if "utf8" generates errors in your master log.
server_tz
- The timezone of the Perforce server, using the usual timezone format (e.g. "Europe/Stockholm"), in case it's not in UTC.
use_tickets
- Set to True to use ticket-based authentication instead of passwords (but you still need to specify p4passwd).
ticket_login_interval
- How often to get a new ticket, in seconds, when use_tickets is enabled. Defaults to 86400 (24 hours).
revlink
- A function that maps branch and revision to a valid url (e.g. p4web), stored along with the change. This function must be a callable which takes two arguments, the branch and the revision. Defaults to lambda branch, revision: (u'').
resolvewho
- A function that resolves the Perforce 'user@workspace' into a more verbose form, stored as the author of the change. Useful when usernames do not match email addresses and external, client-side lookup is required. This function must be a callable which takes one argument. Defaults to lambda who: (who).
Example #1¶
This configuration uses the P4PORT, P4USER, and P4PASSWD specified in the buildmaster's environment.
It watches a project in which the branch name is simply the next path component, and the file is all path components after.
from buildbot.plugins import changes
s = changes.P4Source(p4base='//depot/project/',
split_file=lambda branchfile: branchfile.split('/',1))
c['change_source'] = s
Example #2¶
Similar to the previous example but also resolves the branch and revision into a valid revlink.
from buildbot.plugins import changes
s = changes.P4Source(p4base='//depot/project/',
                     split_file=lambda branchfile: branchfile.split('/', 1),
                     revlink=lambda branch, revision:
                         'http://p4web:8080/@md=d&@/{}?ac=10'.format(revision))
c['change_source'] = s
2.5.3.7. SVNPoller¶
class buildbot.changes.svnpoller.SVNPoller¶
The SVNPoller is a ChangeSource which periodically polls a Subversion repository for new revisions, by running the svn log command in a subshell.
It can watch a single branch or multiple branches.
SVNPoller accepts the following arguments:
repourl
- The base URL path to watch, like svn://svn.twistedmatrix.com/svn/Twisted/trunk, or http://divmod.org/svn/Divmo/, or even file:///home/svn/Repository/ProjectA/branches/1.5/. This must include the access scheme, the location of the repository (both the hostname for remote ones, and any additional directory names necessary to get to the repository), and the sub-path within the repository's virtual filesystem for the project and branch of interest. The SVNPoller will only pay attention to files inside the subdirectory specified by the complete repourl.
split_file
- A function to convert pathnames into (branch, relative_pathname) tuples. Use this to explain your repository's branch-naming policy to SVNPoller. This function must accept a single string (the pathname relative to the repository) and return a two-entry tuple. Directory pathnames always end with a right slash to distinguish them from files, like trunk/src/, or src/. There are a few utility functions in buildbot.changes.svnpoller that can be used as a split_file function; see below for details.
  For directories, the relative pathname returned by split_file should end with a right slash, but an empty string is also accepted for the root, like ("branches/1.5.x", "") being converted from "branches/1.5.x/".
  The default value always returns (None, path), which indicates that all files are on the trunk.
  Subclasses of SVNPoller can override the split_file method instead of using the split_file= argument.
project
- Set the name of the project to be used for the SVNPoller. This will then be set in any changes generated by the SVNPoller, and can be used in a Change Filter for triggering particular builders.
svnuser
- An optional string parameter. If set, the --user option will be added to all svn commands. Use this if you have to authenticate to the svn server before you can do svn info or svn log commands. Can be a Secret.
svnpasswd
- Like svnuser, this will cause a --password option to be passed to all svn commands. Can be a Secret.
pollInterval
- How often to poll, in seconds. Defaults to 600 (checking once every 10 minutes). Lower this if you want the Buildbot to notice changes faster; raise it if you want to reduce the network and CPU load on your svn server. Please be considerate of public SVN repositories by using a large interval when polling them.
pollAtLaunch
- Determines when the first poll occurs. True = immediately on launch, False = wait for one pollInterval (default).
histmax
- The maximum number of changes to inspect at a time. Every pollInterval seconds, the SVNPoller asks for the last histmax changes and looks through them for any revisions it does not already know about. If more than histmax revisions have been committed since the last poll, older changes will be silently ignored. Larger values of histmax will cause more time and memory to be consumed on each poll attempt. histmax defaults to 100.
svnbin
- This controls the svn executable to use. If subversion is installed in a weird place on your system (outside of the buildmaster's PATH), use this to tell SVNPoller where to find it. The default value of svn will almost always be sufficient.
revlinktmpl
- This parameter is deprecated in favour of specifying a global revlink option. This parameter allows a link to be provided for each revision (for example, to websvn or viewvc). These links appear anywhere changes are shown, such as on build or change pages. The proper form for this parameter is a URL with the portion that will substitute for a revision number replaced by '%s'. For example, 'http://myserver/websvn/revision.php?rev=%s' could be used to cause revision links to be created to a websvn repository viewer.
cachepath
- If specified, this is a pathname of a cache file that SVNPoller will use to store its state between restarts of the master.
extra_args
- If specified, the extra arguments will be added to the svn command args.
Several split file functions are available for common SVN repository layouts.
For a poller that is only monitoring trunk, the default split file function is available explicitly as split_file_alwaystrunk:
from buildbot.plugins import changes, util
c['change_source'] = changes.SVNPoller(
repourl="svn://svn.twistedmatrix.com/svn/Twisted/trunk",
split_file=util.svn.split_file_alwaystrunk)
For repositories with the /trunk and /branches/BRANCH layout, split_file_branches will do the job:
from buildbot.plugins import changes, util
c['change_source'] = changes.SVNPoller(
repourl="https://amanda.svn.sourceforge.net/svnroot/amanda/amanda",
split_file=util.svn.split_file_branches)
When using this splitter, the poller will set the project attribute of any changes to the project attribute of the poller.
For repositories with the PROJECT/trunk and PROJECT/branches/BRANCH layout, split_file_projects_branches will do the job:
from buildbot.plugins import changes, util
c['change_source'] = changes.SVNPoller(
repourl="https://amanda.svn.sourceforge.net/svnroot/amanda/",
split_file=util.svn.split_file_projects_branches)
When using this splitter, the poller will set the project attribute of any changes to the project determined by the splitter.
The SVNPoller is highly adaptable to various Subversion layouts.
See Customizing SVNPoller for details and some common scenarios.
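As a sketch of what such a customization can look like, here is a split_file function for a hypothetical trunk/ plus branches/NAME/ layout (essentially what split_file_branches does):

def my_split_file(path):
    pieces = path.split('/')
    if pieces[0] == 'trunk':
        # files on trunk: branch is None
        return (None, '/'.join(pieces[1:]))
    elif pieces[0] == 'branches':
        # 'branches/NAME' becomes the branch, the rest is the filename
        return ('/'.join(pieces[0:2]), '/'.join(pieces[2:]))
    # returning None ignores files outside trunk/ and branches/
    return None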
2.5.3.8. Bzr Poller¶
If you cannot insert a Bzr hook in the server, you can use the BzrPoller.
To use it, put master/contrib/bzr_buildbot.py somewhere that your Buildbot configuration can import it.
Even putting it in the same directory as master.cfg should work.
Install the poller in the Buildbot configuration as with any other change source.
Minimally, provide a URL that you want to poll (bzr://, bzr+ssh://, or lp:), making sure the Buildbot user has the necessary privileges.
# put bzr_buildbot.py file to the same directory as master.cfg
from bzr_buildbot import BzrPoller
c['change_source'] = BzrPoller(
url='bzr://hostname/my_project',
poll_interval=300)
The BzrPoller parameters are:
url
- The URL to poll.
poll_interval
- The number of seconds to wait between polls. Defaults to 10 minutes.
branch_name
- Any value to be used as the branch name. Defaults to None, or specify a string, or specify the constants from bzr_buildbot.py SHORT or FULL to get the short branch name or full branch address.
blame_merge_author
- Normally, the user that commits the revision is the user that is responsible for the change. When run in a pqm (Patch Queue Manager, see https://launchpad.net/pqm) environment, the user that commits is the Patch Queue Manager, and the user that committed the merged parent revision is responsible for the change. Set this value to True if this is pointed against a PQM-managed branch.
2.5.3.9. GitPoller¶
If you cannot take advantage of post-receive hooks, as provided by master/contrib/git_buildbot.py for example, then you can use the GitPoller.
The GitPoller periodically fetches from a remote Git repository and processes any changes.
It requires its own working directory for operation.
The default should be adequate, but it can be overridden via the workdir property.
Note
There can only be a single GitPoller pointed at any given repository.
The GitPoller requires Git-1.7 and later.
It accepts the following arguments:
repourl
- The git-url that describes the remote repository, e.g. git@example.com:foobaz/myrepo.git (see the git fetch help for more info on git-url formats).
branches
- One of the following:
  - a list of the branches to fetch (non-existing branches are ignored)
  - True, indicating that all branches should be fetched
  - a callable which takes a single argument. It should take a remote refspec (such as 'refs/heads/master'), and return a boolean indicating whether that branch should be fetched. An example callable is shown after this section.
branch
- Accepts a single branch name to fetch. Exists for backwards compatibility with old configurations.
pollInterval
- Interval in seconds between polls; default is 10 minutes.
pollAtLaunch
- Determines when the first poll occurs. True = immediately on launch, False = wait for one pollInterval (default).
buildPushesWithNoCommits
- Determines if a push on a new branch, or an update of an already known branch with already known commits, should trigger a build. This is useful if you have build steps depending on the name of the branch and you use topic branches for development. When you merge your topic branch into "master" (for instance), a new build will be triggered. (Defaults to False.)
gitbin
- Path to the Git binary, defaults to just 'git'.
category
- Set the category to be used for the changes produced by the GitPoller. This will then be set in any changes generated by the GitPoller, and can be used in a Change Filter for triggering particular builders.
project
- Set the name of the project to be used for the GitPoller. This will then be set in any changes generated by the GitPoller, and can be used in a Change Filter for triggering particular builders.
usetimestamps
- Parse each revision's commit timestamp (default is True), or ignore it in favor of the current time (so recently processed commits appear together in the waterfall page).
encoding
- Set the encoding that will be used to parse the author's name and commit message. The default encoding is 'utf-8'. This will not be applied to file names, since Git will translate non-ascii file names to unreadable escape sequences.
workdir
- The directory where the poller should keep its local repository. The default is gitpoller_work. If this is a relative path, it will be interpreted relative to the master's basedir. Multiple Git pollers can share the same directory.
only_tags
- Determines if the GitPoller should poll for new tags in the git repository.
sshPrivateKey (optional)
- Specifies the private SSH key for git to use. This may be either a Secret or just a string. This option requires Git-2.3 or later. The master must either have the host in the known hosts file, or the host key must be specified via the sshHostKey option.
sshHostKey (optional)
- Specifies the public host key to match when authenticating with SSH public key authentication. This may be either a Secret or just a string. sshPrivateKey must be specified in order to use this option. The host key must be in the form of <key type> <base64-encoded string>, e.g. ssh-rsa AAAAB3N<...>FAaQ==.
sshKnownHosts (optional)
- Specifies the contents of the SSH known_hosts file to match when authenticating with SSH public key authentication. This may be either a Secret or just a string. sshPrivateKey must be specified in order to use this option. sshHostKey must not be specified in order to use this option.
A configuration for the Git poller might look like this:
from buildbot.plugins import changes
c['change_source'] = changes.GitPoller(repourl='git@example.com:foobaz/myrepo.git',
branches=['master', 'great_new_feature'])
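As mentioned above, the branches argument also accepts a callable. For instance, to fetch every branch except those under a hypothetical wip/ prefix (a sketch):

from buildbot.plugins import changes

def fetch_branch(refspec):
    # refspec looks like 'refs/heads/<branch name>'
    return not refspec.startswith('refs/heads/wip/')

c['change_source'] = changes.GitPoller(repourl='git@example.com:foobaz/myrepo.git',
                                       branches=fetch_branch)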
2.5.3.10. HgPoller¶
The HgPoller periodically pulls a named branch from a remote Mercurial repository and processes any changes.
It requires its own working directory for operation, which must be specified via the workdir property.
The HgPoller requires a working hg executable, and at least read-only access to the repository it polls (possibly through ssh keys or by tweaking the hgrc of the system user Buildbot runs as).
The HgPoller will not transmit any change if there are several heads on the watched named branch.
This is similar (although not identical) to the behaviour of the Mercurial executable.
This exceptional condition is usually the result of a developer mistake and typically does not last long.
It is reported in the logs.
If it is fixed by a later merge, the buildmaster administrator does not have anything to do: that merge will be transmitted, together with the intermediate ones.
The HgPoller accepts the following arguments:
name
- The name of the poller. This must be unique, and defaults to the repourl.
repourl
- The url that describes the remote repository, e.g. http://hg.example.com/projects/myrepo. Any url suitable for hg pull can be specified.
bookmarks
- A list of the bookmarks to monitor.
branches
- A list of the branches to monitor; defaults to ['default'].
branch
- The desired branch to pull. Exists for backwards compatibility with old configurations.
workdir
- The directory where the poller should keep its local repository. It is mandatory for now, although later releases may provide a meaningful default.
  It also serves to identify the poller in the buildmaster internal database. Changing it may result in re-processing all changes so far.
  Several HgPoller instances may share the same workdir to share the common history between two different branches, easing the load on local and remote system resources and bandwidth.
  If relative, the workdir will be interpreted from the master directory.
pollInterval
- Interval in seconds between polls; default is 10 minutes.
pollAtLaunch
- Determines when the first poll occurs. True = immediately on launch, False = wait for one pollInterval (default).
hgbin
- Path to the Mercurial binary, defaults to just 'hg'.
category
- Set the category to be used for the changes produced by the HgPoller. This will then be set in any changes generated by the HgPoller, and can be used in a Change Filter for triggering particular builders.
project
- Set the name of the project to be used for the HgPoller. This will then be set in any changes generated by the HgPoller, and can be used in a Change Filter for triggering particular builders.
usetimestamps
- Parse each revision's commit timestamp (default is True), or ignore it in favor of the current time (so recently processed commits appear together in the waterfall page).
encoding
- Set the encoding that will be used to parse the author's name and commit message. The default encoding is 'utf-8'.
A configuration for the Mercurial poller might look like this:
from buildbot.plugins import changes
c['change_source'] = changes.HgPoller(repourl='http://hg.example.org/projects/myrepo',
branch='great_new_feature',
workdir='hg-myrepo')
2.5.3.11. GitHubPullrequestPoller¶
class buildbot.changes.github.GitHubPullrequestPoller¶
The GitHubPullrequestPoller periodically polls the GitHub API for new or updated pull requests. The author, revision, revlink, branch and files fields in the recorded changes are populated with information extracted from the pull request. This makes it possible to filter for changes to certain files and to create a blamelist based on the authors in the GitHub pull request.
The GitHubPullrequestPoller accepts the following arguments:
owner
- The owner of the GitHub repository. This argument is required.
repo
- The name of the GitHub repository. This argument is required.
branches
- List of branches to accept as base branch (e.g. master). Defaults to None and accepts all branches as base.
pollInterval
- Poll interval between polls in seconds. Default is 10 minutes.
pollAtLaunch
- Whether to poll on startup of the buildbot master. Default is False; the first poll will occur pollInterval seconds after the master starts.
category
- Set the category to be used for the changes produced by the GitHubPullrequestPoller. This will then be set in any changes generated by the GitHubPullrequestPoller, and can be used in a Change Filter for triggering particular builders.
baseURL
- GitHub API endpoint. Default is https://api.github.com.
pullrequest_filter
- A callable which takes a dict which contains the decoded JSON object of the GitHub pull request as argument. All fields specified by the GitHub API are accessible. If the callable returns False, the pull request is ignored. The default is True, which does not filter any pull requests. An example filter is shown after this list.
token
- A GitHub API token to execute all requests to the API authenticated. It is strongly recommended to use an API token, since it increases GitHub API rate limits significantly.
repository_type
- Set which type of repository link will be in the repository property. Possible values: https, svn, git or ssh. This link can then be used in a Source Step to checkout the source.
magic_link
- Set to True if the changes should contain refs/pulls/<PR #>/merge in the branch property and a link to the base repository in the repository property. These properties can be used by the GitHub source to pull from the special branch in the base repository. Default is False.
github_property_whitelist
- A list of fnmatch expressions which match against the flattened pull request information JSON prefixed with github. For example github.number represents the pull request number. Available entries can be looked up in the GitHub API Documentation or by examining the data returned for a pull request by the API.
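As referenced in the parameter list above, a pullrequest_filter might, for instance, skip draft pull requests (a sketch; 'draft' is a field of the GitHub pull request JSON, and the owner/repo names are hypothetical):

from buildbot.plugins import changes

def not_draft(pr):
    # pr is the decoded JSON object of the GitHub pull request
    return not pr.get('draft', False)

c['change_source'] = changes.GitHubPullrequestPoller(owner='myorg',
                                                     repo='myrepo',
                                                     pullrequest_filter=not_draft)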
2.5.3.12. BitbucketPullrequestPoller¶
class buildbot.changes.bitbucket.BitbucketPullrequestPoller¶
The BitbucketPullrequestPoller periodically polls Bitbucket for new or updated pull requests.
It uses Bitbucket's Pull Request REST API to gather the information needed.
The BitbucketPullrequestPoller accepts the following arguments:
owner
- The owner of the Bitbucket repository. All Bitbucket URLs are of the form https://bitbucket.org/owner/slug/.
slug
- The name of the Bitbucket repository.
branch
- A single branch or a list of branches which should be processed. If it is None (the default), all pull requests are used.
pollInterval
- Interval in seconds between polls; default is 10 minutes.
pollAtLaunch
- Determines when the first poll occurs. True = immediately on launch, False = wait for one pollInterval (default).
category
- Set the category to be used for the changes produced by the BitbucketPullrequestPoller. This will then be set in any changes generated by the BitbucketPullrequestPoller, and can be used in a Change Filter for triggering particular builders.
project
- Set the name of the project to be used for the BitbucketPullrequestPoller. This will then be set in any changes generated by the BitbucketPullrequestPoller, and can be used in a Change Filter for triggering particular builders.
pullrequest_filter
- A callable which takes one parameter, the decoded Python object of the pull request JSON. If it returns False, the pull request is ignored. It can be used to define custom filters based on the content of the pull request. See the Bitbucket documentation for more information about the format of the response. By default the filter always returns True.
usetimestamps
- Parse each revision's commit timestamp (default is True), or ignore it in favor of the current time (so recently processed commits appear together in the waterfall page).
encoding
- Set the encoding that will be used to parse the author's name and commit message. The default encoding is 'utf-8'.
A minimal configuration for the Bitbucket pull request poller might look like this:
from buildbot.plugins import changes
c['change_source'] = changes.BitbucketPullrequestPoller(
owner='myname',
slug='myrepo',
)
Here is a more complex configuration using a pullrequest_filter.
The pull request is only processed if at least 3 people have already approved it:

from buildbot.plugins import changes

def approve_filter(pr, threshold):
    approves = 0
    for participant in pr['participants']:
        if participant['approved']:
            approves = approves + 1
    if approves < threshold:
        return False
    return True

c['change_source'] = changes.BitbucketPullrequestPoller(
    owner='myname',
    slug='myrepo',
    branch='mybranch',
    project='myproject',
    pullrequest_filter=lambda pr: approve_filter(pr, 3),
    pollInterval=600,
)
Warning
Anyone who can create pull requests for the Bitbucket repository can initiate a change, potentially causing the buildmaster to run arbitrary code.
2.5.3.13. GerritChangeSource¶
class buildbot.changes.gerritchangesource.GerritChangeSource¶
The GerritChangeSource class connects to a Gerrit server by its SSH interface and uses its event source mechanism, gerrit stream-events.
Note that the Gerrit event stream is stateless, and any events that occur while buildbot is not connected to Gerrit will be lost. See GerritEventLogPoller for a stateful change source.
The GerritChangeSource accepts the following arguments:
gerritserver
- The DNS name or IP address of the host running the Gerrit ssh server.
gerritport
- The port of the Gerrit ssh server.
username
- The username to use to connect to Gerrit.
identity_file
- The ssh identity file to use for authentication (optional). Pay attention to the ssh passphrase.
handled_events
- Events to be handled (optional). By default processes patchset-created and ref-updated.
get_files
- Populate the files attribute of emitted changes (default False). Buildbot will run an extra query command for each handled event to determine the changed files.
debug
- Print Gerrit events in the log (default False). This helps debug event content, but will eventually fill your logs with useless Gerrit event logs.
By default this class adds a change to the Buildbot system for each of the following events:
patchset-created
- A change is proposed for review. Automatic checks like checkpatch.pl can be automatically triggered. Beware of what kind of automatic task you trigger: at this point, no trusted human has reviewed the code, and a patch could be specially crafted by an attacker to compromise your workers.
ref-updated
- A change has been merged into the repository. Typically, this kind of event can lead to a complete rebuild of the project, and upload of binaries to an incremental build results server.
But you can specify how to handle events:
- Any event with change and patchSet will be processed by the universal collector by default.
- In case you've specified a processing function for a given kind of event, all events of this kind will be processed only by that function, bypassing the universal collector.
An example:
from buildbot.plugins import changes
class MyGerritChangeSource(changes.GerritChangeSource):
    """Custom GerritChangeSource"""

    def eventReceived_patchset_created(self, properties, event):
        """Handle patchset-created events, ignoring their properties"""
        properties = {}
        self.addChangeFromEvent(properties, event)
This class will populate the property list of the triggered build with the info received from the Gerrit server in JSON format.
Warning
If you selected GerritChangeSource, you must use the Gerrit source step: the branch property of the change will be target_branch/change_id, and such a ref cannot be resolved, so the Git source step would fail.
In case of a patchset-created event, these properties will be:
event.change.branch
- Branch of the Change
event.change.id
- Change’s ID in the Gerrit system (the ChangeId: in commit comments)
event.change.number
- Change’s number in Gerrit system
event.change.owner.email
- Change’s owner email (owner is first uploader)
event.change.owner.name
- Change’s owner name
event.change.project
- Project of the Change
event.change.subject
- Change’s subject
event.change.url
- URL of the Change in the Gerrit’s web interface
event.patchSet.number
- Patchset’s version number
event.patchSet.ref
- Patchset’s Gerrit “virtual branch”
event.patchSet.revision
- Patchset’s Git commit ID
event.patchSet.uploader.email
- Patchset uploader’s email (owner is first uploader)
event.patchSet.uploader.name
- Patchset uploader’s name (owner is first uploader)
event.type
- Event type (patchset-created).
event.uploader.email
- Patchset uploader’s email
event.uploader.name
- Patchset uploader’s name
In case of a ref-updated event, these properties will be:
event.refUpdate.newRev
- New Git commit ID (after merger)
event.refUpdate.oldRev
- Previous Git commit ID (before merger)
event.refUpdate.project
- Project that was updated
event.refUpdate.refName
- Branch that was updated
event.submitter.email
- Submitter’s email (merger responsible)
event.submitter.name
- Submitter’s name (merger responsible)
event.type
- Event type (ref-updated).
A configuration for this source might look like:
from buildbot.plugins import changes
c['change_source'] = changes.GerritChangeSource(
"gerrit.example.com",
"gerrit_user",
handled_events=["patchset-created", "change-merged"])
See master/docs/examples/git_gerrit.cfg or master/docs/examples/repo_gerrit.cfg in the Buildbot distribution for a full example setup of Git+Gerrit or Repo+Gerrit with GerritChangeSource.
2.5.3.14. GerritEventLogPoller¶
class buildbot.changes.gerritchangesource.GerritEventLogPoller¶
The GerritEventLogPoller class is similar to GerritChangeSource, but connects to the Gerrit server by its HTTP interface and uses the events-log plugin.
The decision of whether to use GerritEventLogPoller or GerritChangeSource will depend on your needs. The trade-off is:
- GerritChangeSource is low-overhead and reacts instantaneously to events, but a broken connection to Gerrit will lead to missed changes.
- GerritEventLogPoller is subject to polling overhead and reacts only at its polling rate, but is robust to a broken connection to Gerrit: missed changes will be discovered when the connection is restored.
You probably do not want to use both at the same time, as they do not coordinate, and changes would be duplicated.
Note
The GerritEventLogPoller requires either the txrequests or the treq package.
The GerritEventLogPoller accepts the following arguments:
baseURL
- The HTTP url where to find Gerrit. If the URL of the events-log endpoint for your server is https://example.com/a/plugins/events-log/events/, then the baseURL is https://example.com/a. Note that /a is included.
auth
- A requests authentication configuration. If Gerrit is configured with BasicAuth, then it shall be ('login', 'password'); if Gerrit is configured with DigestAuth, then it shall be requests.auth.HTTPDigestAuth('login', 'password') from the requests module. Note, however, that usage of requests.auth.HTTPDigestAuth is incompatible with treq.
handled_events
- Events to be handled (optional). By default processes patchset-created and ref-updated.
pollInterval
- Interval in seconds between polls; default is 30 seconds.
pollAtLaunch
- Determines when the first poll occurs. True = immediately on launch (default), False = wait for one pollInterval.
gitBaseURL
- The git URL where Gerrit is accessible via the git+ssh protocol.
get_files
- Populate the files attribute of emitted changes (default False). Buildbot will run an extra query command for each handled event to determine the changed files.
debug
- Print Gerrit events in the log (default False). This helps debug event content, but will eventually fill your logs with useless Gerrit event logs.
The same customization as for GerritChangeSource can be done for handling special events.
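A minimal configuration sketch, using the arguments described above (the server URL and BasicAuth credentials are hypothetical):

from buildbot.plugins import changes

c['change_source'] = changes.GerritEventLogPoller(
    baseURL='https://gerrit.example.com/a',  # note the /a, as described above
    auth=('gerrit_user', 'secret'),          # hypothetical BasicAuth credentials
    pollInterval=30)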
2.5.3.15. GerritChangeFilter¶
class buildbot.changes.gerritchangesource.GerritChangeFilter¶
GerritChangeFilter is a ready-to-use ChangeFilter you can pass to AnyBranchScheduler in order to filter changes, to create pre-commit builders or post-commit schedulers.
It has the same API as ChangeFilter, plus an additional eventtype filter (which, like the others, can be specified as a value, list, regular expression, or callable).
An example follows:
from buildbot.plugins import schedulers, util
# this scheduler will create builds when a patch is uploaded to gerrit
# but only if it is uploaded to the "main" branch
schedulers.AnyBranchScheduler(name="main-precommit",
change_filter=util.GerritChangeFilter(branch="main",
eventtype="patchset-created"),
treeStableTimer=15*60,
builderNames=["main-precommit"])
# this scheduler will create builds when a patch is merged in the "main" branch
# for post-commit tests
schedulers.AnyBranchScheduler(name="main-postcommit",
change_filter=util.GerritChangeFilter("main", "ref-updated"),
treeStableTimer=15*60,
builderNames=["main-postcommit"])
2.5.3.16. Change Hooks (HTTP Notifications)¶
Buildbot already provides a web frontend, and that frontend can easily be used to receive HTTP push notifications of commits from services like GitHub. See Change Hooks for more information.
2.5.4. Changes¶
class buildbot.changes.changes.Change¶
A Change is an abstract way Buildbot uses to represent a single change to the source files, performed by a developer.
In version control systems that support the notion of atomic check-ins, a change represents a changeset or commit.
Instances of Change have the following attributes.
2.5.4.1. Who¶
Each Change has a who attribute, which specifies which developer is responsible for the change.
This is a string which comes from a namespace controlled by the VC repository.
Frequently this means it is a username on the host which runs the repository, but not all VC systems require this.
Each StatusNotifier will map the who attribute into something appropriate for its particular means of communication: an email address, an IRC handle, etc.
This who attribute is also parsed and stored in Buildbot's database (see User Objects).
Currently, only who attributes in Changes from git repositories are translated into user objects, but in the future all incoming Changes will have their who parsed and stored.
2.5.4.2. Files¶
It also has a list of files, which are just the tree-relative filenames of any files that were added, deleted, or modified for this Change.
These filenames are used by the fileIsImportant function (in the scheduler) to decide whether it is worth triggering a new build or not. For example, the following function only runs a build if a C file was checked in:
def has_C_files(change):
for name in change.files:
if name.endswith(".c"):
return True
return False
Certain BuildSteps can also use the list of changed files to run a more targeted series of tests, e.g. the python_twisted.Trial step can run just the unit tests that provide coverage for the modified .py files instead of running the full test suite.
2.5.4.3. Comments¶
The Change also has a comments attribute, which is a string containing any checkin comments.
2.5.4.4. Project¶
The project attribute of a change or source stamp describes the project to which it corresponds, as a short human-readable string.
This is useful in cases where multiple independent projects are built on the same buildmaster.
In such cases, it can be used to control which builds are scheduled for a given commit, and to limit status displays to only one project.
2.5.4.5. Repository¶
This attribute specifies the repository in which this change occurred. In the case of DVCS’s, this information may be required to check out the committed source code. However, using the repository from a change has security risks: if Buildbot is configured to blindly trust this information, then it may easily be tricked into building arbitrary source code, potentially compromising the workers and the integrity of subsequent builds.
2.5.4.6. Codebase¶
This attribute specifies the codebase to which this change was made.
As described in source stamps section, multiple repositories may contain the same codebase.
A change's codebase is usually determined by the codebaseGenerator configuration.
By default the codebase is ''; this value is used automatically for single-codebase configurations.
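As a sketch (the repository URLs here are hypothetical), a codebaseGenerator maps a change's repository to a codebase name:
all_repositories = {
    'https://example.com/repos/mainline': 'mainline',
    'https://example.com/repos/plugin': 'plugin',
}

def codebase_generator(chdict):
    # chdict is the change dictionary; derive the codebase from its repository
    return all_repositories[chdict['repository']]

c['codebaseGenerator'] = codebase_generator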
2.5.4.7. Revision¶
Each Change can have a revision attribute, which describes how to get a tree with a specific state: a tree which includes this Change (and all that came before it) but none that come after it.
If this information is unavailable, the revision attribute will be None.
These revisions are provided by the ChangeSource.
Revisions are always strings.
- CVS: revision is the seconds since the epoch, as an integer.
- SVN: revision is the revision number.
- Darcs: revision is a large string, the output of darcs changes --context.
- Mercurial: revision is a short string (a hash ID), the output of hg identify.
- P4: revision is the transaction number.
- Git: revision is a short string (a SHA1 hash), the output of e.g. git rev-parse.
2.5.4.8. Branches¶
The Change might also have a branch attribute.
This indicates that all of the Change’s files are in the same named branch.
The schedulers get to decide whether the branch should be built or not.
For VC systems like CVS, Git, Mercurial and Monotone, the branch name is unrelated to the filename.
(That is, the branch name and the filename inhabit unrelated namespaces.)
For SVN, branches are expressed as subdirectories of the repository, so the file's repourl is a combination of some base URL, the branch name, and the filename within the branch.
(In a sense, the branch name and the filename inhabit the same namespace.)
Darcs branches are subdirectories of a base URL just like SVN.
- CVS: branch='warner-newfeature', files=['src/foo.c']
- SVN: branch='branches/warner-newfeature', files=['src/foo.c']
- Darcs: branch='warner-newfeature', files=['src/foo.c']
- Mercurial: branch='warner-newfeature', files=['src/foo.c']
- Git: branch='warner-newfeature', files=['src/foo.c']
- Monotone: branch='warner-newfeature', files=['src/foo.c']
2.5.4.9. Change Properties¶
A Change may have one or more properties attached to it, usually specified through the Force Build form or sendchange.
Properties are discussed in detail in the Build Properties section.
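For instance, a property can be attached from the command line with sendchange; this is a hedged sketch, with the master address, credentials and property purely illustrative:
buildbot sendchange --master localhost:9999 \
    --auth change:changepw \
    --who alice \
    --branch master \
    --property color:blue \
    src/main.c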
2.5.5. Schedulers¶
Schedulers are responsible for initiating builds on builders.
Some schedulers listen for changes from ChangeSources and generate build sets in response to these changes. Others generate build sets without changes, based on other events in the buildmaster.
2.5.5.1. Configuring Schedulers¶
The schedulers configuration parameter gives a list of scheduler instances, each of which causes builds to be started on a particular set of Builders.
The two basic scheduler classes you are likely to start with are SingleBranchScheduler and Periodic, but you can write a customized subclass to implement more complicated build scheduling.
Scheduler arguments should always be specified by name (as keyword arguments), to allow for future expansion:
sched = SingleBranchScheduler(name="quick", builderNames=['lin', 'win'])
There are several common arguments for schedulers, although not all are available with all schedulers.
name
- Each Scheduler must have a unique name.
This is used in status displays, and is also available in the build property scheduler.
builderNames
This is the set of builders which this scheduler should trigger, specified as a list of names (strings). This can also be an IRenderable object which will render to a list of builder names (or a list of IRenderable that will render to builder names).
Note
When builderNames is rendered, these additional Properties attributes are available:
master
- A reference to the BuildMaster object that owns this scheduler. This can be used to access the data API.
sourcestamps
- The list of sourcestamps that triggered the scheduler.
changes
- The list of changes associated with the sourcestamps.
files
- The list of modified files associated with the changes.
Any property attached to the change(s) that triggered the scheduler will be combined and available when rendering builderNames.
Here is a simple example:
from buildbot.plugins import util, schedulers

@util.renderer
def builderNames(props):
    builders = set()
    for f in props.files:
        if f.endswith('.rst'):
            builders.add('check_docs')
        if f.endswith('.c'):
            builders.add('check_code')
    return list(builders)

c['schedulers'] = [
    schedulers.AnyBranchScheduler(
        name='all',
        builderNames=builderNames,
    )
]
And a more complex one:
import fnmatch

from twisted.internet import defer

from buildbot.plugins import util, schedulers

@util.renderer
@defer.inlineCallbacks
def builderNames(props):
    # If "buildername_pattern" is defined with "buildbot sendchange",
    # check if the builder name matches it.
    pattern = props.getProperty('buildername_pattern')

    # If "builder_tags" is defined with "buildbot sendchange",
    # only schedule builders that have the specified tags.
    tags = props.getProperty('builder_tags')

    builders = []
    for b in (yield props.master.data.get(('builders',))):
        if pattern and not fnmatch.fnmatchcase(b['name'], pattern):
            continue
        if tags and not set(tags.split()).issubset(set(b['tags'])):
            continue
        builders.append(b['name'])

    return builders

c['schedulers'] = [
    schedulers.AnyBranchScheduler(
        name='matrix',
        builderNames=builderNames,
    )
]
properties (optional)
This is a dictionary specifying properties that will be transmitted to all builds started by this scheduler. The owner property may be of particular interest, as its contents (as a list) will be added to the list of “interested users” (Doing Things With Users) for each triggered build. For example:
sched = Scheduler(...,
                  properties={
                      'owner': ['zorro@example.com', 'silver@example.com']
                  })
codebases (optional)
Specifies codebase definitions that are used when the scheduler processes data from more than one repository at the same time.
The codebases parameter is only used to fill in missing details about a codebase when scheduling a build. For example, when a change to codebase A occurs, a scheduler must invent a sourcestamp for codebase B. Source steps that specify codebase B as their codebase will use the invented sourcestamp.
The parameter does not act as a filter on incoming changes – use a change filter for that purpose.
This parameter can be specified in two forms:
- as a list of strings. This is the simplest form; use it if no special overrides are needed. In this form, just the names of the codebases are listed.
- as a dictionary of dictionaries. In this form, the per-codebase overrides of repository, branch and revision can be specified.
Each codebase definition dictionary is a dictionary with any of the keys: repository, branch, revision. The codebase definitions are combined in a dictionary keyed by the name of the codebase.
codebases = {'codebase1': {'repository': '....',
                           'branch': 'default',
                           'revision': None},
             'codebase2': {'repository': '....'}}
fileIsImportant (optional)
- A callable which takes one argument, a Change instance, and returns True if the change is worth building, and False if it is not. Unimportant Changes are accumulated until the build is triggered by an important change. The default value of None means that all Changes are important.
change_filter (optional)
- The change filter that will determine which changes are recognized by this scheduler; see Change Filters.
Note that this is different from fileIsImportant: if the change filter filters out a Change, then it is completely ignored by the scheduler. If a Change is allowed by the change filter, but is deemed unimportant, then it will not cause builds to start, but will be remembered and shown in status displays. The default value of None does not filter any changes at all.
onlyImportant (optional)
- A boolean that, when True, only adds important changes to the buildset as specified in the fileIsImportant callable. This means that unimportant changes are ignored the same way a change_filter filters changes. This defaults to False and only applies when fileIsImportant is given.
reason (optional)
- A string that will be used as the reason for the triggered build. By default it lists the type and name of the scheduler triggering the build.
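To illustrate how these common arguments combine, here is a minimal sketch; the scheduler name, builder name and file-extension rule are hypothetical:
from buildbot.plugins import schedulers, util

def important_files(change):
    # treat a change as unimportant if it only touches documentation
    return any(not f.endswith('.rst') for f in change.files)

sched = schedulers.SingleBranchScheduler(
    name="code-only",
    builderNames=["tests"],
    change_filter=util.ChangeFilter(branch='master'),
    fileIsImportant=important_files,
    onlyImportant=True,
    treeStableTimer=60)
c['schedulers'] = [sched]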
2.5.5.2. Scheduler Resiliency¶
In a multi-master configuration, schedulers with the same name can be configured on multiple masters. Only one instance of the scheduler will be active. If that instance becomes inactive, due to its master being shut down or failing, then another instance will become active after a short delay. This provides resiliency in scheduler configurations, so that schedulers are not a single point of failure in a Buildbot infrastructure.
The Data API and web UI display the master on which each scheduler is running.
There is currently no mechanism to control which master’s scheduler instance becomes active. The behavior is nondeterministic, based on the timing of polling by inactive schedulers. The failover is non-revertive.
2.5.5.3. Change Filters¶
Several schedulers perform filtering on an incoming set of changes.
The filter can most generically be specified as a ChangeFilter.
Set up a ChangeFilter like this:
from buildbot.plugins import util
my_filter = util.ChangeFilter(project_re="^baseproduct/.*", branch="devel")
and then add it to a scheduler with the change_filter parameter:
sch = SomeSchedulerClass(...,
change_filter=my_filter)
There are five attributes of changes on which you can filter:
project
- the project string, as defined by the ChangeSource.
repository
- the repository in which this change occurred.
branch
- the branch on which this change occurred. Note that 'trunk' or 'master' is often denoted by None.
category
- the category, again as defined by the ChangeSource.
codebase
- the change's codebase.
For each attribute, the filter can look for a single, specific value:
my_filter = util.ChangeFilter(project='myproject')
or accept any of a set of values:
my_filter = util.ChangeFilter(project=['myproject', 'jimsproject'])
or apply a regular expression, using the attribute name with a “_re” suffix:
my_filter = util.ChangeFilter(category_re='.*deve.*')
# or, to use regular expression flags:
import re
my_filter = util.ChangeFilter(category_re=re.compile('.*deve.*', re.I))
buildbot.www.hooks.github.GitHubEventHandler has a special github_distinct property that can be used to filter whether or not non-distinct changes should be considered. For example, if a commit is pushed to a branch that is not being watched and then later pushed to a watched branch, by default this will be recorded as two separate Changes. In order to record a change only the first time the commit appears, you can install a custom ChangeFilter like this:
ChangeFilter(filter_fn=lambda c: c.properties.getProperty('github_distinct'))
For anything more complicated, define a Python function to recognize the strings you want:
def my_branch_fn(branch):
return branch in branches_to_build and branch not in branches_to_ignore
my_filter = util.ChangeFilter(branch_fn=my_branch_fn)
The special argument filter_fn can be used to specify a function that is given the entire Change object, and returns a boolean.
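For example, a minimal sketch (the src/ layout is hypothetical) that accepts only changes touching files under src/:
from buildbot.plugins import util

def touches_src(change):
    # accept the change only if at least one modified file lives under src/
    return any(f.startswith('src/') for f in change.files)

my_filter = util.ChangeFilter(filter_fn=touches_src)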
The entire set of allowed arguments, then, is:
project | project_re | project_fn
repository | repository_re | repository_fn
branch | branch_re | branch_fn
category | category_re | category_fn
codebase | codebase_re | codebase_fn
filter_fn
A Change passes the filter only if all arguments are satisfied. If no filter object is given to a scheduler, then all changes will be built (subject to any other restrictions the scheduler enforces).
2.5.5.4. Scheduler Types¶
The remaining subsections represent a catalog of the available Scheduler types.
All these Schedulers are defined in modules under buildbot.schedulers, and the docstrings there are the best source of documentation on the arguments taken by each one.
SingleBranchScheduler¶
This is the original and still most popular scheduler class.
It follows exactly one branch, and starts a configurable tree-stable-timer after each change on that branch.
When the timer expires, it starts a build on some set of Builders.
This scheduler accepts a fileIsImportant function which can be used to ignore some Changes if they do not affect any important files.
If treeStableTimer is not set, then this scheduler starts a build for every Change that matches its change_filter and satisfies fileIsImportant.
If treeStableTimer is set, then a build is triggered for each set of Changes which arrive within the configured time, and match the filters.
Note
The behavior of this scheduler is undefined if treeStableTimer is set and changes from multiple branches, repositories or codebases are accepted by the filter.
Note
The codebases argument will filter out codebases not specified there, but won't filter based on the branches specified there.
The arguments to this scheduler are:
name
- See name scheduler argument.
builderNames
- See builderNames scheduler argument.
properties
(optional)- See properties scheduler argument.
codebases
(optional)- See codebases scheduler argument.
fileIsImportant
(optional)- See fileIsImportant scheduler argument.
change_filter
(optional)- See change_filter scheduler argument.
onlyImportant
(optional)- See onlyImportant scheduler argument.
reason
(optional)- See reason scheduler argument.
treeStableTimer
The scheduler will wait for this many seconds before starting the build. If new changes are made during this interval, the timer will be restarted, so really the build will be started after a change and then after this many seconds of inactivity.
If treeStableTimer is None, then a separate build is started immediately for each Change.
categories (deprecated; use change_filter)
- A list of categories of changes that this scheduler will respond to. If this is specified, then any non-matching changes are ignored.
branch (deprecated; use change_filter)
The scheduler will pay attention to this branch, ignoring Changes that occur on other branches. Setting branch equal to the special value of None means it should only pay attention to the default branch.
Note
None is a keyword, not a string, so write None and not "None".
Example:
from buildbot.plugins import schedulers, util
quick = schedulers.SingleBranchScheduler(
name="quick",
change_filter=util.ChangeFilter(branch='master'),
treeStableTimer=60,
builderNames=["quick-linux", "quick-netbsd"])
full = schedulers.SingleBranchScheduler(
name="full",
change_filter=util.ChangeFilter(branch='master'),
treeStableTimer=5*60,
builderNames=["full-linux", "full-netbsd", "full-OSX"])
c['schedulers'] = [quick, full]
In this example, the two quick builders are triggered 60 seconds after the tree has been changed. The full builds do not run quite so quickly (they wait 5 minutes), so hopefully if the quick builds fail due to a missing file or really simple typo, the developer can discover and fix the problem before the full builds are started. Both schedulers only pay attention to the default branch: any changes on other branches are ignored. Each scheduler triggers a different set of Builders, referenced by name.
Note
The old names for this scheduler, buildbot.scheduler.Scheduler and buildbot.schedulers.basic.Scheduler, are deprecated in favor of using buildbot.plugins:
from buildbot.plugins import schedulers
However, if you must use a fully qualified name, it is buildbot.schedulers.basic.SingleBranchScheduler.
AnyBranchScheduler¶
This scheduler uses a tree-stable-timer like the default one, but uses a separate timer for each branch.
If treeStableTimer is not set, then this scheduler is indistinguishable from SingleBranchScheduler.
If treeStableTimer is set, then a build is triggered for each set of Changes which arrive within the configured time, and match the filters.
The arguments to this scheduler are:
name
- See name scheduler argument.
builderNames
- See builderNames scheduler argument.
properties
(optional)- See properties scheduler argument.
codebases
(optional)- See codebases scheduler argument.
fileIsImportant
(optional)- See fileIsImportant scheduler argument.
change_filter
(optional)- See change_filter scheduler argument.
onlyImportant
(optional)- See onlyImportant scheduler argument.
reason
(optional)- See reason scheduler argument.
treeStableTimer
- The scheduler will wait for this many seconds before starting the build. If new changes are made on the same branch during this interval, the timer will be restarted.
branches
(deprecated; use change_filter)- Changes on branches not specified on this list will be ignored.
categories
(deprecated; use change_filter)- A list of categories of changes that this scheduler will respond to. If this is specified, then any non-matching changes are ignored.
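For illustration, a minimal sketch of an AnyBranchScheduler (the scheduler name, branch pattern and builder name are hypothetical):
from buildbot.plugins import schedulers, util

any_branch = schedulers.AnyBranchScheduler(
    name="all-feature-branches",
    change_filter=util.ChangeFilter(branch_re='feature/.*'),
    treeStableTimer=2*60,  # a separate timer runs for each branch
    builderNames=["quick-linux"])
c['schedulers'] = [any_branch]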
Dependent Scheduler¶
It is common to wind up with one kind of build which should only be performed if the same source code was successfully handled by some other kind of build first. An example might be a packaging step: you might only want to produce .deb or RPM packages from a tree that was known to compile successfully and pass all unit tests. You could put the packaging step in the same Build as the compile and testing steps, but there might be other reasons to not do this (in particular you might have several Builders worth of compiles/tests, but only wish to do the packaging once). Another example is if you want to skip the full builds after a failing quick build of the same source code. Or, if one Build creates a product (like a compiled library) that is used by some other Builder, you’d want to make sure the consuming Build is run after the producing one.
You can use dependencies to express this relationship to the Buildbot.
There is a special kind of scheduler named Dependent that will watch an upstream scheduler for builds to complete successfully (on all of its Builders).
Each time that happens, the same source code (i.e. the same SourceStamp) will be used to start a new set of builds, on a different set of Builders.
This downstream scheduler doesn't pay attention to Changes at all.
It only pays attention to the upstream scheduler.
If the build fails on any of the Builders in the upstream set, the downstream builds will not fire.
Note that, for SourceStamps generated by a Dependent scheduler, the revision is None, meaning HEAD.
If any changes are committed between the time the upstream scheduler begins its build and the time the dependent scheduler begins its build, then those changes will be included in the downstream build.
See the Triggerable scheduler for a more flexible dependency mechanism that can avoid this problem.
The keyword arguments to this scheduler are:
name
- See name scheduler argument.
builderNames
- See builderNames scheduler argument.
properties
(optional)- See properties scheduler argument.
codebases
(optional)- See codebases scheduler argument.
upstream
- The upstream scheduler to watch. Note that this is an instance, not the name of the scheduler.
Example:
from buildbot.plugins import schedulers
tests = schedulers.SingleBranchScheduler(name="just-tests",
treeStableTimer=5*60,
builderNames=["full-linux",
"full-netbsd",
"full-OSX"])
package = schedulers.Dependent(name="build-package",
upstream=tests, # <- no quotes!
builderNames=["make-tarball", "make-deb",
"make-rpm"])
c['schedulers'] = [tests, package]
Periodic Scheduler¶
This simple scheduler just triggers a build every N seconds.
The arguments to this scheduler are:
name
- See name scheduler argument.
builderNames
- See builderNames scheduler argument.
properties
(optional)- See properties scheduler argument.
codebases
(optional)- See codebases scheduler argument.
fileIsImportant
(optional)- See fileIsImportant scheduler argument.
change_filter
(optional)- See change_filter scheduler argument.
onlyImportant
(optional)- See onlyImportant scheduler argument.
reason
(optional)- See reason scheduler argument.
createAbsoluteSourceStamps (optional)
- This option only has effect when using multiple codebases.
When True, it uses the last seen revision for each codebase that does not have a change. When False (the default), codebases without changes will use the revision from the codebases argument.
onlyIfChanged (optional)
- If this is True, then builds will not be scheduled at the designated time unless the specified branch has seen an important change since the previous build. By default this setting is False.
periodicBuildTimer
- The time, in seconds, after which to start a build.
Example:
from buildbot.plugins import schedulers
nightly = schedulers.Periodic(name="daily",
builderNames=["full-solaris"],
periodicBuildTimer=24*60*60)
c['schedulers'] = [nightly]
The scheduler in this example just runs the full solaris build once per day. Note that this scheduler only lets you control the time between builds, not the absolute time-of-day of each Build, so this could easily wind up being an every-evening or every-afternoon scheduler, depending upon when it was first activated.
Nightly Scheduler¶
This is a highly configurable periodic build scheduler, which triggers a build at particular times of day, week, month, or year.
The configuration syntax is very similar to the well-known crontab format, in which you provide values for minute, hour, day, and month (some of which can be wildcards), and a build is triggered whenever the current time matches the given constraints.
This can run a build every night, every morning, every weekend, alternate Thursdays, on your boss's birthday, etc.
Pass some subset of minute, hour, dayOfMonth, month, and dayOfWeek; each may be a single number or a list of valid values.
The builds will be triggered whenever the current time matches these values.
Wildcards are represented by a '*' string.
All fields default to a wildcard except 'minute', so with no fields this defaults to a build every hour, on the hour.
The full list of parameters is:
name
- See name scheduler argument.
builderNames
- See builderNames scheduler argument.
properties
(optional)- See properties scheduler argument.
codebases
(optional)- See codebases scheduler argument.
fileIsImportant
(optional)- See fileIsImportant scheduler argument.
change_filter
(optional)- See change_filter scheduler argument.
onlyImportant
(optional)- See onlyImportant scheduler argument.
reason
(optional)- See reason scheduler argument.
createAbsoluteSourceStamps (optional)
- This option only has effect when using multiple codebases.
When True, it uses the last seen revision for each codebase that does not have a change. When False (the default), codebases without changes will use the revision from the codebases argument.
onlyIfChanged (optional)
- If this is True, then builds will not be scheduled at the designated time unless the change filter has accepted an important change since the previous build. The default of this value is False.
branch (optional)
- (deprecated; use change_filter and codebases) The branch to build when the time comes, and the branch to filter for if change_filter is not specified. Remember that a value of None here means the default branch, and will not match other branches!
minute (optional)
- The minute of the hour on which to start the build. This defaults to 0, meaning an hourly build.
hour (optional)
- The hour of the day on which to start the build, in 24-hour notation. This defaults to *, meaning every hour.
dayOfMonth (optional)
- The day of the month to start a build. This defaults to *, meaning every day.
month (optional)
- The month in which to start the build, with January = 1. This defaults to *, meaning every month.
dayOfWeek (optional)
- The day of the week to start a build, with Monday = 0. This defaults to *, meaning every day of the week.
For example, the following master.cfg clause will cause a build to be started every night at 3:00am:
from buildbot.plugins import schedulers
c['schedulers'].append(
schedulers.Nightly(name='nightly',
branch='master',
builderNames=['builder1', 'builder2'],
hour=3, minute=0))
This scheduler will perform a build each Monday morning at 6:23am and again at 8:23am, but only if someone has committed code in the interim:
c['schedulers'].append(
schedulers.Nightly(name='BeforeWork',
branch='default',
builderNames=['builder1'],
dayOfWeek=0, hour=[6,8], minute=23,
onlyIfChanged=True))
The following runs a build every two hours, using Python's range function:
c['schedulers'].append(
    schedulers.Nightly(name='every2hours',
                       branch=None,  # default branch
                       builderNames=['builder1'],
                       hour=range(0, 24, 2)))
Finally, this example will run only on December 24th:
c['schedulers'].append(
    schedulers.Nightly(name='SleighPreflightCheck',
                       branch=None,  # default branch
                       builderNames=['flying_circuits', 'radar'],
                       month=12,
                       dayOfMonth=24,
                       hour=12,
                       minute=0))
Try Schedulers¶
This scheduler allows developers to use the buildbot try command to trigger builds of code they have not yet committed.
See try for complete details.
Two implementations are available: Try_Jobdir and Try_Userpass.
The former monitors a job directory, specified by the jobdir parameter, while the latter listens for PB connections on a specific port, and authenticates against userpass.
The buildmaster must have a scheduler instance in the config file's schedulers list to receive try requests.
This lets the administrator control who may initiate these trial builds, which branches are eligible for trial builds, and which Builders should be used for them.
The scheduler has various means to accept build requests. All of them enforce more security than the usual buildmaster ports do. Any source code being built can be used to compromise the worker accounts, but in general that code must be checked out from the VC repository first, so only people with commit privileges can get control of the workers. The usual force-build control channels can waste worker time but do not allow arbitrary commands to be executed by people who don’t have those commit privileges. However, the source code patch that is provided with the trial build does not have to go through the VC system first, so it is important to make sure these builds cannot be abused by a non-committer to acquire as much control over the workers as a committer has. Ideally, only developers who have commit access to the VC repository would be able to start trial builds, but unfortunately the buildmaster does not, in general, have access to VC system’s user list.
As a result, the try scheduler requires a bit more configuration. There are currently two ways to set this up:
jobdir (ssh)
This approach creates a command queue directory, called the jobdir, in the buildmaster's working directory. The buildmaster admin sets the ownership and permissions of this directory to only grant write access to the desired set of developers, all of whom must have accounts on the machine. The buildbot try command creates a special file containing the source stamp information and drops it in the jobdir, just like a standard maildir. When the buildmaster notices the new file, it unpacks the information inside and starts the builds.
The config file entries used by 'buildbot try' either specify a local queuedir (for which write and mv are used) or a remote one (using scp and ssh).
The advantage of this scheme is that it is quite secure; the disadvantage is that it requires fiddling outside the buildmaster config (to set the permissions on the jobdir correctly). If the buildmaster machine happens to also house the VC repository, then it can be fairly easy to keep the VC userlist in sync with the trial-build userlist. If they are on different machines, this will be much more of a hassle. It may also involve granting developer accounts on a machine that would not otherwise require them.
To implement this, the buildbot try command invokes ssh -l username host buildbot tryserver ARGS, passing the patch contents over stdin. The arguments must include the inlet directory and the revision information.
user+password (PB)
In this approach, each developer gets a username/password pair, which are all listed in the buildmaster's configuration file. When the developer runs buildbot try, their machine connects to the buildmaster via PB and authenticates themselves using that username and password, then sends a PB command to start the trial build.
The advantage of this scheme is that the entire configuration is performed inside the buildmaster's config file. The disadvantages are that it is less secure (while the cred authentication system does not expose the password in plaintext over the wire, it does not offer most of the other security properties that SSH does). In addition, the buildmaster admin is responsible for maintaining the username/password list, adding and deleting entries as developers come and go.
For example, to set up the jobdir style of trial build, using a command queue directory of MASTERDIR/jobdir (and assuming that all your project developers were members of the developers unix group), you would first set up that directory:
mkdir -p MASTERDIR/jobdir MASTERDIR/jobdir/new MASTERDIR/jobdir/cur MASTERDIR/jobdir/tmp
chgrp developers MASTERDIR/jobdir MASTERDIR/jobdir/*
chmod g+rwx,o-rwx MASTERDIR/jobdir MASTERDIR/jobdir/*
and then use the following scheduler in the buildmaster’s config file:
from buildbot.plugins import schedulers
s = schedulers.Try_Jobdir(name="try1",
builderNames=["full-linux", "full-netbsd",
"full-OSX"],
jobdir="jobdir")
c['schedulers'] = [s]
Note that you must create the jobdir before telling the buildmaster to use this configuration, otherwise you will get an error.
Also remember that the buildmaster must be able to read and write to the jobdir as well.
Be sure to watch the twistd.log file (Logfiles) as you start using the jobdir, to make sure the buildmaster is happy with it.
Note
Patches in the jobdir are encoded using netstrings, which place an arbitrary upper limit on patch size of 99999 bytes. If your submitted try jobs are rejected with BadJobfile, try increasing this limit with a snippet like this in your master.cfg:
from twisted.protocols.basic import NetstringReceiver
NetstringReceiver.MAX_LENGTH = 1000000
To use the username/password form of authentication, create a Try_Userpass instance instead.
It takes the same builderNames argument as the Try_Jobdir form, but accepts an additional port argument (to specify the TCP port to listen on) and a userpass list of username/password pairs to accept.
Remember to use good passwords for this: the security of the worker accounts depends upon it:
from buildbot.plugins import schedulers
s = schedulers.Try_Userpass(name="try2",
builderNames=["full-linux", "full-netbsd",
"full-OSX"],
port=8031,
userpass=[("alice","pw1"), ("bob", "pw2")])
c['schedulers'] = [s]
Like most places in the buildbot, the port argument takes a strports specification. See twisted.application.strports for details.
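With the Try_Userpass scheduler above in place, a developer might submit a try job along these lines (the hostname is a placeholder; see the try documentation for all options):
buildbot try --connect=pb \
    --master=buildmaster.example.com:8031 \
    --username=alice --passwd=pw1 \
    --vc=git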
Triggerable Scheduler¶
The Triggerable scheduler waits to be triggered by a Trigger step (see Triggering Schedulers) in another build.
That step can optionally wait for the scheduler's builds to complete.
This provides two advantages over Dependent schedulers.
First, the same scheduler can be triggered from multiple builds.
Second, the ability to wait for a Triggerable's builds to complete provides a form of “subroutine call”, where one or more builds can “call” a scheduler to perform some work for them, perhaps on other workers.
The Triggerable scheduler supports multiple codebases.
The scheduler filters out all codebases from Trigger steps that are not configured in the scheduler.
The parameters are just the basics:
name
- See name scheduler argument.
builderNames
- See builderNames scheduler argument.
properties
(optional)- See properties scheduler argument.
codebases
(optional)- See codebases scheduler argument.
reason
(optional)- See reason scheduler argument.
This class is only useful in conjunction with the Trigger step.
Here is a fully-worked example:
from buildbot.plugins import schedulers, util, steps
checkin = schedulers.SingleBranchScheduler(name="checkin",
branch=None,
treeStableTimer=5*60,
builderNames=["checkin"])
nightly = schedulers.Nightly(name='nightly',
branch=None,
builderNames=['nightly'],
hour=3, minute=0)
mktarball = schedulers.Triggerable(name="mktarball", builderNames=["mktarball"])
build = schedulers.Triggerable(name="build-all-platforms",
builderNames=["build-all-platforms"])
test = schedulers.Triggerable(name="distributed-test",
builderNames=["distributed-test"])
package = schedulers.Triggerable(name="package-all-platforms",
builderNames=["package-all-platforms"])
c['schedulers'] = [mktarball, checkin, nightly, build, test, package]
# on checkin, make a tarball, build it, and test it
checkin_factory = util.BuildFactory()
checkin_factory.addStep(steps.Trigger(schedulerNames=['mktarball'],
waitForFinish=True))
checkin_factory.addStep(steps.Trigger(schedulerNames=['build-all-platforms'],
waitForFinish=True))
checkin_factory.addStep(steps.Trigger(schedulerNames=['distributed-test'],
waitForFinish=True))
# and every night, make a tarball, build it, and package it
nightly_factory = util.BuildFactory()
nightly_factory.addStep(steps.Trigger(schedulerNames=['mktarball'],
waitForFinish=True))
nightly_factory.addStep(steps.Trigger(schedulerNames=['build-all-platforms'],
waitForFinish=True))
nightly_factory.addStep(steps.Trigger(schedulerNames=['package-all-platforms'],
waitForFinish=True))
NightlyTriggerable Scheduler¶
class buildbot.schedulers.timed.NightlyTriggerable¶
The NightlyTriggerable scheduler is a mix of the Nightly and Triggerable schedulers.
This scheduler triggers builds at a particular time of day, week, or year, exactly as the Nightly scheduler does.
However, the source stamp set that is used is the one provided by the last Trigger step that targeted this scheduler.
The parameters are just the basics:
name
- See name scheduler argument.
builderNames
- See builderNames scheduler argument.
properties
(optional)- See properties scheduler argument.
codebases
(optional)- See codebases scheduler argument.
reason
(optional)- See reason scheduler argument.
minute (optional)
- See Nightly.
hour (optional)
- See Nightly.
dayOfMonth (optional)
- See Nightly.
month (optional)
- See Nightly.
dayOfWeek (optional)
- See Nightly.
This class is only useful in conjunction with the Trigger step.
Note that waitForFinish is ignored by Trigger steps targeting this scheduler.
Here is a fully-worked example:
from buildbot.plugins import schedulers, util, steps
checkin = schedulers.SingleBranchScheduler(name="checkin",
branch=None,
treeStableTimer=5*60,
builderNames=["checkin"])
nightly = schedulers.NightlyTriggerable(name='nightly',
builderNames=['nightly'],
hour=3, minute=0)
c['schedulers'] = [checkin, nightly]
# on checkin, run tests
checkin_factory = util.BuildFactory([
steps.Test(),
steps.Trigger(schedulerNames=['nightly'])
])
# and every night, package the latest successful build
nightly_factory = util.BuildFactory([
steps.ShellCommand(command=['make', 'package'])
])
ForceScheduler Scheduler¶
The ForceScheduler scheduler is the way you can configure a force build form in the web UI.
In the /#/builders/:builderid web page, you will see, on the top right of the page, one button for each ForceScheduler scheduler that was configured for this builder.
If you click on that button, a dialog will let you choose various parameters for requesting a new build.
The Buildbot framework allows you to customize exactly how the build form looks, which builders have a force build form (it might not make sense to force build every builder), and who is allowed to force builds on which builders.
You do so by configuring a ForceScheduler and adding it to the schedulers list.
The scheduler takes the following parameters:
name
- See name scheduler argument. Force buttons are ordered by this property in the UI (so you can prefix the names with 01, 02, etc. in order to control precisely the order).
builderNames
- List of builders where the force button should appear. See builderNames scheduler argument.
reason
- A parameter allowing the user to specify the reason for the build. The default value is a string parameter with a default value of “force build”.
reasonString
- A string that will be used to create the build reason for the forced build. This string can contain the placeholders %(owner)s and %(reason)s, which represent the owner of the build request and the value typed into the reason field, respectively.
username
- A parameter specifying the username associated with the build (aka owner). The default value is a username parameter.
codebases
- A list of strings or CodebaseParameter specifying the codebases that should be presented. The default is a single codebase with no name (i.e. codebases=['']).
properties
- A list of parameters, one for each property. These can be arbitrary parameters, where the parameter's name is taken as the property name, or AnyPropertyParameter, which allows the web user to specify the property name. The default value is an empty list.
buttonName
- The name of the “submit” button on the resulting force-build form. This defaults to the name of the scheduler.
An example may be better than a long explanation. What you need in your config file is something like:
from buildbot.plugins import schedulers, util
sch = schedulers.ForceScheduler(
name="force",
buttonName="pushMe!",
label="My nice Force form",
builderNames=["my-builder"],
codebases=[
util.CodebaseParameter(
"",
label="Main repository",
# will generate a combo box
branch=util.ChoiceStringParameter(
name="branch",
choices=["master", "hest"],
default="master"),
# will generate nothing in the form, but revision, repository,
# and project are needed by buildbot scheduling system so we
# need to pass a value ("")
revision=util.FixedParameter(name="revision", default=""),
repository=util.FixedParameter(name="repository", default=""),
project=util.FixedParameter(name="project", default=""),
),
],
# will generate a text input
reason=util.StringParameter(name="reason",
label="reason:",
required=True, size=80),
# in case you don't require authentication this will display
# input for user to type his name
username=util.UserNameParameter(label="your name:",
size=80),
# A completely customized property list. The name of the
# property is the name of the parameter
properties=[
util.NestedParameter(name="options", label="Build Options", layout="vertical", fields=[
util.StringParameter(name="pull_url",
label="optionally give a public Git pull url:",
default="", size=80),
util.BooleanParameter(name="force_build_clean",
label="force a make clean",
default=False)
])
])
The force scheduler uses the web interface's authorization framework to determine which user has the right to force which build. Here is an example showing how you can define which users have the right to force which builds:
import re

user_mapping = {
re.compile("project1-builder"): ["project1-maintainer", "john"] ,
re.compile("project2-builder"): ["project2-maintainer", "jack"],
re.compile(".*"): ["root"]
}
def force_auth(user, status):
global user_mapping
for r,users in user_mapping.items():
if r.match(status.name):
if user in users:
return True
return False
# use authz_cfg in your WebStatus setup
authz_cfg=authz.Authz(
auth=my_auth,
forceBuild = force_auth,
)
Most of the arguments to ForceScheduler are “parameters”.
Several classes of parameters are available, each describing a different kind of input from a force-build form.
All parameter types have a few common arguments:
name (required)
The name of the parameter. For properties, this will correspond to the name of the property that your parameter will set. The name is also used internally as the identifier for the parameter in the HTML form.
label (optional; default is same as name)
The label of the parameter. This is what is displayed to the user.
tablabel (optional; default is same as label)
The label of the tab if this parameter is included into a tab layout NestedParameter. This is what is displayed to the user.
default (optional; default: "")
The default value for the parameter, which is used if there is no user input.
required (optional; default: False)
If this is true, then an error will be shown to the user if there is no input in this field.
maxsize (optional; default: None)
The maximum size of a field (in bytes). Buildbot will ensure the field sent by the user is not too large.
autopopulate (optional; default: None)
If not None, autopopulate is a dictionary which describes how other parameters are updated if this one changes. This is useful when you have lots of parameters, and defaults depend on, e.g., the branch. This is implemented generically, and all parameters can update others. Beware of infinite loops!
c['schedulers'].append(schedulers.ForceScheduler(
    name="custom",
    builderNames=["runtests"],
    buttonName="Start Custom Build",
    codebases=[util.CodebaseParameter(
        codebase='',
        project=None,
        branch=util.ChoiceStringParameter(
            name="branch",
            label="Branch",
            strict=False,
            choices=["master", "dev"],
            autopopulate={
                'master': {
                    'build_name': 'build for master branch',
                },
                'dev': {
                    'build_name': 'build for dev branch',
                }
            }))],
    properties=[
        # this parameter will be auto populated when the user chooses a branch
        util.StringParameter(
            name="build_name",
            label="Name of the Build release.",
            default="")]))
The parameter types are:
NestedParameter(name="options", label="Build options", layout="vertical", fields=[...])
This parameter type is a special parameter which contains other parameters. This can be used to group a set of parameters together, and define the layout of your form. You can recursively include NestedParameter into NestedParameter, to build very complex UI.
It adds the following arguments:
layout
(optional, default: “vertical”)
The layout defines how the fields are placed in the form.
The layouts implemented in the standard web application are:
simple: fields are displayed one by one without alignment. They take the horizontal space that they need.
vertical: all fields are displayed vertically, aligned in columns (as per the column attribute of the NestedParameter).
tabs: each field gets its own tab. This can be used to declare complex build forms which won't fit into one screen. The children fields are usually other NestedParameters with vertical layout.
columns (optional, accepted values are 1, 2, 3, 4)
The number of columns to use for a vertical layout. If omitted, it is set to 1 unless there are more than 3 visible child fields, in which case it is set to 2.
FixedParameter(name="branch", default="trunk"),
This parameter type will not be shown on the web form, and always generate a property with its default value.
StringParameter(name="pull_url",
label="optionally give a public Git pull url:",
default="", size=80)
This parameter type will show a single-line text-entry box, and allow the user to enter an arbitrary string. It adds the following arguments:
regex (optional)
A string that will be compiled as a regex, and used to validate the input of this parameter.
size (optional; default: 10)
The width of the input field (in characters).
TextParameter(name="comments",
label="comments to be displayed to the user of the built binary",
default="This is a development build", cols=60, rows=5)
This parameter type is similar to StringParameter, except that it is represented in the HTML form as a textarea, allowing multi-line input.
In addition to the arguments of StringParameter, this type supports:
cols (optional; default: 80)
The number of columns the textarea will have.
rows (optional; default: 20)
The number of rows the textarea will have.
This class could be subclassed in order to provide more customization, e.g. the developer could send:
- a list of Git branches to pull from
- a list of Gerrit changes to cherry-pick
- a shell script to amend the build
Beware of security issues anyway.
IntParameter(name="debug_level",
label="debug level (1-10)", default=2)
This parameter type accepts an integer value using a text-entry box.
BooleanParameter(name="force_build_clean",
label="force a make clean", default=False)
This type represents a boolean value. It will be presented as a checkbox.
UserNameParameter(label="your name:", size=80)
This parameter type accepts a username. If authentication is active, it will use the authenticated user instead of displaying a text-entry box.
size (optional; default: 10)
- The width of the input field (in characters).
need_email (optional; default: True)
- If true, require a full email address rather than arbitrary text.
ChoiceStringParameter(name="branch",
choices=["main","devel"], default="main")
This parameter type lets the user choose between several choices (e.g. the list of branches you are supporting, or the test campaign to run).
If multiple is false, then its result is a string - one of the choices.
If multiple is true, then the result is a list of strings from the choices.
Note that for some use cases, the choices need to be generated dynamically.
This can be done via subclassing and overriding the 'getChoices' member function.
An example of this is provided by the source for the InheritBuildParameter class.
Its arguments, in addition to the common options, are:
choices
The list of available choices.
strict (optional; default: True)
If true, verify that the user's input is from the list. Note that this only affects the validation of the form request; even if this argument is False, there is no HTML form component available to enter an arbitrary value.
multiple
If true, then the user may select multiple choices.
Example:
ChoiceStringParameter(name="forced_tests",
label="smoke test campaign to run",
default=default_tests,
multiple=True,
strict=True,
choices=["test_builder1", "test_builder2",
"test_builder3"])
# .. and later base the schedulers to trigger off this property:
# triggers the tests depending on the property forced_test
builder1.factory.addStep(Trigger(name="Trigger tests",
schedulerNames=Property("forced_tests")))
CodebaseParameter(codebase="myrepo")
This is a parameter group to specify a sourcestamp for a given codebase.
codebase
The name of the codebase.
branch (optional; default: StringParameter)
A parameter specifying the branch to build. The default value is a string parameter.
revision (optional; default: StringParameter)
A parameter specifying the revision to build. The default value is a string parameter.
repository (optional; default: StringParameter)
A parameter specifying the repository for the build. The default value is a string parameter.
project (optional; default: StringParameter)
A parameter specifying the project for the build. The default value is a string parameter.
patch (optional; default: None)
A PatchParameter specifying that the user can upload a patch for this codebase.
FileParameter
This parameter allows the user to upload a file to a build.
The user can either write some text to a text area, or select a file from the browser.
Note that the file is then stored inside a property, so a maxsize of 10 megabytes has been set.
You can still override that maxsize if you wish.
PatchParameter
This parameter allows the user to specify a patch to be applied at the source step.
The patch is stored within the sourcestamp, and associated with a codebase.
That is why PatchParameter must be set inside a CodebaseParameter.
PatchParameter is actually a NestedParameter composed of the following fields:
FileParameter('body'),
IntParameter('level', default=1),
StringParameter('author', default=""),
StringParameter('comment', default=""),
StringParameter('subdir', default=".")
You can customize any of these fields by overwriting their field name, e.g.:
c['schedulers'] = [
schedulers.ForceScheduler(
name="force",
codebases=[util.CodebaseParameter("foo", patch=util.PatchParameter(
body=FileParameter('body', maxsize=10000)))], # override the maximum size of a patch to 10k instead of 10M
builderNames=["testy"])]
Note
InheritBuildParameter is not yet ported to the data API, and cannot be used with buildbot nine yet (bug #3521).
This is a special parameter for inheriting force build properties from another build. The user is presented with a list of compatible builds from which to choose, and all forced-build parameters from the selected build are copied into the new build. The new parameter is:
compatible_builds
- A function to find compatible builds in the build history. This function is given the master status instance as first argument, and the current builder name as second argument, or None when forcing all builds.
Example:
def get_compatible_builds(status, builder):
if builder is None: # this is the case for force_build_all
return ["cannot generate build list here"]
# find all successful builds in builder1 and builder2
builds = []
for builder in ["builder1","builder2"]:
builder_status = status.getBuilder(builder)
for num in range(1,40): # 40 last builds
b = builder_status.getBuild(-num)
if not b:
continue
if b.getResults() == FAILURE:
continue
builds.append(builder+"/"+str(b.getNumber()))
return builds
# ...
sched = Scheduler(...,
properties=[
InheritBuildParameter(
name="inherit",
label="promote a build for merge",
compatible_builds=get_compatible_builds,
required = True),
])
Note
WorkerChoiceParameter is not yet ported to the data API, and cannot be used with buildbot nine yet (bug #3521).
This parameter allows a scheduler to require that a build is assigned to the chosen worker.
The choice is assigned to the workername property for the build.
The enforceChosenWorker functor must be assigned to the canStartBuild parameter for the Builder.
Example:
from buildbot.plugins import util
# schedulers:
ForceScheduler(
# ...
properties=[
WorkerChoiceParameter(),
]
)
# builders:
BuilderConfig(
# ...
canStartBuild=util.enforceChosenWorker,
)
AnyPropertyParameter
This parameter type can only be used in properties, and allows the user to specify both the property name and value in the web form.
This parameter is here to reimplement old Buildbot behavior, and should be avoided. Stricter parameter names and types should be preferred.
2.5.6. Workers¶
The workers configuration key specifies a list of known workers.
In the common case, each worker is defined by an instance of the buildbot.worker.Worker class.
It represents a standard, manually started machine that will try to connect to the Buildbot master as a worker.
Buildbot also supports “on-demand”, or latent, workers, which allow Buildbot to dynamically start and stop worker instances.
2.5.6.1. Defining Workers¶
A Worker instance is created with a workername and a workerpassword.
These are the same two values that need to be provided to the worker administrator when they create the worker.
The workername must be unique, of course.
The password exists to prevent evildoers from interfering with the Buildbot by inserting their own (broken) workers into the system and thus displacing the real ones.
Workers with an unrecognized workername or a non-matching password will be rejected when they attempt to connect, and a message describing the problem will be written to the log file (see Logfiles).
A configuration for two workers would look like:
from buildbot.plugins import worker
c['workers'] = [
worker.Worker('bot-solaris', 'solarispasswd'),
worker.Worker('bot-bsd', 'bsdpasswd'),
]
2.5.6.2. Worker Options¶
Properties¶
Worker objects can also be created with an optional properties argument, a dictionary specifying properties that will be available to any builds performed on this worker.
For example:
c['workers'] = [
worker.Worker('bot-solaris', 'solarispasswd',
properties={ 'os':'solaris' }),
]
Worker properties have priority over other sources (Builder, Scheduler, etc.).
You may use the defaultProperties parameter, whose values will only be added to Build Properties if they are not already set by another source:
c['workers'] = [
worker.Worker('fast-bot', 'fast-passwd',
defaultProperties={'parallel_make': 10}),
]
Limiting Concurrency¶
The Worker constructor can also take an optional max_builds parameter to limit the number of builds that it will execute simultaneously:
c['workers'] = [
worker.Worker('bot-linux', 'linuxpassword',
max_builds=2),
]
Note
Under the Worker For Builders concept, only one build from the same builder will run on a worker at a time.
Master-Worker TCP Keepalive¶
By default, the buildmaster sends a simple, non-blocking message to each worker every hour. These keepalives ensure that traffic is flowing over the underlying TCP connection, allowing the system’s network stack to detect any problems before a build is started.
The interval can be modified by specifying the interval in seconds using the keepalive_interval parameter of Worker (defaults to 3600):
c['workers'] = [
worker.Worker('bot-linux', 'linuxpasswd',
keepalive_interval=3600)
]
The interval can be set to None to disable this functionality altogether.
When Workers Go Missing¶
Sometimes, the workers go away. One very common reason for this is when the worker process is started once (manually) and left running, but then later the machine reboots and the process is not automatically restarted.
If you'd like to have the administrator of the worker (or other people) be notified by email when the worker has been missing for too long, just add the notify_on_missing= argument to the Worker definition.
This value can be a single email address, or a list of addresses:
c['workers'] = [
worker.Worker('bot-solaris', 'solarispasswd',
notify_on_missing='bob@example.com')
]
By default, this will send email when the worker has been disconnected for more than one hour.
Only one email per connection-loss event will be sent.
To change the timeout, use missing_timeout= and give it a number of seconds (the default is 3600).
You can have the buildmaster send email to multiple recipients: just provide a list of addresses instead of a single one:
c['workers'] = [
worker.Worker('bot-solaris', 'solarispasswd',
notify_on_missing=['bob@example.com',
'alice@example.org'],
missing_timeout=300) # notify after 5 minutes
]
The email sent this way will use a MailNotifier (see MailNotifier) status target, if one is configured.
This provides a way for you to control the from address of the email, as well as the relayhost (aka smarthost) to use as an SMTP server.
If no MailNotifier is configured on this buildmaster, the worker-missing emails will be sent using a default configuration.
Note that if you want to have a MailNotifier for worker-missing emails but not for regular build emails, just create one with builders=[], as follows:
from buildbot.plugins import reporters, worker
m = reporters.MailNotifier(fromaddr='buildbot@localhost', builders=[],
                           relayhost='smtp.example.org')
c['reporters'].append(m)
c['workers'] = [
worker.Worker('bot-solaris', 'solarispasswd',
notify_on_missing='bob@example.com')
]
Worker States¶
There are some times when a worker misbehaves because of issues with its configuration. In those cases, you may want to pause the worker, or maybe completely shut it down.
There are three actions that you may take (in the worker’s web page Actions dialog):
- Pause: If a worker is paused, it won’t accept new builds. Pausing a worker does not affect any ongoing build.
- Graceful Shutdown: If a worker is in graceful shutdown mode, it won’t accept new builds, but will finish the current builds. When all of its builds are finished, the buildbot-worker process will terminate.
- Force Shutdown: If a worker is in force shutdown mode, it will terminate immediately, and any build it was running will be put into the retry state.
Those actions will put the worker in one of two states:
- paused: the worker is paused if it is connected but doesn’t accept new builds.
- graceful: the worker is graceful if it doesn’t accept new builds, and will shut down when its builds are finished.
A worker might be put into the paused state automatically if buildbot detects misbehavior.
This is called the quarantine timer.
The quarantine timer is an exponential back-off mechanism for workers.
It prevents a misbehaving worker from eating the build queue by quickly finishing builds in the EXCEPTION state.
When misbehavior is detected, the timer pauses the worker for 10 seconds, and that time doubles at each subsequent misbehavior detection, until the worker finishes a build.
The first case of misbehavior is a latent worker failing to start properly.
The second case of misbehavior is a build ending with an EXCEPTION status.
Worker states are stored in the database, can be queried via the REST API, and are visible in the UI’s workers page.
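For instance, the state flags can be inspected programmatically. The following is a minimal sketch, assuming the master serves its web UI and REST API on localhost:8010 (as in the sample configuration) and that the requests package is available:
import requests

# Query all workers via the REST API and print their state flags.
# The 'paused' and 'graceful' fields correspond to the states above.
r = requests.get('http://localhost:8010/api/v2/workers')
for w in r.json()['workers']:
    print(w['name'], 'paused:', w['paused'], 'graceful:', w['graceful'])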
2.5.6.3. Local Workers¶
For smaller setups, you may want to just run the workers on the same machine as the master. To simplify the maintenance, you may even want to run them in the same process.
This is what LocalWorker is for.
Instead of configuring a worker.Worker, you configure a worker.LocalWorker.
As the worker runs in the same process, a password is not necessary.
You can run as many local workers as your machine’s CPU and memory allow.
A configuration for two workers would look like:
from buildbot.plugins import worker
c['workers'] = [
worker.LocalWorker('bot1'),
worker.LocalWorker('bot2'),
]
In order to use local workers you need to have the buildbot-worker package installed.
2.5.6.4. Latent Workers¶
The standard Buildbot model has workers started manually. The previous section described how to configure the master for this approach.
Another approach is to let the Buildbot master start workers when builds are ready, on-demand. Thanks to services such as Amazon Web Services’ Elastic Compute Cloud (“AWS EC2”), this is relatively easy to set up, and can be very useful for some situations.
The workers that are started on-demand are called “latent” workers. You can find the list of Supported Latent Workers below.
Common Options¶
The following options are available for all latent workers.
build_wait_timeout
- This option allows you to specify how long a latent worker should wait after a build for another build before it shuts down. It defaults to 10 minutes. If this is set to 0, the worker will be shut down immediately after each build. If it is less than 0, the worker will be shut down only when the master shuts down.
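For example, a minimal sketch using the EC2LatentWorker described below (the AMI id and key names are placeholders reused from the examples later in this chapter):
from buildbot.plugins import worker

# Shut this latent worker down immediately after each build instead of
# keeping the instance alive for the default 10 minutes.
c['workers'] = [
    worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large',
                           ami='ami-12345',
                           keypair_name='latent_buildbot_worker',
                           security_name='latent_buildbot_worker',
                           build_wait_timeout=0)
]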
Supported Latent Workers¶
As of this writing, Buildbot supports the following latent workers:
class buildbot.worker.ec2.EC2LatentWorker¶
EC2 is a web service that allows you to start virtual machines in an Amazon data center. Please see their website for details, including costs. Using the AWS EC2 latent workers involves getting an EC2 account with AWS and setting up payment; customizing one or more EC2 machine images (“AMIs”) on your desired operating system(s) and publishing them (privately if needed); and configuring the buildbot master to know how to start your customized images for “substantiating” your latent workers.
This document will guide you through the setup of an AWS EC2 latent worker:
To start off, to use the AWS EC2 latent worker, you need to get an AWS developer account and sign up for EC2. Although Amazon often changes this process, these instructions should help you get started:
- Go to http://aws.amazon.com/ and click “Sign Up Now” for an AWS account.
- Once you are logged into your account, you need to sign up for EC2. Instructions for how to do this have changed over time because Amazon changes their website, so the best advice is to hunt for it. After signing up for EC2, it may say it wants you to upload an x.509 cert. You will need this to create images (see below) but it is not technically necessary for the buildbot master configuration.
- You must enter a valid credit card before you will be able to use EC2. Do that under ‘Payment Method’.
- Make sure you’re signed up for EC2 by going to your account page and verifying that EC2 is listed.
Now you need to create an AMI and configure the master. You may need to run through this cycle a few times to get it working, but these instructions should get you started.
Creating an AMI is out of the scope of this document. The EC2 Getting Started Guide is a good resource for this task. Here are a few additional hints.
- When an instance of the image starts, it needs to automatically start a buildbot worker that connects to your master (to create a buildbot worker, Creating a worker; to make a daemon, Launching the daemons).
- You may want to make an instance of the buildbot worker, configure it as a standard worker in the master (i.e., not as a latent worker), and test and debug it that way before you turn it into an AMI and convert to a latent worker in the master.
- In order to avoid extra costs in case of master failure, you should configure the AMI’s worker with the maxretries option (see Worker Options). Also see the example systemd unit file.
Configure the Master with an EC2LatentWorker¶
Now let’s assume you have an AMI that should work with the EC2LatentWorker.
It’s now time to set up your buildbot master configuration.
You will need some information from your AWS account: the Access Key Id and the Secret Access Key. If you’ve built the AMI yourself, you probably already are familiar with these values. If you have not, and someone has given you access to an AMI, these hints may help you find the necessary values:
- While logged into your AWS account, find the “Access Identifiers” link (either on the left, or via your account menu).
- On the page, you’ll see alphanumeric values for “Your Access Key Id:” and “Your Secret Access Key:”.
Make a note of these. Later on, we’ll call the first one your identifier and the second one your secret_identifier.
When creating an EC2LatentWorker in the buildbot master configuration, the first three arguments are required.
The name and password are the first two arguments, and work the same as with normal workers.
The next argument specifies the type of the EC2 virtual machine (available options as of this writing include m1.small, m1.large, m1.xlarge, c1.medium, and c1.xlarge; see the EC2 documentation for descriptions of these machines).
Here is the simplest example of configuring an EC2 latent worker. It specifies all necessary remaining values explicitly in the instantiation.
from buildbot.plugins import worker
c['workers'] = [
worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large',
ami='ami-12345',
identifier='publickey',
secret_identifier='privatekey',
keypair_name='latent_buildbot_worker',
security_name='latent_buildbot_worker',
)
]
The ami argument specifies the AMI that the master should start.
The identifier argument specifies the AWS Access Key Id, and the secret_identifier specifies the AWS Secret Access Key.
Both the AMI and the account information can be specified in alternate ways.
Note
Whoever has your identifier and secret_identifier values can request AWS work charged to your account, so these values need to be carefully protected.
Another way to specify these access keys is to put them in a separate file.
Buildbot supports the standard AWS credentials file.
You can then make the access privileges stricter for this separate file, and potentially let more people read your main configuration file.
If your master is running in EC2, you can also use IAM roles for EC2 to delegate permissions.
keypair_name and security_name allow you to specify different names for these AWS EC2 values.
You can make an .aws directory in the home folder of the user running the buildbot master.
In that directory, create a file called credentials.
The format of the file should be as follows, replacing identifier and secret_identifier with the credentials obtained before:
[default]
aws_access_key_id = identifier
aws_secret_access_key = secret_identifier
If you are using IAM roles, no config file is required. Then you can instantiate the worker as follows.
from buildbot.plugins import worker
c['workers'] = [
worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large',
ami='ami-12345',
keypair_name='latent_buildbot_worker',
security_name='latent_buildbot_worker',
)
]
Previous examples used a particular AMI. If the Buildbot master will be deployed in a process-controlled environment, it may be convenient to specify the AMI more flexibly. Rather than specifying an individual AMI, specify one or two AMI filters.
In all cases, the AMI that sorts last by its location (the S3 bucket and manifest name) will be preferred.
One available filter is to specify the acceptable AMI owners, by AWS account number (the 12 digit number, usually rendered in AWS with hyphens like “1234-5678-9012”, should be entered as an integer).
from buildbot.plugins import worker
bot1 = worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large',
valid_ami_owners=[11111111111,
22222222222],
identifier='publickey',
secret_identifier='privatekey',
keypair_name='latent_buildbot_worker',
security_name='latent_buildbot_worker',
)
The other available filter is to provide a regular expression string that will be matched against each AMI’s location (the S3 bucket and manifest name).
from buildbot.plugins import worker
bot1 = worker.EC2LatentWorker(
'bot1', 'sekrit', 'm1.large',
valid_ami_location_regex=r'buildbot\-.*/image.manifest.xml',
identifier='publickey',
secret_identifier='privatekey',
keypair_name='latent_buildbot_worker',
security_name='latent_buildbot_worker',
)
The regular expression can specify a group, which will be preferred for the sorting. Only the first group is used; subsequent groups are ignored.
from buildbot.plugins import worker
bot1 = worker.EC2LatentWorker(
'bot1', 'sekrit', 'm1.large',
valid_ami_location_regex=r'buildbot\-.*\-(.*)/image.manifest.xml',
identifier='publickey',
secret_identifier='privatekey',
keypair_name='latent_buildbot_worker',
security_name='latent_buildbot_worker',
)
If the group can be cast to an integer, it will be. This allows 10 to sort after 1, for instance.
from buildbot.plugins import worker
bot1 = worker.EC2LatentWorker(
'bot1', 'sekrit', 'm1.large',
valid_ami_location_regex=r'buildbot\-.*\-(\d+)/image.manifest.xml',
identifier='publickey',
secret_identifier='privatekey',
keypair_name='latent_buildbot_worker',
security_name='latent_buildbot_worker',
)
In addition to using the password as a handshake between the master and the worker, you may want to use a firewall to assert that only machines from a specific IP can connect as workers.
This is possible with AWS EC2 by using the Elastic IP feature.
To configure this, generate an Elastic IP in AWS, and then specify it in your configuration using the elastic_ip argument.
from buildbot.plugins import worker
c['workers'] = [
worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large',
'ami-12345',
identifier='publickey',
secret_identifier='privatekey',
elastic_ip='208.77.188.166',
keypair_name='latent_buildbot_worker',
security_name='latent_buildbot_worker',
)
]
One other way to configure a worker is by setting AWS tags.
They can, for example, be used to have a more restrictive IAM security policy.
To get Buildbot to tag the latent worker, specify the tag keys and values in your configuration using the tags argument.
from buildbot.plugins import worker
c['workers'] = [
worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large',
'ami-12345',
identifier='publickey',
secret_identifier='privatekey',
keypair_name='latent_buildbot_worker',
security_name='latent_buildbot_worker',
tags={'SomeTag': 'foo'})
]
If the worker needs access to additional AWS resources, you can also enable your workers to access them via an EC2 instance profile. To use this capability, you must first create an instance profile separately in AWS. Then specify its name on EC2LatentWorker via instance_profile_name.
from buildbot.plugins import worker
c['workers'] = [
worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large',
ami='ami-12345',
keypair_name='latent_buildbot_worker',
security_name='latent_buildbot_worker',
instance_profile_name='my_profile'
)
]
You may also supply your own boto3.Session object to allow for more flexible session options (e.g., cross-account).
To use this capability, you must first create a boto3.Session object.
Then provide it to the EC2LatentWorker via the session argument.
import boto3
from buildbot.plugins import worker
session = boto3.session.Session()
c['workers'] = [
worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large',
ami='ami-12345',
keypair_name='latent_buildbot_worker',
security_name='latent_buildbot_worker',
session=session
)
]
The EC2LatentWorker supports all other configuration from the standard Worker.
The missing_timeout and notify_on_missing specify how long to wait for an EC2 instance to attach before considering the attempt to have failed, and email addresses to alert, respectively.
missing_timeout defaults to 20 minutes.
If you want to attach existing volumes to an EC2 latent worker, use the volumes attribute.
This mechanism can be valuable if you want to maintain state on a conceptual worker across multiple start/terminate sequences.
volumes expects a list of (volume_id, mount_point) tuples to attempt attaching when your instance has been created.
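A minimal sketch (the volume id and device name are placeholders; the device naming convention depends on the AMI):
from buildbot.plugins import worker

# Attach an existing EBS volume so that state survives across
# start/terminate cycles of the latent worker.
c['workers'] = [
    worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large',
                           ami='ami-12345',
                           keypair_name='latent_buildbot_worker',
                           security_name='latent_buildbot_worker',
                           volumes=[('vol-1234abcd', '/dev/xvdf')])
]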
If you want to attach new ephemeral volumes, use the block_device_map attribute. This follows the AWS API syntax, essentially acting as a passthrough. The only distinction is that the volumes default to deleting on termination to avoid leaking volume resources when workers are terminated. See the boto documentation for further details.
from buildbot.plugins import worker
c['workers'] = [
worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large',
ami='ami-12345',
keypair_name='latent_buildbot_worker',
security_name='latent_buildbot_worker',
block_device_map=[
    {
        "DeviceName": "/dev/xvdb",
        "Ebs": {
            "VolumeType": "io1",
            "Iops": 1000,
            "VolumeSize": 100
        }
    }
]
)
]
If you are managing workers within a VPC, your worker configuration must be modified from the above. You must specify the id of the subnet where you want your worker placed. You must also specify security groups created within your VPC, as opposed to classic EC2 security groups. This can be done by passing the ids of the VPC security groups. Note that when using a VPC, you cannot specify classic EC2 security groups (as specified by security_name).
from buildbot.plugins import worker
c['workers'] = [
worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large',
ami='ami-12345',
keypair_name='latent_buildbot_worker',
subnet_id='subnet-12345',
security_group_ids=['sg-12345','sg-67890']
)
]
If you would prefer to use spot instances for running your builds, you can accomplish that by passing a True value to the spot_instance parameter of the EC2LatentWorker constructor.
Additionally, you may want to specify max_spot_price and price_multiplier in order to limit your builds’ budget consumption.
from buildbot.plugins import worker
c['workers'] = [
worker.EC2LatentWorker('bot1', 'sekrit', 'm1.large',
'ami-12345', region='us-west-2',
identifier='publickey',
secret_identifier='privatekey',
elastic_ip='208.77.188.166',
keypair_name='latent_buildbot_worker',
security_name='latent_buildbot_worker',
placement='b', spot_instance=True,
max_spot_price=0.09,
price_multiplier=1.15,
product_description='Linux/UNIX')
]
This example would attempt to create an m1.large spot instance in the us-west-2b region costing no more than $0.09/hour.
The spot prices for ‘Linux/UNIX’ spot instances in that region over the last 24 hours will be averaged and multiplied by the price_multiplier parameter, then a spot request will be sent to Amazon with the above details.
If the multiple exceeds the max_spot_price, the bid price will be the max_spot_price.
Either max_spot_price or price_multiplier, but not both, may be None.
If price_multiplier is None, then no historical price information is retrieved; the bid price is simply the specified max_spot_price.
If max_spot_price is None, then the multiple of the historical average spot prices is used as the bid price with no limit.
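As a worked example of the configuration above: if the average ‘Linux/UNIX’ spot price over the last 24 hours were $0.07 (a hypothetical figure), the bid would be 0.07 × 1.15 ≈ $0.0805 per hour; had the multiple come out above $0.09, the bid would instead have been capped at the max_spot_price of $0.09.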
class buildbot.worker.libvirt.LibVirtWorker¶
libvirt is a virtualization API for interacting with the virtualization capabilities of recent versions of Linux and other OSes. It is LGPL and comes with a stable C API, and Python bindings.
This means we now have an API which when tied to buildbot allows us to have workers that run under Xen, QEMU, KVM, LXC, OpenVZ, User Mode Linux, VirtualBox and VMWare.
The libvirt code in Buildbot was developed against libvirt 0.7.5 on Ubuntu Lucid. It is used with KVM to test Python code on VMs, but obviously isn’t limited to that. Each build is run on a new VM, images are temporary and thrown away after each build.
This document will guide you through setup of a libvirt latent worker:
We won’t show you how to set up libvirt as it is quite different on each platform, but there are a few things you should keep in mind.
- If you are using the system libvirt (libvirt and buildbot master are on same server), your buildbot master user will need to be in the libvirtd group.
- If libvirt and the buildbot master are on different servers, the user connecting to libvirt over ssh will need to be in the libvirtd group. You also need to set up authorization via ssh keys (without a password prompt).
- If you are using KVM, your buildbot master user will need to be in the KVM group.
- You need to think carefully about your virtual network first. Will NAT be enough? What IP will my VMs need to connect to for connecting to the master?
You need to create a base image for your builds that has everything needed to build your software. You need to configure the base image with a buildbot worker that is configured to connect to the master on boot.
Because this image may need updating a lot, we strongly suggest scripting its creation.
If you want to have multiple workers using the same base image it can be annoying to duplicate the image just to change the buildbot credentials. One option is to use libvirt’s DHCP server to allocate an identity to the worker: DHCP sets a hostname, and the worker takes its identity from that.
Doing all this is really beyond the scope of the manual, but there is a vmbuilder script and a network.xml file to create such a DHCP server in master/contrib/ (Contrib Scripts) that should get you started:
sudo apt-get install ubuntu-vm-builder
sudo contrib/libvirt/vmbuilder
This should create an ubuntu/ folder with a suitable image in it.
virsh net-define contrib/libvirt/network.xml
virsh net-start buildbot-network
This should set up a KVM-compatible libvirt network for your buildbot VMs to run on.
If you want to add a simple on demand VM to your setup, you only need the following.
We set the username to minion1 and the password to sekrit.
The base image is called base_image, and a copy of it will be made for the duration of the VM’s life.
That copy will be thrown away every time a build is complete.
from buildbot.plugins import worker, util
c['workers'] = [
worker.LibVirtWorker('minion1', 'sekrit',
util.Connection("qemu:///session"),
'/home/buildbot/images/minion1',
'/home/buildbot/images/base_image')
]
You can use virt-manager to define minion1 with the correct hardware.
If you don’t, buildbot won’t be able to find a VM to start.
LibVirtWorker accepts the following arguments:
name
- Both a buildbot username and the name of the virtual machine.
password
- A password for the worker to log in to the master with.
connection
- A Connection instance wrapping the connection to libvirt.
hd_image
- The path to a libvirt disk image, normally in qcow2 format when using KVM.
base_image
- If given a base image, buildbot will clone it every time it starts a VM. This means you always have a clean environment to do your build in.
xml
- If a VM isn’t predefined in virt-manager, then you can instead provide XML like that used with virsh define. The VM will be created automatically when needed, and destroyed when not needed any longer.
Note
The hd_image and base_image must be on the same machine as the buildbot master.
If you want to use libvirt on a remote server, configure the remote libvirt server and the buildbot server in the following way:
- Define a user to connect to the remote machine using ssh. Configure this user’s connection to the remote libvirt server (see https://wiki.libvirt.org/page/SSHSetup) so that no password prompt is needed.
- Add the user to the libvirtd group on the remote libvirt server: sudo usermod -G libvirtd -a <user>.
Configure the remote libvirt server:
- Create a virtual machine for buildbot and configure it.
- Rename the virtual machine image file; the new name will be used as a temporary image and deleted after the virtual machine stops. Execute sudo virsh edit <VM name>, locate devices/disk/source in the XML file, and change the file path to the new name. The file must not exist; it will be created via the hook script.
- Add a hook script to /etc/libvirt/hooks/qemu to recreate the VM image on each start:
#!/usr/bin/python
# Script /etc/libvirt/hooks/qemu
# Don't forget to execute "service libvirt-bin restart" after adding it.
# Also see https://www.libvirt.org/hooks.html
# This script makes a clean VM for each start, using the base image.
import os
import subprocess
import sys

images_path = '/var/lib/libvirt/images/'

# build-vm - the VM name as shown by "virsh list --all"
# vm_base_image.qcow2 - base image file name; must exist in /var/lib/libvirt/images/
# vm_temp_image.qcow2 - temporary image; must not exist in /var/lib/libvirt/images/,
#                       but must be referenced in the VM config file
domains = {
    'build-vm': ['vm_base_image.qcow2', 'vm_temp_image.qcow2'],
}


def delete_image_clone(vir_domain):
    if vir_domain in domains:
        domain = domains[vir_domain]
        os.remove(images_path + domain[1])


def create_image_clone(vir_domain):
    if vir_domain in domains:
        domain = domains[vir_domain]
        cmd = ['/usr/bin/qemu-img', 'create', '-b', images_path + domain[0],
               '-f', 'qcow2', images_path + domain[1]]
        subprocess.call(cmd)


if __name__ == "__main__":
    vir_domain, action = sys.argv[1:3]
    if action in ["prepare"]:
        create_image_clone(vir_domain)
    if action in ["release"]:
        delete_image_clone(vir_domain)
Configure the buildbot server:
- On the buildbot server, install the libvirt-python package in the virtual environment: pip install libvirt-python
- Create the worker using a remote ssh connection:
from buildbot.plugins import worker, util
c['workers'] = [
worker.LibVirtWorker('minion1', 'sekrit',
util.Connection("qemu+ssh://<user>@<ip address or DNS name>:<port>/session"),
'/home/buildbot/images/minion1')
]
class buildbot.worker.openstack.OpenStackLatentWorker¶
OpenStack is a series of interconnected components that facilitates managing compute, storage, and network resources in a data center. It is available under the Apache License and has a REST interface along with a Python client.
This document will guide you through setup of an OpenStack latent worker:
OpenStackLatentWorker requires python-novaclient to work; you can install it with pip install python-novaclient.
Setting up OpenStack is outside the domain of this document. There are four account details necessary for the Buildbot master to interact with your OpenStack cloud: username, password, a tenant name, and the auth URL to use.
OpenStack supports a large number of image formats. OpenStack maintains a short list of prebuilt images; if the desired image is not listed, the OpenStack Compute Administration Manual is a good resource for creating new images. You need to configure the image with a buildbot worker to connect to the master on boot.
With the configured image in hand, it is time to configure the buildbot master to create OpenStack instances of it. You will need the aforementioned account details. These are the same details set in either environment variables or passed as options to an OpenStack client.
OpenStackLatentWorker accepts the following arguments:
name
- The worker name.
password
- A password for the worker to login to the master with.
flavor
- The flavor ID to use for the instance.
image
- A string containing the image UUID to use for the instance. A callable may instead be passed. It will be passed the list of available images and must return the image to use.
os_username
os_password
os_tenant_name
os_user_domain
os_project_domain
os_auth_url
- The OpenStack authentication needed to create and delete instances. These are the same as the environment variables with uppercase names of the arguments.
block_devices
- A list of dictionaries. Each dictionary specifies a block device to set up during instance creation. The values support using properties from the build and will be rendered when the instance is started.
Supported keys:
uuid
- (required): The image, snapshot, or volume UUID.
volume_size
- (optional): Size of the block device in GiB. If not specified, the minimum size in GiB to contain the source will be calculated and used.
device_name
- (optional): defaults to vda. The name of the device in the instance; e.g. vda or xda.
source_type
- (optional): defaults to image. The origin of the block device. Valid values are image, snapshot, or volume.
destination_type
- (optional): defaults to volume. Destination of the block device: volume or local.
delete_on_termination
- (optional): defaults to True. Controls if the block device will be deleted when the instance terminates.
boot_index
- (optional): defaults to 0. Integer used for boot order.
meta
- A dictionary of string key-value pairs to pass to the instance. These will be available under the metadata key from the metadata service.
nova_args
- (optional) A dict that will be appended to the arguments when creating a VM. Buildbot uses the OpenStack Nova version 2 API by default (see client_version).
client_version
- (optional) A string containing the Nova client version to use. Defaults to 2. Supports using 2.X, where X is a micro-version. Use 1.1 for the previous, deprecated, version. If using 1.1, note that an older version of novaclient will be needed so it won’t switch to using 2.
region
- (optional) A string specifying the region where to instantiate the worker.
Here is the simplest example of configuring an OpenStack latent worker.
from buildbot.plugins import worker
c['workers'] = [
worker.OpenStackLatentWorker('bot2', 'sekrit',
flavor=1, image='8ac9d4a4-5e03-48b0-acde-77a0345a9ab1',
os_username='user', os_password='password',
os_tenant_name='tenant',
os_auth_url='http://127.0.0.1:35357/v2.0')
]
The image argument also supports being given a callable.
The callable will be passed the list of available images and must return the image to use.
The invocation happens in a separate thread to prevent blocking the build master when interacting with OpenStack.
from buildbot.plugins import worker
def find_image(images):
# Sort oldest to newest.
def key_fn(x):
return x.created
candidate_images = sorted(images, key=key_fn)
# Return the oldest candidate image.
return candidate_images[0]
c['workers'] = [
worker.OpenStackLatentWorker('bot2', 'sekrit',
flavor=1, image=find_image,
os_username='user', os_password='password',
os_tenant_name='tenant',
os_auth_url='http://127.0.0.1:35357/v2.0')
]
The block_devices argument is minimally manipulated to provide some defaults and passed directly to novaclient.
The simplest example is an image that is converted to a volume, with the instance booting from that volume.
When the instance is destroyed, the volume will be terminated as well.
from buildbot.plugins import worker
c['workers'] = [
worker.OpenStackLatentWorker('bot2', 'sekrit',
flavor=1, image='8ac9d4a4-5e03-48b0-acde-77a0345a9ab1',
os_username='user', os_password='password',
os_tenant_name='tenant',
os_auth_url='http://127.0.0.1:35357/v2.0',
block_devices=[
{'uuid': '3f0b8868-67e7-4a5b-b685-2824709bd486',
'volume_size': 10}])
]
The nova_args parameter can be used to specify additional arguments for the novaclient.
For example, network mappings, which are required if your OpenStack tenancy has more than one network and a default cannot be determined.
Please refer to your OpenStack manual to find out whether it wants net-id or net-name.
Other useful parameters are availability_zone, security_groups and config_drive.
Refer to the Python bindings to the OpenStack Nova API for more information; see the Servers section, method create.
from buildbot.plugins import worker
c['workers'] = [
worker.OpenStackLatentWorker('bot2', 'sekrit',
flavor=1, image='8ac9d4a4-5e03-48b0-acde-77a0345a9ab1',
os_username='user', os_password='password',
os_tenant_name='tenant',
os_auth_url='http://127.0.0.1:35357/v2.0',
nova_args={
'nics': [
{'net-id':'uid-of-network'}
]})
]
OpenStackLatentWorker supports all other configuration from the standard Worker.
The missing_timeout and notify_on_missing specify how long to wait for an OpenStack instance to attach before considering the attempt to have failed, and email addresses to alert, respectively.
missing_timeout defaults to 20 minutes.
class buildbot.worker.docker.DockerLatentWorker¶
class buildbot.plugins.worker.DockerLatentWorker¶
Docker is an open-source project that automates the deployment of applications inside software containers.
The DockerLatentWorker attempts to instantiate a fresh image for each build to assure consistency of the environment between builds.
Each image will be discarded once the worker finishes processing the build queue (i.e. becomes idle).
See build_wait_timeout to change this behavior.
This document will guide you through the setup of such workers.
An easy way to try Docker is through the installation of dedicated virtual machines. Two of them stand out:
Besides, it is always possible to install Docker next to the buildmaster. Beware that in this case, overall performance will depend on how many builds the computer hosting your buildmaster can handle, as everything will happen on the same machine.
Note
It is not necessary to install Docker in the same environment as your master, as we will make use of the Docker API through docker-py. More in master setup.
CoreOS is targeted at building infrastructure and distributed systems.
In order to get the latent worker working with CoreOS, it is necessary to expose the docker socket outside of the Virtual Machine.
If you installed it via Vagrant, it is also necessary to uncomment the following line in your config.rb file:
$expose_docker_tcp=2375
The following command should allow you to confirm that your Docker socket is now available via the network:
docker -H tcp://127.0.0.1:2375 ps
boot2docker is one of the fastest ways to boot to Docker. As it is meant to be used from outside of the Virtual Machine, the socket is already exposed. Please follow the installation instructions on how to find the address of your socket.
Our build master will need the name of an image to perform its builds. Each time a new build is requested, the same base image will be used again and again, discarding the results of the previous build. If you need some persistent storage between builds, you can use Volumes.
Each Docker image has a single purpose. Our worker image will be running a buildbot worker.
Docker uses Dockerfiles to describe the steps necessary to build an image.
The following example will build a minimal worker.
This example is voluntarily simplistic, and should probably not be used in production; see the next paragraph.
FROM debian:stable
RUN apt-get update && apt-get install -y \
    python-dev \
    python-pip
RUN pip install buildbot-worker
RUN groupadd -r buildbot && useradd -r -g buildbot buildbot
RUN mkdir /worker && chown buildbot:buildbot /worker
# Install your build-dependencies here ...
USER buildbot
WORKDIR /worker
RUN buildbot-worker create-worker . <master-hostname> <workername> <workerpassword>
ENTRYPOINT ["/usr/local/bin/buildbot-worker"]
CMD ["start", "--nodaemon"]
In the buildbot-worker create-worker line, the hostname for your master instance, as well as the worker name and password, are set up. Don’t forget to replace those values with valid ones for your project.
It is a good practice to set the ENTRYPOINT to the worker executable, and the CMD to ["start", "--nodaemon"].
This way, no parameter will be required when starting the image.
When your Dockerfile is ready, you can build your first image using the following command (replace myworkername with a relevant name for your case):
docker build -t myworkername - < Dockerfile
The previous simple example hardcodes the worker name into the Dockerfile, which will not work if you want to share your docker image between workers.
You can find an example configuration in the buildbot source code in master/contrib/docker:
- pythonnode_worker
- a worker with Python and node installed, which demonstrates how to reuse the base worker to create variations of build environments. It is based on the official buildbot/buildbot-worker image.
The master sets up several environment variables before starting the workers:
BUILDMASTER
- The address of the master the worker shall connect to
BUILDMASTER_PORT
- The port of the master’s worker ‘pb’ protocol.
WORKERNAME
- The name the worker should use to connect to master
WORKERPASS
- The password the worker should use to connect to master
We will rely on docker-py to connect our master with docker. Now is the time to install it in your master environment.
Before adding the worker to your master configuration, it is possible to validate the previous steps by starting the newly created image interactively. To do this, enter the following lines in a Python prompt where docker-py is installed:
>>> import docker
>>> docker_socket = 'tcp://localhost:2375'
>>> client = docker.client.Client(base_url=docker_socket)
>>> worker_image = 'my_project_worker'
>>> container = client.create_container(worker_image)
>>> client.start(container['Id'])
>>> # Optionally examine the logs of the master
>>> client.stop(container['Id'])
>>> client.wait(container['Id'])
0
It is now time to add the new worker to the master configuration under workers.
The following example will add a Docker latent worker for docker running at the following address: tcp://localhost:2375. The worker name will be docker, its password password, and the base image name my_project_worker:
from buildbot.plugins import worker
c['workers'] = [
worker.DockerLatentWorker('docker', 'password',
docker_host='tcp://localhost:2375',
image='my_project_worker')
]
password
- (mandatory) The worker password part of the Latent Workers API. If the password is None, then it will be automatically generated from a random number and transmitted to the container via an environment variable.
In addition to the arguments available for any Latent Workers, DockerLatentWorker will accept the following extra ones:
docker_host
- (mandatory) This is the address the master will use to connect with a running Docker instance.
image
- This is the name of the image that will be started by the build master. It should start a worker. This option can be a renderable, like Interpolate, so that it generates from the build request properties.
command
- (optional) This will override the command setup during image creation.
volumes
- (optional) See Setting up Volumes
dockerfile
- (optional if image is given) This is the content of the Dockerfile that will be used to build the specified image if the image is not found by Docker. It should be a multiline string.
Note
In case both image and dockerfile are given, no attempt is made to compare the image with the content of the Dockerfile parameter if the image is found.
version
- (optional, defaults to the highest version known by docker-py) This indicates which API version must be used to communicate with Docker.
tls
- (optional) This allows using TLS when connecting to the Docker socket. This should be a docker.tls.TLSConfig object. See docker-py’s own documentation for more details on how to initialise this object.
followStartupLogs
- (optional, defaults to false) This transfers the docker container’s logs into the master logs during worker startup (before connection). This can be useful to debug worker startup, e.g. network issues.
masterFQDN
- (optional, defaults to socket.getfqdn()) Address of the master the worker should connect to. Use this if your master machine does not have a proper fqdn. This value is passed to the docker image via the BUILDMASTER environment variable.
hostconfig
- (optional) Extra host configuration parameters passed as a dictionary used to create HostConfig object. See docker-py’s HostConfig documentation for all the supported options.
autopull
- (optional, defaults to false) Automatically pulls image if requested image is not on docker host.
alwaysPull
- (optional, defaults to false) Always pulls image if autopull is set to true.
custom_context
- (optional) Boolean indicating that the user wants to use custom build arguments for the docker environment. Defaults to False.
encoding
- (optional) String indicating the compression format for the build context. Defaults to ‘gzip’, but ‘bzip’ can be used as well.
buildargs
- (optional if custom_context is True) Dictionary that passes build arguments to docker for building its environment, e.g. {'DISTRO': 'ubuntu', 'RELEASE': '11.11'}. Defaults to None.
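As an illustration of the dockerfile option, the following is a minimal sketch (the worker and image names are placeholders); if the named image is not found on the Docker host, it is built from the given Dockerfile content:
from buildbot.plugins import worker

# Build the worker image on demand from an inline Dockerfile, based on
# the official buildbot/buildbot-worker image mentioned above.
dockerfile = '''
FROM buildbot/buildbot-worker:master
# Install your project's build dependencies here ...
'''
c['workers'] = [
    worker.DockerLatentWorker('docker-worker', 'password',
                              docker_host='tcp://localhost:2375',
                              image='my_project_worker',
                              dockerfile=dockerfile)
]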
The volumes parameter allows sharing a directory between containers, or between a container and the host system.
Refer to the Docker documentation for more information about Volumes.
The format of that parameter is an array of strings.
Each string specifies a volume in the following format: volumename:bindname.
The volume name has to be appended with :ro if the volume should be mounted read-only.
Note
This is the same format as when specifying volumes on the command line for docker’s own -v
option.
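For example, a minimal sketch (the host paths are hypothetical) sharing a writable cache and a read-only source mirror with each worker container:
from buildbot.plugins import worker

# Mount a writable ccache directory and a read-only mirror into the
# worker containers, using the volumename:bindname[:ro] format above.
c['workers'] = [
    worker.DockerLatentWorker('docker-worker', 'password',
                              docker_host='tcp://localhost:2375',
                              image='my_project_worker',
                              volumes=['/var/cache/ccache:/worker/ccache',
                                       '/srv/mirror:/worker/mirror:ro'])
]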
Marathon¶
Marathon is a production-grade container orchestration platform for Mesosphere’s Data-center Operating System (DC/OS) and Apache Mesos.
Buildbot supports using Marathon to host your latent workers.
It requires either txrequests or treq to be installed to allow interaction with the http server.
See HTTPClientService for details.
class buildbot.worker.marathon.MarathonLatentWorker¶
class buildbot.plugins.worker.MarathonLatentWorker¶
The MarathonLatentWorker attempts to instantiate a fresh image for each build to assure consistency of the environment between builds.
Each image will be discarded once the worker finishes processing the build queue (i.e. becomes idle).
See build_wait_timeout to change this behavior.
In addition to the arguments available for any Latent Workers, MarathonLatentWorker will accept the following extra ones:
marathon_url
- (mandatory) This is the URL to the Marathon server. Its REST API will be used to start docker containers.
marathon_auth
- (optional) This is the optional ('userid', 'password') BasicAuth credential. If txrequests is installed, this can be a requests authentication plugin.
image
- (mandatory) This is the name of the image that will be started by the build master. It should start a worker. This option can be a renderable, like Interpolate, so that it generates from the build request properties. Images are pulled from the default docker registry. MarathonLatentWorker does not support starting a worker built from a Dockerfile.
masterFQDN
- (optional, defaults to socket.getfqdn()) Address of the master the worker should connect to. Use this if your master machine does not have a proper fqdn. This value is passed to the docker image via the BUILDMASTER environment variable. If the value contains a colon (:), then the BUILDMASTER and BUILDMASTER_PORT environment variables will be passed, following the scheme masterFQDN="$BUILDMASTER:$BUILDMASTER_PORT".
marathon_extra_config
- (optional, defaults to {}) Extra configuration to be passed to the Marathon API. This implementation will set up the minimal configuration to run a worker (docker image, BRIDGED network). It will leave the defaults for everything else, including memory size, volume mounting, etc. This configuration is voluntarily very raw so that it is easy to use new Marathon features. This dictionary will be merged into the Buildbot-generated config, and recursively override it. See the Marathon API documentation to learn what to include in this config.
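Putting it together, a minimal sketch (the URL is a placeholder, and the 'mem' override is just one illustration of marathon_extra_config):
from buildbot.plugins import worker

# Start workers through Marathon's REST API; marathon_extra_config is
# merged into the generated app definition, here raising the memory
# reserved for each worker container.
c['workers'] = [
    worker.MarathonLatentWorker('marathon-worker',
                                marathon_url='http://marathon.example.com:8080',
                                image='buildbot/buildbot-worker:master',
                                marathon_extra_config={'mem': 4096})
]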
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
Buildbot supports using Kubernetes to host your latent workers.
class buildbot.worker.kubernetes.KubeLatentWorker¶
class buildbot.plugins.worker.KubeLatentWorker¶
The KubeLatentWorker attempts to instantiate a fresh container for each build to assure consistency of the environment between builds.
Each container will be discarded once the worker finishes processing the build queue (i.e. becomes idle).
See build_wait_timeout to change this behavior.
In addition to the arguments available for any Latent Workers, KubeLatentWorker will accept the following extra ones:
image
- (optional, defaults to buildbot/buildbot-worker) Docker image. Defaults to the official buildbot image.
namespace
- (optional) This is the name of the namespace. Defaults to the current namespace.
kube_config
- (mandatory) This is the object specifying how to connect to the kubernetes cluster. This object must be an instance of the abstract class KubeConfigLoaderBase, which has 3 implementations: KubeHardcodedConfig, KubeCtlProxyConfigLoader, and KubeInClusterConfigLoader.
masterFQDN
- (optional, defaults to None) Address of the master the worker should connect to. Put the master service name here if you want to place a load-balancer between the workers and the masters. The default behaviour is to compute the IP address of the master. This option works out-of-the-box inside kubernetes, but does not leverage the load-balancing through services. You can pass any callable, such as KubeLatentWorker.get_fqdn, which will set masterFQDN=socket.getfqdn().
For more customization, you can subclass KubeLatentWorker and override the following methods.
All these methods can optionally return a Deferred.
All these methods take a props object, an IProperties instance that allows getting parameters from the build properties.
createEnvironment(self, props)¶
- This method computes the environment from your properties. Don’t forget to first call super().createEnvironment(props) to get the base properties necessary to connect to the master.
getBuildContainerResources(self, props)¶
- This method computes the pod resources part of the container spec (spec.containers[].resources). This is important to reserve some CPU and memory for your builds, and to trigger node auto-scaling if needed. You can also limit the CPU and memory for your container.
getServicesContainers(self, props)¶
- This method computes a list of container specs to put alongside the worker container. This is useful for starting services around your build pod, like a database container. All containers within the same pod share the same localhost interface, so you can access the other containers’ TCP ports very easily.
Kubernetes provides many options to connect to a cluster. It is made more complicated by the fact that some cloud providers use specific methods to connect to their managed kubernetes. Config loader objects can be shared between latent workers.
There are three options you may use to connect to your clusters.
When both the master and the workers run on the same Kubernetes cluster, you should use the KubeInClusterConfigLoader.
If not, but having a configured kubectl tool available to the build master is an option for you, you should use KubeCtlProxyConfigLoader.
If neither of these options is convenient, use KubeHardcodedConfig.
class buildbot.util.kubeclientservice.KubeCtlProxyConfigLoader¶
class buildbot.plugins.util.KubeCtlProxyConfigLoader¶
KubeCtlProxyConfigLoader¶
With KubeCtlProxyConfigLoader, buildbot will use kubectl proxy to get access to the cluster.
This delegates the authentication to the kubectl golang binary, and thus avoids implementing a Python version of every authentication scheme that kubernetes provides.
kubectl must be available in the PATH, and configured to be able to start pods.
While this method is very convenient and easy, it also opens unauthenticated http access to your cluster via localhost.
You must ensure that this is properly secured, and that your buildbot master machine is not on a shared multi-user server.
proxy_port
- (optional, defaults to 8001) HTTP port to use.
namespace
- (optional, defaults to "default") The default namespace to use if the latent worker does not provide one already.
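A minimal sketch of a worker using this loader (the worker and namespace names are placeholders):
from buildbot.plugins import util, worker

# Let a locally configured kubectl handle cluster authentication,
# placing worker pods in the 'buildbot' namespace.
c['workers'] = [
    worker.KubeLatentWorker('kube-worker',
                            kube_config=util.KubeCtlProxyConfigLoader(
                                namespace='buildbot'))
]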
class buildbot.util.kubeclientservice.KubeHardcodedConfig¶
class buildbot.plugins.util.KubeHardcodedConfig¶
KubeHardcodedConfig¶
With KubeHardcodedConfig, you just configure the necessary parameters to connect to the clusters.
master_url
- (mandatory) The http url of your kubernetes master. Only http and https protocols are supported.
headers
- (optional) Additional headers to be passed to the HTTP request
basicAuth
- (optional) Basic authorization info to connect to the cluster, as a {'user': 'username', 'password': 'psw'} dict. Unlike the headers argument, this argument supports secret providers, e.g.:
basicAuth={'user': 'username', 'password': Secret('k8spassword')}
bearerToken
- (optional) A bearer token to authenticate to the cluster, as a string. Unlike the headers argument, this argument supports secret providers, e.g.:
bearerToken=Secret('k8s-token')
When using the Google Kubernetes Engine (GKE), a bearer token for the default service account can be had with:
gcloud container clusters get-credentials --region [YOURREGION] YOURCLUSTER
kubectl describe sa
kubectl describe secret [SECRET_ID]
where SECRET_ID is displayed by the describe sa command. The default service account does not have rights on the cluster (to create/delete pods), which is required by Buildbot’s integration. You may give it this right by making it a cluster admin with:
kubectl create clusterrolebinding service-account-admin \
    --clusterrole=cluster-admin \
    --serviceaccount default:default
cert
- (optional) Client certificate and key to use to authenticate. This only works if txrequests is installed:
cert=('/path/to/certificate.crt', '/path/to/certificate.key')
verify
- (optional) Path to the server certificate used to authenticate the server:
verify='/path/to/kube_server_certificate.crt'
When using the Google Kubernetes Engine (GKE), this certificate is available from the admin console, on the Cluster page. Verify that it is valid (i.e. no copy/paste errors) with openssl verify PATH_TO_PEM.
namespace
- (optional, defaults to "default") The default namespace to use if the latent worker does not provide one already.
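Putting it together, a minimal sketch (the URL, secret name, and certificate path are placeholders):
from buildbot.plugins import util, worker

# Connect to a cluster with a hardcoded endpoint, authenticating with a
# bearer token kept in the secrets store and verifying the server cert.
config = util.KubeHardcodedConfig(
    master_url='https://k8s.example.com:6443',
    bearerToken=util.Secret('k8s-token'),
    verify='/path/to/kube_server_certificate.crt')
c['workers'] = [
    worker.KubeLatentWorker('kube-worker', kube_config=config)
]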
class buildbot.util.kubeclientservice.KubeInClusterConfigLoader¶
class buildbot.plugins.util.KubeInClusterConfigLoader¶
KubeInClusterConfigLoader¶
Use KubeInClusterConfigLoader if your Buildbot master is itself located within the kubernetes cluster.
In this case, you would associate a service account to the Buildbot master pod, and KubeInClusterConfigLoader will get the credentials from that.
This config loader takes no arguments.
Dangers with Latent Workers¶
Any latent worker that interacts with a for-fee service, such as the EC2LatentWorker, brings significant risks.
As already identified, the configuration will need access to account information that, if obtained by a criminal, can be used to charge services to your account.
Also, bugs in the Buildbot software may lead to unnecessary charges.
In particular, if the master neglects to shut down an instance for some reason, a virtual machine may be running unnecessarily, charging against your account.
Manual and/or automatic (e.g. Nagios with a plugin using a library like boto) double-checking may be appropriate.
A comparatively trivial note is that currently, if two instances try to attach to the same latent worker, it is likely that the system will become confused. This should not occur unless, for instance, you configure a normal worker to connect with the authentication of a latent worker. If this situation does occur, stop all attached instances and restart the master.
2.5.7. Builder Configuration¶
The builders configuration key is a list of objects giving configuration for the Builders.
For more information on the function of Builders in Buildbot, see the Concepts chapter.
The class definition for the builder configuration is in buildbot.config.
However, there is a much simpler way to use it, so in the configuration file, its use looks like:
from buildbot.plugins import util
c['builders'] = [
util.BuilderConfig(name='quick', workernames=['bot1', 'bot2'], factory=f_quick),
util.BuilderConfig(name='thorough', workername='bot1', factory=f_thorough),
]
BuilderConfig takes the following keyword arguments:
name
- This specifies the Builder’s name, which is used in status reports.
workername
workernames
- These arguments specify the worker or workers that will be used by this Builder. All worker names must appear in the workers configuration parameter. Each worker can accommodate multiple builders. The workernames parameter can be a list of names, while workername can specify only one worker.
factory
- This is a buildbot.process.factory.BuildFactory instance which controls how the build is performed by defining the steps in the build. Full details appear in their own section, Build Factories.
Other optional keys may be set on each BuilderConfig:
builddir
- Specifies the name of a subdirectory of the master’s basedir in which everything related to this builder will be stored. This holds build status information. If not set, this parameter defaults to the builder name, with some characters escaped. Each builder must have a unique build directory.
workerbuilddir
- Specifies the name of a subdirectory (under the worker’s configured base directory) in which everything related to this builder will be placed on the worker. This is where checkouts, compiles, and tests are run. If not set, defaults to builddir. If a worker is connected to multiple builders that share the same workerbuilddir, make sure the worker is set to run one build at a time, or ensure that it is fine to run multiple builds from the same directory simultaneously.
tags
- If provided, this is a list of strings that identifies tags for the builder. Status clients can limit themselves to a subset of the available tags. A common use for this is to add new builders to your setup (for a new module, or for a new worker) that do not work correctly yet and allow you to integrate them with the active builders. You can tag these new builders with a test tag, make your main status clients ignore them, and have only private status clients pick them up. As soon as they work, you can move them over to the active tag.
nextWorker
- If provided, this is a function that controls which worker will be assigned future jobs. The function is passed three arguments: the Builder object which is assigning a new job, a list of WorkerForBuilder objects, and the BuildRequest. The function should return one of the WorkerForBuilder objects, or None if none of the available workers should be used. As an example, for each worker in the list, worker.worker will be a Worker object, and worker.worker.workername is the worker’s name. The function can optionally return a Deferred, which should fire with the same results.
nextBuild
- If provided, this is a function that controls which build request will be handled next. The function is passed two arguments: the Builder object which is assigning a new job, and a list of BuildRequest objects of pending builds. The function should return one of the BuildRequest objects, or None if none of the pending builds should be started. This function can optionally return a Deferred which should fire with the same results.
canStartBuild
- If provided, this is a function that can veto whether a particular worker should be used for a given build request. The function is passed three arguments: the Builder, a Worker, and a BuildRequest. The function should return True if the combination is acceptable, or False otherwise. This function can optionally return a Deferred which should fire with the same results. See canStartBuild Functions for a concrete example; a minimal sketch also appears after this list.
locks
- A list of Locks (instances of buildbot.locks.WorkerLock or buildbot.locks.MasterLock) that should be acquired before starting a Build from this Builder. Alternatively, this could be a renderable that returns this list depending on properties related to the build that is about to be created. This lets you defer picking the locks to acquire until it is known which Worker a build would get assigned to. The properties available to the renderable include all properties that are set on the build before its first step, excluding the properties that come from the build itself and the builddir property that comes from the worker. The Locks will be released when the build is complete. Note that this is a list of actual Lock instances, not names. Also note that all Locks must have unique names. See Interlocks.
env
- A Builder may be given a dictionary of environment variables in this parameter. The variables are used in ShellCommand steps in builds created by this builder. The environment variables will override anything in the worker’s environment. Variables passed directly to a ShellCommand will override variables of the same name passed to the Builder. For example, if you have a pool of identical workers it is often easier to manage variables like PATH from Buildbot rather than manually editing them inside the workers’ environments:
f = factory.BuildFactory()
f.addStep(ShellCommand(
              command=['bash', './configure']))
f.addStep(Compile())

c['builders'] = [
    BuilderConfig(name='test', factory=f,
                  workernames=['worker1', 'worker2', 'worker3', 'worker4'],
                  env={'PATH': '/opt/local/bin:/opt/app/bin:/usr/local/bin:/usr/bin'}),
]
Unlike most builder configuration arguments, this argument can contain renderables.
collapseRequests
- Specifies how build requests for this builder should be collapsed. See Collapsing Build Requests, below.
properties
- A builder may be given a dictionary of Build Properties specific for this builder in this parameter. Those values can be used later on like other properties, e.g. with Interpolate.
defaultProperties
- Similar to the properties parameter, but defaultProperties will only be added to Build Properties if they are not already set by another source.
description
- A builder may be given an arbitrary description, which will show up in the web status on the builder’s page.
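As promised above, a minimal sketch of a canStartBuild function (the worker name and the 'os' property are reused from examples earlier in this chapter; the policy itself is hypothetical, and the sketch assumes the request exposes its build properties via request.properties):
from buildbot.plugins import util

def canStartBuild(builder, wfb, request):
    # Reserve 'bot-solaris' for requests whose 'os' property matches
    # its platform; any other worker may take any request.
    if wfb.worker.workername == 'bot-solaris':
        return request.properties.getProperty('os') == 'solaris'
    return True

c['builders'] = [
    util.BuilderConfig(name='thorough', workername='bot-solaris',
                       factory=f_thorough,
                       canStartBuild=canStartBuild),
]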
2.5.7.1. Collapsing Build Requests¶
When more than one build request is available for a builder, Buildbot can “collapse” the requests into a single build. This is desirable when build requests arrive more quickly than the available workers can satisfy them, but has the drawback that separate results for each build are not available.
Requests are only candidates for collapsing if both requests have exactly the same codebases.
This behavior can be controlled globally, using the collapseRequests parameter, and on a per-Builder basis, using the collapseRequests argument to the Builder configuration.
If collapseRequests is given, it completely overrides the global configuration.
Possible values for both collapseRequests configurations are:
True
- Requests will be collapsed if their sourcestamp are compatible (see below for definition of compatible).
False
- Requests will never be collapsed.
callable(builder, req1, req2)
- Requests will be collapsed if the callable returns true. See Collapse Request Functions for detailed example.
Sourcestamps are compatible if all of the below conditions are met:
- Their codebase, branch, project, and repository attributes match exactly
- Neither source stamp has a patch (e.g., from a try scheduler)
- Either both source stamps are associated with changes, or neither are associated with changes but they have matching revisions.
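A minimal sketch of such a callable (the no_collapse property is invented, and reading it from the requests' properties is an assumption; see Collapse Request Functions for the real interface):

def collapseRequests(builder, req1, req2):
    # never collapse requests that explicitly opt out
    if req1.properties.getProperty('no_collapse') or \
            req2.properties.getProperty('no_collapse'):
        return False
    return True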
2.5.7.2. Prioritizing Builds¶
The BuilderConfig parameter nextBuild can be used to prioritize build requests within a builder.
Note that this is orthogonal to Prioritizing Builders, which controls the order in which builders are called on to start their builds.
The details of writing such a function are in Build Priority Functions.
Such a function can be provided to the BuilderConfig as follows:
def pickNextBuild(builder, requests):
...
c['builders'] = [
BuilderConfig(name='test', factory=f,
nextBuild=pickNextBuild,
workernames=['worker1', 'worker2', 'worker3', 'worker4']),
]
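As a minimal sketch, such a function might start pending requests in random order rather than in submission order:

import random

def pickNextBuild(builder, requests):
    # 'requests' is a list of pending BuildRequest objects; return one
    # of them, or None to start nothing for now
    if requests:
        return random.choice(requests)
    return None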
2.5.7.3. Virtual Builders¶
Dynamic Trigger is a method which allows triggering the same builder with different parameters. This method is used by frameworks which store the build config alongside the source code, like Buildbot_travis. The drawback of this method is that it is difficult to extract statistics for similar builds. The standard dashboards do not work well, because all the builds are on the same builder.
In order to overcome those drawbacks, Buildbot has the concept of virtual builders.
If a build has the property virtual_builder_name, it will automatically attach to that builder instead of the original builder.
That created virtual builder is not attached to any master and is only used for better sorting in the UI and better statistics.
The original builder and worker configuration is still used for all other build behaviors.
The virtual builder metadata is configured with the following properties:
virtual_builder_name
: The name of the virtual builder.
virtual_builder_description
: The description of the virtual builder.
virtual_builder_tags
: The tags for the virtual builder.
You can also use virtual builders with SingleBranchScheduler.
For example, if you want to automatically build all branches in your project without having to manually create a new builder each time one is added:
c['schedulers'].append(schedulers.SingleBranchScheduler(
name='myproject-epics',
change_filter=util.ChangeFilter(branch_re='epics/.*'),
builderNames=['myproject-epics'],
properties={
'virtual_builder_name': util.Interpolate("myproject-%(ss::branch)s")
}
))
2.5.8. Build Factories¶
Each Builder is equipped with a build factory, which defines the steps used to perform that particular type of build.
This factory is created in the configuration file, and attached to a Builder through the factory element of its dictionary.
The steps used by these builds are defined in the next section, Build Steps.
Note
Build factories are used with builders, and are not added directly to the buildmaster configuration dictionary.
2.5.8.1. Defining a Build Factory¶
A BuildFactory
defines the steps that every build will follow.
Think of it as a glorified script.
For example, a build factory which consists of an SVN checkout followed by a make build
would be configured as follows:
from buildbot.plugins import util, steps
f = util.BuildFactory()
f.addStep(steps.SVN(repourl="http://..", mode="incremental"))
f.addStep(steps.Compile(command=["make", "build"]))
This factory would then be attached to one builder (or several, if desired):
c['builders'].append(
BuilderConfig(name='quick', workernames=['bot1', 'bot2'], factory=f))
It is also possible to pass a list of steps into the BuildFactory
when it is created.
Using addStep
is usually simpler, but there are cases where it is more convenient to create the list of steps ahead of time, perhaps using some Python tricks to generate the steps.
from buildbot.plugins import steps, util
all_steps = [
steps.CVS(cvsroot=CVSROOT, cvsmodule="project", mode="update"),
steps.Compile(command=["make", "build"]),
]
f = util.BuildFactory(all_steps)
Finally, you can also add a sequence of steps all at once:
f.addSteps(all_steps)
Attributes¶
The following attributes can be set on a build factory after it is created, e.g.,
f = util.BuildFactory()
f.useProgress = False
useProgress
- (defaults to True): if True, the buildmaster keeps track of how long each step takes, so it can provide estimates of how long future builds will take. If builds are not expected to take a consistent amount of time (such as incremental builds in which a random set of files are recompiled or tested each time), this should be set to False to inhibit progress-tracking.
workdir
(defaults to ‘build’): workdir given to every build step created by this factory as default. The workdir can be overridden in a build step definition.
If this attribute is set to a string, that string will be used for constructing the workdir (worker base + builder builddir + workdir). The attribute can also be a Python callable, for more complex cases, as described in Factory Workdir Functions.
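A minimal sketch of the callable form, assuming the callable receives the build's source stamp as described in Factory Workdir Functions:

f = util.BuildFactory()
# one workdir per branch; the 'build-' prefix is illustrative
f.workdir = lambda sourcestamp: 'build-' + (sourcestamp.branch or 'default')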
2.5.8.2. Dynamic Build Factories¶
In some cases you may not know what commands to run until after you checkout the source tree. For those cases you can dynamically add steps during a build from other steps.
The Build object provides two functions to do this:
addStepsAfterCurrentStep(self, step_factories)
- This adds the steps after the step that is currently executing.
addStepsAfterLastStep(self, step_factories)
- This adds the steps onto the end of the build.
Both functions only accept as an argument a list of steps to add to the build.
For example, let's say you have a script called build.sh checked into your source tree.
When this script is called with the argument --list-stages
it outputs a newline separated list of stage names.
This can be used to generate at runtime a step for each stage in the build.
Each stage is then run in this example using ./build.sh --run-stage <stage name>
.
from buildbot.plugins import util, steps
from buildbot.process import buildstep, logobserver
from twisted.internet import defer
class GenerateStagesCommand(buildstep.ShellMixin, steps.BuildStep):

    def __init__(self, **kwargs):
        kwargs = self.setupShellMixin(kwargs)
        super().__init__(**kwargs)
        self.observer = logobserver.BufferLogObserver()
        self.addLogObserver('stdio', self.observer)

    def extract_stages(self, stdout):
        stages = []
        for line in stdout.split('\n'):
            stage = str(line.strip())
            if stage:
                stages.append(stage)
        return stages

    @defer.inlineCallbacks
    def run(self):
        # run './build.sh --list-stages' to generate the list of stages
        cmd = yield self.makeRemoteShellCommand()
        yield self.runCommand(cmd)

        # if the command passes, extract the list of stages
        result = cmd.results()
        if result == util.SUCCESS:
            # create a ShellCommand for each stage and add them to the build
            self.build.addStepsAfterCurrentStep([
                steps.ShellCommand(name=stage,
                                   command=["./build.sh", "--run-stage", stage])
                for stage in self.extract_stages(self.observer.getStdout())
            ])

        return result

f = util.BuildFactory()
f.addStep(steps.Git(repourl=repourl))
f.addStep(GenerateStagesCommand(
    name="Generate build stages",
    command=["./build.sh", "--list-stages"],
    haltOnFailure=True))
2.5.8.3. Predefined Build Factories¶
Buildbot includes a few predefined build factories that perform common build sequences. In practice, these are rarely used, as every site has slightly different requirements, but the source for these factories may provide examples for implementation of those requirements.
GNUAutoconf¶
class buildbot.process.factory.GNUAutoconf¶
GNU Autoconf is a software portability tool, intended to make it possible to write programs in C (and other languages) which will run on a variety of UNIX-like systems. Most GNU software is built using autoconf. It is frequently used in combination with GNU automake. These tools both encourage a build process which usually looks like this:
% CONFIG_ENV=foo ./configure --with-flags
% make all
% make check
# make install
(except of course the Buildbot always skips the make install part).
The Buildbot's buildbot.process.factory.GNUAutoconf factory is designed to build projects which use GNU autoconf and/or automake.
The configuration environment variables, the configure flags, and the command lines used for the compile and test are all configurable; in general the default values will be suitable.
Example:
f = util.GNUAutoconf(source=source.SVN(repourl=URL, mode="copy"),
flags=["--disable-nls"])
Required Arguments:
source
- This argument must be a step specification tuple that provides a BuildStep to generate the source tree.
Optional Arguments:
configure
- The command used to configure the tree. Defaults to ./configure. Accepts either a string or a list of shell argv elements.
configureEnv
- The environment used for the initial configuration step. This accepts a dictionary which will be merged into the worker's normal environment. This is commonly used to provide things like CFLAGS="-O2 -g" (to set compiler flags such as optimization and debug symbols for the compile). Defaults to an empty dictionary.
configureFlags
- A list of flags to be appended to the argument list of the configure command. This is commonly used to enable or disable specific features of the autoconf-controlled package, like ["--without-x"] to disable windowing support. Defaults to an empty list.
reconf
- Use autoreconf to generate the ./configure file. Set to True to use a buildbot-default autoreconf command, or define the command for the ShellCommand.
compile
- This is a shell command or list of argv values which is used to actually compile the tree. It defaults to make all. If set to None, the compile step is skipped.
test
- This is a shell command or list of argv values which is used to run the tree's self-tests. It defaults to make check. If set to None, the test step is skipped.
distcheck
- This is a shell command or list of argv values which is used to run the packaging test. It defaults to make distcheck. If set to None, the packaging test step is skipped.
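For instance, a sketch with illustrative values (the repository URL is invented):

from buildbot.plugins import steps, util

f = util.GNUAutoconf(
    source=steps.SVN(repourl="svn://svn.example.org/myproj/trunk", mode="full"),
    configureEnv={'CFLAGS': '-O2'},   # merged into the worker's environment
    configureFlags=['--without-x'],   # appended to ./configure
    test=None)                        # skip the self-test step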
BasicBuildFactory¶
class buildbot.process.factory.BasicBuildFactory¶
This is a subclass of GNUAutoconf which assumes the source is in CVS, and uses mode='full' and method='clobber' to always build from a clean working copy.
BasicSVN¶
class buildbot.process.factory.BasicSVN¶
This class is similar to QuickBuildFactory, but uses SVN instead of CVS.
QuickBuildFactory¶
class buildbot.process.factory.QuickBuildFactory¶
The QuickBuildFactory class is a subclass of GNUAutoconf which assumes the source is in CVS, and uses mode='incremental' to get incremental updates.
The difference between a full build and a quick build is that quick builds are generally done incrementally, starting with the tree where the previous build was performed.
That simply means that the source-checkout step should be given a mode='incremental' flag, to do the source update in-place.
In addition to that, this class sets the useProgress flag to False.
Incremental builds will (or at least they ought to) compile as few files as necessary, so they will take an unpredictable amount of time to run.
Therefore it would be misleading to claim to predict how long the build will take.
This class is probably not of use to new projects.
CPAN¶
class buildbot.process.factory.CPAN¶
Most Perl modules available from the CPAN archive use the MakeMaker
module to provide configuration, build, and test services.
The standard build routine for these modules looks like:
% perl Makefile.PL
% make
% make test
# make install
(except again Buildbot skips the install step)
Buildbot provides a CPAN
factory to compile and test these projects.
Arguments:
source
- (required): A step specification tuple, like that used by GNUAutoconf.
perl
- A string which specifies the perl executable to use. Defaults to just perl.
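A sketch of typical usage, assuming CPAN is reachable through util like GNUAutoconf above (the repository and perl path are invented):

from buildbot.plugins import steps, util

f = util.CPAN(source=steps.Git(repourl='https://example.org/My-Module.git'),
              perl='/usr/local/bin/perl')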
Distutils¶
class buildbot.process.factory.Distutils¶
Most Python modules use the distutils
package to provide configuration and build services.
The standard build process looks like:
% python ./setup.py build
% python ./setup.py install
Unfortunately, although Python provides a standard unit-test framework named unittest, to the best of my knowledge distutils does not provide a standardized target to run such unit tests.
(Please let me know if I'm wrong, and I will update this factory.)
The Distutils
factory provides support for running the build part of this process.
It accepts the same source=
parameter as the other build factories.
Arguments:
source
- (required): A step specification tuple, like that used by GNUAutoconf.
python
- A string which specifies the python executable to use. Defaults to just python.
test
- Provides a shell command which runs unit tests. This accepts either a string or a list. The default value is None, which disables the test step (since there is no common default command to run unit tests in distutils modules).
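A sketch, assuming Distutils is reachable through util like the factories above (the test command is invented, since there is no default):

from buildbot.plugins import steps, util

f = util.Distutils(
    source=steps.Git(repourl='https://example.org/project.git'),
    python='python3',
    test=['python3', '-m', 'unittest', 'discover'])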
Trial¶
class buildbot.process.factory.Trial¶
Twisted provides a unit test tool named trial which provides a few improvements over Python's built-in unittest module.
Many Python projects which use Twisted for their networking or application services also use trial for their unit tests.
These modules are usually built and tested with something like the following:
% python ./setup.py build
% PYTHONPATH=build/lib.linux-i686-2.3 trial -v PROJECTNAME.test
% python ./setup.py install
Unfortunately, the build/lib directory into which the built/copied .py files are placed is actually architecture-dependent, and I do not yet know of a simple way to calculate its value.
For many projects it is sufficient to import their libraries in place from the tree's base directory (PYTHONPATH=.).
In addition, the PROJECTNAME value where the test files are located is project-dependent: it is usually just the project's top-level library directory, as common practice suggests the unit test files are put in the test sub-module.
This value cannot be guessed; the Trial class must be told where to find the test files.
The Trial class provides support for building and testing projects which use distutils and trial.
If the test module name is specified, trial will be invoked.
The library path used for testing can also be set.
One advantage of trial is that the Buildbot happens to know how to parse trial output, letting it identify which tests passed and which ones failed. The Buildbot can then provide fine-grained reports about how many tests have failed, when individual tests fail when they had been passing previously, etc.
Another feature of trial is that you can give it a series of source .py
files, and it will search them for special test-case-name
tags that indicate which test cases provide coverage for that file.
Trial can then run just the appropriate tests.
This is useful for quick builds, where you want to only run the test cases that cover the changed functionality.
Arguments:
testpath
- Provides a directory to add to PYTHONPATH when running the unit tests, if tests are being run. Defaults to . to include the project files in-place. The generated build library is frequently architecture-dependent, but may simply be build/lib for pure-Python modules.
python
- Which Python executable to use. This list will form the start of the argv array that will launch trial. If you use this, you should set trial to an explicit path (like /usr/bin/trial or ./bin/trial). The parameter defaults to None, which leaves it out entirely (running trial args instead of python ./bin/trial args). Likely values are ['python'], ['python2.2'], or ['python', '-Wall'].
trial
- Provides the name of the trial command. It is occasionally useful to use an alternate executable, such as trial2.2 which might run the tests under an older version of Python. Defaults to trial.
trialMode
- A list of arguments to pass to trial, specifically to set the reporting mode. This defaults to ['--reporter=bwverbose'], which only works for Twisted-2.1.0 and later.
trialArgs
- A list of arguments to pass to trial, available to turn on any extra flags you like. Defaults to [].
tests
- Provides a module name or names which contain the unit tests for this project. Accepts a string, typically PROJECTNAME.test, or a list of strings. Defaults to None, indicating that no tests should be run. You must either set this or testChanges.
testChanges
- If True, ignore the tests parameter and instead ask the Build for all the files that make up the Changes going into this build. Pass these filenames to trial and ask it to look for test-case-name tags, running just the tests necessary to cover the changes.
recurse
- If True, tells Trial (with the --recurse argument) to look in all subdirectories for additional test cases.
reactor
- Which reactor to use, like 'gtk' or 'java'. If not provided, Twisted's usual platform-dependent default is used.
randomly
- If True, tells Trial (with the --random=0 argument) to run the test cases in random order, which sometimes catches subtle inter-test dependency bugs. Defaults to False.
The step can also take any of the ShellCommand arguments, e.g., haltOnFailure.
Unless one of tests or testChanges is set, the step will generate an exception.
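A sketch of typical usage, assuming Trial is reachable through util like the factories above (the repository and module names are invented):

from buildbot.plugins import steps, util

f = util.Trial(
    source=steps.Git(repourl='https://example.org/project.git'),
    testpath='.',
    tests='projectname.test')   # either tests or testChanges must be set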
2.5.9. Properties¶
Build properties are a generalized way to provide configuration information to build steps; see Build Properties for the conceptual overview of properties.
Some build properties come from external sources and are set before the build begins; others are set during the build, and available for later steps. The sources for properties are:
global configuration
- These properties apply to all builds.
schedulers
- A scheduler can specify properties that become available to all builds it starts.
changes
- A change can have properties attached to it, supplying extra information gathered by the change source. This is most commonly used with the sendchange command.
forced builds
- The "Force Build" form allows users to specify properties.
workers
- A worker can pass properties on to the builds it performs.
builds
- A build automatically sets a number of properties on itself.
builders
- A builder can set properties on all the builds it runs.
steps
- The steps of a build can set properties that are available to subsequent steps. In particular, source steps set the got_revision property.
If the same property is supplied in multiple places, the final appearance takes precedence. For example, a property set in a builder configuration will override one supplied by a scheduler.
Properties are stored internally in JSON format, so they are limited to basic types of data: numbers, strings, lists, and dictionaries.
2.5.9.1. Common Build Properties¶
The following build properties are set when the build is started, and are available to all steps.
got_revision
- This property is set when a Source step checks out the source tree, and provides the revision that was actually obtained from the VC system. In general this should be the same as revision, except for non-absolute sourcestamps, where got_revision indicates what revision was current when the checkout was performed. This can be used to rebuild the same source code later.

Note
For some VC systems (Darcs in particular), the revision is a large string containing newlines, and is not suitable for interpolation into a filename.

For multi-codebase builds (where the codebase is not the default ''), this property is a dictionary, keyed by codebase.
buildername
- This is a string that indicates which Builder the build was a part of. The combination of buildername and buildnumber uniquely identifies a build.
buildnumber
- Each build gets a number, scoped to the Builder (so the first build performed on any given Builder will have a build number of 0). This integer property contains the build's number.
workername
- This is a string which identifies which worker the build is running on.
scheduler
- If the build was started from a scheduler, then this property will contain the name of that scheduler.
builddir
- The absolute path of the base working directory on the worker, of the current builder.
For single codebase builds, where the codebase is '', the following Source Stamp Attributes are also available as properties: branch, revision, repository, and project.
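For instance, two of these properties can be combined with Interpolate (described below) to name a build artifact (the command itself is illustrative):

from buildbot.plugins import steps, util

f.addStep(steps.ShellCommand(command=[
    'tar', 'czf',
    util.Interpolate('%(prop:buildername)s-%(prop:buildnumber)s.tar.gz'),
    'build']))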
2.5.9.2. Source Stamp Attributes¶
branch
revision
repository
project
codebase
For details of these attributes see Concepts.
changes
This attribute is a list of dictionaries representing the changes that make up this sourcestamp.
2.5.9.3. Using Properties in Steps¶
For the most part, properties are used to alter the behavior of build steps during a build.
This is done by using renderables (objects implementing the IRenderable
interface) as step parameters.
When the step is started, each such object is rendered using the current values of the build properties, and the resultant rendering is substituted as the actual value of the step parameter.
Buildbot offers several renderable object types covering common cases. It’s also possible to create custom renderables.
Note
Properties are defined while a build is in progress; their values are not available when the configuration file is parsed. This can sometimes confuse newcomers to Buildbot! In particular, the following is a common error:
if Property('release_train') == 'alpha':
f.addStep(...)
This does not work because the value of the property is not available when the if
statement is executed.
However, Python will not detect this as an error - you will just never see the step added to the factory.
You can use renderables in most step parameters. Please file bugs for any parameters which do not accept renderables.
Property¶
The simplest renderable is Property, which renders to the value of the property named by its argument:
from buildbot.plugins import steps, util
f.addStep(steps.ShellCommand(command=['echo', 'buildername:',
util.Property('buildername')]))
You can specify a default value by passing a default
keyword argument:
f.addStep(steps.ShellCommand(command=['echo', 'warnings:',
util.Property('warnings', default='none')]))
The default value is used when the property doesn’t exist, or when the value is something Python regards as False
.
The defaultWhenFalse
argument can be set to False
to force buildbot to use the default argument only if the parameter is not set:
f.addStep(steps.ShellCommand(command=['echo', 'warnings:',
util.Property('warnings', default='none',
defaultWhenFalse=False)]))
The default value can be a renderable itself, e.g.,
command=util.Property('command', default=util.Property('default-command'))
Interpolate¶
Property can only be used to replace an entire argument: in the example above, it replaces an argument to echo.
Often, properties need to be interpolated into strings instead.
The tool for that job is Interpolate.
The more common pattern is to use Python dictionary-style string interpolation by using the %(prop:<propname>)s
syntax.
In this form, the property name goes in the parentheses, as above.
A common mistake is to omit the trailing “s”, leading to a rather obscure error from Python (“ValueError: unsupported format character”).
from buildbot.plugins import steps, util
f.addStep(steps.ShellCommand(
command=['make',
util.Interpolate('REVISION=%(prop:got_revision)s'),
'dist']))
This example will result in a make
command with an argument like REVISION=12098
.
The syntax of dictionary-style interpolation is a selector, followed by a colon, followed by a selector-specific key, optionally followed by a colon and a string indicating how to interpret the value produced by the key.
The following selectors are supported.
prop
- The key is the name of a property.
src
- The key is a codebase and source stamp attribute, separated by a colon. Note, it is %(src:<codebase>:<ssattr>)s syntax, which differs from other selectors.
kw
- The key refers to a keyword argument passed to Interpolate. Those keyword arguments may be ordinary values or renderables.
secrets
- The key refers to a secret provided by a provider declared in secretsProviders.
The following ways of interpreting the value are available.
-replacement
- If the key exists, substitute its value; otherwise, substitute replacement. replacement may be empty (%(prop:propname:-)s). This is the default.
~replacement
- Like -replacement, but only substitutes the value of the key if it is something Python regards as True. Python considers None, 0, empty lists, and the empty string to be false, so such values will be replaced by replacement.
+replacement
- If the key exists, substitute replacement; otherwise, substitute an empty string.
?|sub_if_exists|sub_if_missing
#?|sub_if_true|sub_if_false
- Ternary substitution, depending on either the key being present (with ?, similar to +) or being True (with #?, like ~). Notice that there is a pipe immediately following the question mark and between the two substitution alternatives. The character that follows the question mark is used as the delimiter between the two alternatives. In the above examples, it is a pipe, but any character other than ( can be used.
Note
Although these are similar to shell substitutions, no other substitutions are currently supported.
Example:
from buildbot.plugins import steps, util
f.addStep(steps.ShellCommand(
command=[
'save-build-artifacts-script.sh',
util.Interpolate('-r %(prop:repository)s'),
util.Interpolate('-b %(src::branch)s'),
util.Interpolate('-d %(kw:data)s', data="some extra needed data")
]))
Note
We use %(src::branch)s in most of the examples, because codebase is empty by default.
Example:
from buildbot.plugins import steps, util
f.addStep(steps.ShellCommand(
command=[
'make',
util.Interpolate('REVISION=%(prop:got_revision:-%(src::revision:-unknown)s)s'),
'dist'
]))
In addition, Interpolate supports using positional string interpolation.
Here, %s is used as a placeholder, and the substitutions (which may be renderables) are given as subsequent arguments:
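For example, a minimal sketch using the got_revision property (the surrounding make command is illustrative):

from buildbot.plugins import steps, util

f.addStep(steps.ShellCommand(
    command=['make',
             util.Interpolate('REVISION=%s', util.Property('got_revision')),
             'dist']))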
Note
Like Python, you can use either positional interpolation or dictionary-style interpolation, not both.
Thus you cannot use a string like Interpolate("foo-%(src::revision)s-%s", "branch")
.
Renderer¶
While Interpolate can handle many simple cases, and even some common conditionals, more complex cases are best handled with Python code.
The renderer decorator creates a renderable object whose rendering is obtained by calling the decorated function when the step it's passed to begins.
The function receives an IProperties object, which it can use to examine the values of any and all properties.
For example:
from buildbot.plugins import steps, util
@util.renderer
def makeCommand(props):
    command = ['make']
    cpus = props.getProperty('CPUs')
    if cpus:
        command.extend(['-j', str(cpus + 1)])
    else:
        command.extend(['-j', '2'])
    command.extend([util.Interpolate('%(prop:MAKETARGET)s')])
    return command

f.addStep(steps.ShellCommand(command=makeCommand))
You can think of renderer
as saying “call this function when the step starts”.
Note
Since 0.9.3, renderer can itself return IRenderable
objects or containers containing IRenderable
.
Optionally, extra arguments may be passed to the rendered function at any time by calling withArgs
on the renderable object.
The withArgs
method accepts *args
and **kwargs
arguments which are stored in a new renderable object which is returned.
The original renderable object is not modified.
Multiple withArgs
calls may be chained.
The passed *args
and **kwargs
parameters are rendered and the results are passed to the rendered function at the time it is itself rendered.
For example:
from buildbot.plugins import steps, util
@util.renderer
def makeCommand(props, target):
    command = ['make']
    cpus = props.getProperty('CPUs')
    if cpus:
        command.extend(['-j', str(cpus + 1)])
    else:
        command.extend(['-j', '2'])
    command.extend([target])
    return command

f.addStep(steps.ShellCommand(command=makeCommand.withArgs('mytarget')))
Note
The rendering of the renderable object may happen at unexpected times, so it is best to ensure that the passed extra arguments are not changed.
Note
Config errors with Renderables may not always be caught via checkconfig.
Transform¶
Transform
is an alternative to renderer
.
While renderer
is useful for creating new renderables, Transform
is easier to use when you want to transform or combine the renderings of preexisting ones.
Transform
takes a function and any number of positional and keyword arguments.
The function must either be a callable object or a renderable producing one.
When rendered, a Transform
first replaces all of its arguments that are renderables with their renderings, then calls the function, passing it the positional and keyword arguments, and returns the result as its own rendering.
For example, suppose my_path
is a path on the worker, and you want to get it relative to the build directory.
You can do it like this:
import os.path
from buildbot.plugins import util
my_path_rel = util.Transform(os.path.relpath, my_path, start=util.Property('builddir'))
This works whether my_path
is an ordinary string or a renderable.
my_path_rel
will be a renderable in either case, however.
FlattenList¶
If a nested list should be flattened for some renderable, FlattenList can be used. For example:
from buildbot.plugins import steps, util
f.addStep(steps.ShellCommand(
command=[ 'make' ],
descriptionDone=util.FlattenList([ 'make ', [ 'done' ]])
))
descriptionDone would be set to [ 'make', 'done' ] when the ShellCommand executes.
This is useful when a list-returning property is used in renderables.
Note
ShellCommand automatically flattens nested lists in its command
argument, so there is no need to use FlattenList
for it.
WithProperties¶
Warning
This class is deprecated. It is an older version of Interpolate. It exists for compatibility with older configs.
The simplest use of this class is with positional string interpolation.
Here, %s
is used as a placeholder, and property names are given as subsequent arguments:
from buildbot.plugins import steps, util
f.addStep(steps.ShellCommand(
command=["tar", "czf",
util.WithProperties("build-%s-%s.tar.gz", "branch", "revision"),
"source"]))
If this BuildStep
were used in a tree obtained from Git, it would create a tarball with a name like build-master-a7d3a333db708e786edb34b6af646edd8d4d3ad9.tar.gz
.
The more common pattern is to use Python dictionary-style string interpolation by using the %(propname)s
syntax.
In this form, the property name goes in the parentheses, as above.
A common mistake is to omit the trailing “s”, leading to a rather obscure error from Python (“ValueError: unsupported format character”).
from buildbot.plugins import steps, util
f.addStep(steps.ShellCommand(
command=['make',
util.WithProperties('REVISION=%(got_revision)s'),
'dist']))
This example will result in a make
command with an argument like REVISION=12098
.
The dictionary-style interpolation supports a number of more advanced syntaxes in the parentheses.
propname:-replacement
- If propname exists, substitute its value; otherwise, substitute replacement. replacement may be empty (%(propname:-)s).
propname:~replacement
- Like propname:-replacement, but only substitutes the value of property propname if it is something Python regards as True. Python considers None, 0, empty lists, and the empty string to be false, so such values will be replaced by replacement.
propname:+replacement
- If propname exists, substitute replacement; otherwise, substitute an empty string.
Although these are similar to shell substitutions, no other substitutions are currently supported, and replacement in the above cannot contain more substitutions.
Note: like Python, you can use either positional interpolation or dictionary-style interpolation, not both.
Thus you cannot use a string like WithProperties("foo-%(revision)s-%s", "branch").
Custom Renderables¶
If the options described above are not sufficient, more complex substitutions can be achieved by writing custom renderables.
The IRenderable interface is simple - objects must provide a getRenderingFor method.
The method should take one argument - an IProperties provider - and should return the rendered value or a deferred firing with one.
Pass instances of the class anywhere other renderables are accepted.
For example:
from buildbot.interfaces import IRenderable
from zope.interface import implementer

@implementer(IRenderable)
class DetermineFoo(object):
    def getRenderingFor(self, props):
        if props.hasProperty('bar'):
            return props['bar']
        elif props.hasProperty('baz'):
            return props['baz']
        return 'qux'

ShellCommand(command=['echo', DetermineFoo()])
or, more practically,
import time

from buildbot.interfaces import IRenderable
from zope.interface import implementer
from buildbot.plugins import util

@implementer(IRenderable)
class Now(object):
    def getRenderingFor(self, props):
        return time.clock()

ShellCommand(command=['make', util.Interpolate('TIME=%(kw:now)s', now=Now())])
This is equivalent to:
import time

from buildbot.plugins import util

@util.renderer
def now(props):
    return time.clock()

ShellCommand(command=['make', util.Interpolate('TIME=%(kw:now)s', now=now)])
Note that a custom renderable must be instantiated (and its constructor can take whatever arguments you’d like), whereas a function decorated with renderer
can be used directly.
URL for build¶
It's common to need the URL for the build in a step. For this you can use a special custom renderer as follows:
from buildbot.plugins import *
ShellCommand(command=['make', util.Interpolate('BUILDURL=%(kw:url)s', url=util.URLForBuild)])
2.5.10. Build Steps¶
BuildSteps are usually specified in the buildmaster's configuration file, in a list that goes into the BuildFactory.
The BuildStep instances in this list are used as templates to construct new independent copies for each build (so that state can be kept on the BuildStep in one build without affecting a later build).
Each BuildFactory can be created with a list of steps, or the factory can be created empty and then steps added to it using the addStep method:
from buildbot.plugins import util, steps
f = util.BuildFactory()
f.addSteps([
steps.SVN(repourl="http://svn.example.org/Trunk/"),
steps.ShellCommand(command=["make", "all"]),
steps.ShellCommand(command=["make", "test"])
])
The basic behavior for a BuildStep is to:
- run for a while, then stop
- possibly invoke some RemoteCommands on the attached worker
- possibly produce a set of log files
- finish with a status described by one of four values defined in buildbot.status.builder: SUCCESS, WARNINGS, FAILURE, SKIPPED
- provide a list of short strings to describe the step
The rest of this section describes all the standard BuildStep objects available for use in a Build, and the parameters which can be used to control each.
A full list of build steps is available in the Build Step Index.
2.5.10.1. Common Parameters¶
All BuildSteps accept some common parameters.
Some of these control how their individual status affects the overall build.
Others are used to specify which Locks (see Interlocks) should be acquired before allowing the step to run.
Arguments common to all BuildStep subclasses:
name
- the name used to describe the step on the status display. Since 0.9.8, this argument might be renderable.
haltOnFailure
- If True, a FAILURE of this build step will cause the build to halt immediately. Steps with alwaysRun=True are still run. Generally speaking, haltOnFailure implies flunkOnFailure (the default for most BuildSteps). In some cases, particularly series of tests, it makes sense to haltOnFailure if something fails early on but not flunkOnFailure. This can be achieved with haltOnFailure=True, flunkOnFailure=False; a sketch follows.
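For example (the commands are illustrative):

from buildbot.plugins import steps

# stop the build if bootstrapping fails, without marking the whole
# build as failed on account of this step alone
f.addStep(steps.ShellCommand(command=['make', 'bootstrap'],
                             haltOnFailure=True,
                             flunkOnFailure=False))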
flunkOnWarnings
- When True, a WARNINGS or FAILURE of this build step will mark the overall build as FAILURE. The remaining steps will still be executed.
flunkOnFailure
- When True, a FAILURE of this build step will mark the overall build as a FAILURE. The remaining steps will still be executed.
warnOnWarnings
- When True, a WARNINGS or FAILURE of this build step will mark the overall build as having WARNINGS. The remaining steps will still be executed.
warnOnFailure
- When True, a FAILURE of this build step will mark the overall build as having WARNINGS. The remaining steps will still be executed.
alwaysRun
- If True, this build step will always be run, even if a previous buildstep with haltOnFailure=True has failed.
description
- This will be used to describe the command (on the Waterfall display) while the command is still running. It should be a single imperfect-tense verb, like compiling or testing. The preferred form is a single, short string, but for historical reasons a list of strings is also acceptable.
descriptionDone
- This will be used to describe the command once it has finished. A simple noun like compile or tests should be used. Like description, this may either be a string or a list of short strings.

If neither description nor descriptionDone are set, the actual command arguments will be used to construct the description. This may be a bit too wide to fit comfortably on the Waterfall display.

All subclasses of BuildStep will contain the description attributes. Consequently, you could add a ShellCommand step like so:

from buildbot.plugins import steps

f.addStep(steps.ShellCommand(command=["make", "test"],
                             description="testing",
                             descriptionDone="tests"))

descriptionSuffix
- This is an optional suffix appended to the end of the description (i.e., after description and descriptionDone). This can be used to distinguish between build steps that would display the same descriptions in the waterfall. This parameter may be a string, a list of short strings, or None.

For example, a builder might use the Compile step to build two different codebases. The descriptionSuffix could be set to projectFoo and projectBar, respectively for each step, which will result in the full descriptions compiling projectFoo and compiling projectBar to be shown in the waterfall.
doStepIf
- A step can be configured to only run under certain conditions. To do this, set the step's doStepIf to a boolean value, or to a function that returns a boolean value or Deferred. If the value or function result is false, then the step will return SKIPPED without doing anything. Otherwise, the step will be executed normally. If you set doStepIf to a function, that function should accept one parameter, which will be the BuildStep object itself. A sketch follows.
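For example, a sketch that skips a step on all branches but one (the command and branch name are illustrative):

from buildbot.plugins import steps

f.addStep(steps.ShellCommand(
    command=['make', 'docs'],
    # the callable receives the BuildStep itself
    doStepIf=lambda step: step.getProperty('branch') == 'release'))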
hideStepIf
- A step can be optionally hidden from the waterfall and build details web pages. To do this, set the step's hideStepIf to a boolean value, or to a function that takes two parameters (the results and the BuildStep) and returns a boolean value. Steps are always shown while they execute; however, after the step has finished, this parameter is evaluated (if a function), and if the value is True, the step is hidden. For example, in order to hide the step if the step has been skipped:

factory.addStep(Foo(..., hideStepIf=lambda results, s: results == SKIPPED))
locks
- A list of Locks (instances of buildbot.locks.WorkerLock or buildbot.locks.MasterLock) that should be acquired before starting this BuildStep. Alternatively, this could be a renderable that returns this list during build execution. This lets you defer picking the locks to acquire until the build step is about to start running. The Locks will be released when the step is complete. Note that this is a list of actual Lock instances, not names. Also note that all Locks must have unique names. See Interlocks.
logEncoding
- The character encoding to use to decode logs produced during the execution of this step. This overrides the default logEncoding; see Log Handling.
updateBuildSummaryPolicy
- The policy to use to propagate the step summary to the build summary. If False, the build summary will never include the step summary. If True, the build summary will always include the step summary. If set to a list (e.g. [FAILURE, EXCEPTION]), the step summary will be propagated if the step's results id is present in that list. If not set or None, the default is computed according to other BuildStep parameters using the following algorithm:

self.updateBuildSummaryPolicy = [EXCEPTION, RETRY, CANCELLED]
if self.flunkOnFailure or self.haltOnFailure or self.warnOnFailure:
    self.updateBuildSummaryPolicy.append(FAILURE)
if self.warnOnWarnings or self.flunkOnWarnings:
    self.updateBuildSummaryPolicy.append(WARNINGS)

Note that in a custom step, if BuildStep.getResultSummary is overridden and sets the build summary, updateBuildSummaryPolicy is ignored and the build summary will be used regardless.
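For example, a sketch that propagates a lint step's summary only on failures or warnings (assuming the result constants are importable from buildbot.process.results; the command is illustrative):

from buildbot.plugins import steps
from buildbot.process.results import FAILURE, WARNINGS

f.addStep(steps.ShellCommand(
    command=['make', 'lint'],
    updateBuildSummaryPolicy=[FAILURE, WARNINGS]))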
2.5.10.2. Source Checkout¶
Caution
Support for the old worker-side source checkout steps was removed in Buildbot-0.9.0.
The old source steps used to be imported like this:
from buildbot.steps.source.oldsource import Git
... Git ...
or:
from buildbot.steps.source import Git
while new source steps are in separate Python modules for each version-control system and, using the plugin infrastructure, are available as:
from buildbot.plugins import steps
... steps.Git ...
Common Parameters of source checkout operations¶
All source checkout steps accept some common parameters to control how they get the sources and where they should be placed. The remaining per-VC-system parameters are mostly to specify where exactly the sources are coming from.
mode
method
- These two parameters specify the means by which the source is checked out. mode specifies the type of checkout and method tells about the way to implement it.

from buildbot.plugins import steps

factory = BuildFactory()
factory.addStep(steps.Mercurial(repourl='path/to/repo',
                                mode='full', method='fresh'))

The mode parameter is a string describing the kind of VC operation that is desired, defaulting to incremental. The options are:

incremental
- Update the source to the desired revision, but do not remove any other files generated by previous builds. This allows compilers to take advantage of object files from previous builds. This mode is exactly the same as the old update mode.
full
- Update the source, but delete remnants of previous builds. Build steps that follow will need to regenerate all object files.
Methods are specific to the version-control system in question, as they may take advantage of special behaviors in that version-control system that can make checkouts more efficient or reliable.
workdir
- Like all Steps, this indicates the directory where the build will take place. Source Steps are special in that they perform some operations outside of the workdir (like creating the workdir itself).
alwaysUseLatest
- If True, bypass the usual behavior of checking out the revision in the source stamp, and always update to the latest revision in the repository instead. If the specific VC system supports branches and a specific branch is specified in the step parameters via the branch or defaultBranch parameters, then the latest revision on that branch is checked out.
retry
- If set, this specifies a tuple of (delay, repeats), which means that when a full VC checkout fails, it should be retried up to repeats times, waiting delay seconds between attempts. If you don't provide this, it defaults to None, which means VC operations should not be retried. This is provided to make life easier for workers which are stuck behind poor network connections.
repository
- The name of this parameter might vary depending on the Source step you are running. The concept explained here is common to all steps and applies to repourl as well as to baseURL (when applicable).

A common idiom is to pass Property('repository', 'url://default/repo/path') as repository. This grabs the repository from the source stamp of the build. This can be a security issue if you allow force builds from the web or have the WebStatus change hooks enabled, as the worker will download code from an arbitrary repository.
codebase
- This specifies which codebase the source step should use to select the right source stamp. The default codebase value is ''. The codebase must correspond to a codebase assigned by the codebaseGenerator. If there is no codebaseGenerator defined in the master, then codebase doesn't need to be set; the default value will then match all changes.
timeout
- Specifies the timeout for worker-side operations, in seconds. If your repositories are particularly large, then you may need to increase this value from its default of 1200 (20 minutes).
logEnviron
- If this option is true (the default), then the step's logfile will describe the environment variables on the worker. In situations where the environment is not relevant and is long, it may be easier to set logEnviron=False.
env
- A dictionary of environment strings which will be added to the child command's environment. The usual property interpolations can be used in environment variable names and values - see Properties.
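For instance, several of these common parameters combined on a single checkout step (the values are illustrative):

from buildbot.plugins import steps

factory.addStep(steps.Git(
    repourl='https://example.org/project.git',
    mode='incremental',
    retry=(30, 2),        # wait 30 seconds between up to 2 retries
    timeout=2400,         # double the default for a large repository
    logEnviron=False))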
Mercurial¶
class buildbot.steps.source.mercurial.Mercurial¶
The Mercurial build step performs a Mercurial (aka hg) checkout or update.
Branches are available in two modes: dirname, where the name of the branch is a suffix of the name of the repository, or inrepo, which uses Hg's named-branches support.
Make sure this setting matches your changehook, if you have that installed.
from buildbot.plugins import steps
factory.addStep(steps.Mercurial(repourl='path/to/repo', mode='full',
method='fresh', branchType='inrepo'))
The Mercurial step takes the following arguments:
repourl
- Where the Mercurial source repository is available.
defaultBranch
- This specifies the name of the branch to use when a Build does not provide one of its own. This will be appended to repourl to create the string that will be passed to the hg clone command. If alwaysUseLatest is True, then the branch and revision information that comes with the Build is ignored and the branch specified in this parameter is used.
branchType
- Either 'dirname' (default) or 'inrepo', depending on whether the branch name should be appended to the repourl or the branch is a Mercurial named branch and can be found within the repourl.
clobberOnBranchChange
- Boolean, defaults to True. If set and using inrepo branches, clobber the tree at each branch change. Otherwise, just update to the branch.
mode
method
- Mercurial's incremental mode does not require a method. The full mode has three methods defined:

clobber
- It removes the build directory entirely, then makes a full clone from the repository. This can be slow, as it needs to clone the whole repository.
fresh
- This removes all files except those tracked by the VCS. First it does hg purge --all, then pull/update.
clean
- Files which are tracked by Mercurial or listed in the ignore files are not deleted; all other files will be deleted before pull/update. This is equivalent to hg purge, then pull/update.
Git¶
class buildbot.steps.source.git.Git¶
The Git build step clones or updates a Git repository and checks out the specified branch or revision.
Note
The Buildbot supports Git version 1.2.0 and later: earlier versions (such as the one shipped in Ubuntu ‘Dapper’) do not support the git init command that the Buildbot uses.
from buildbot.plugins import steps
factory.addStep(steps.Git(repourl='git://path/to/repo', mode='full',
method='clobber', submodules=True))
The Git step takes the following arguments:
repourl (required)
- The URL of the upstream Git repository.
branch (optional)
- This specifies the name of the branch or the tag to use when a Build does not provide one of its own. If this parameter is not specified, and the Build does not provide a branch, the default branch of the remote repository will be used. If alwaysUseLatest is True, then the branch and revision information that comes with the Build is ignored and the branch specified in this parameter is used.
submodules (optional, default: False)
- When initializing/updating a Git repository, this tells Buildbot whether to handle Git submodules.
shallow (optional)
- Instructs Git to attempt shallow clones (--depth 1). The depth defaults to 1 and can be changed by passing an integer instead of True. This option can be used only in full builds with the clobber method.
reference (optional)
- Use the specified string as a path to a reference repository on the local machine. Git will try to grab objects from this path first instead of the main repository, if they exist.
origin (optional)
- By default, any clone will use the name "origin" as the remote repository (e.g., "origin/master"). This renderable option allows that to be configured to an alternate name.
progress (optional)
- Passes the --progress flag to git fetch. This solves issues of long fetches being killed due to lack of output, but requires Git 1.7.2 or later.
retryFetch (optional, default: False)
- If true, if the git fetch fails, then Buildbot retries the fetch instead of failing the entire source checkout.
clobberOnFailure (optional, default: False)
- If a fetch or full clone fails, we can check out the source again, removing everything first. This way a new repository will be cloned. If the retry fails, it fails the source checkout step.
mode (optional, default: 'incremental')
- Specifies whether to clean the build tree or not.

incremental
- The source is updated, but any built files are left untouched.
full
- The build tree is cleaned of any built files. The exact method for doing this is controlled by the method argument.
method (optional, default: fresh when mode is full)
- Git's incremental mode does not require a method. The full mode has four methods defined:

clobber
- It removes the build directory entirely, then makes a full clone from the repository. This can be slow, as it needs to clone the whole repository. To make clones faster, enable the shallow option. If the shallow option is enabled and the build request has an unknown revision value, then this step fails.
fresh
- This removes all files except those tracked by Git. First it does git clean -d -f -f -x, then fetch/checkout to a specified revision (if any). This option is equal to update mode with ignore_ignores=True in old steps.
clean
- Files which are tracked by Git or listed in the ignore files are not deleted; all other files will be deleted before fetch/checkout. This is equivalent to git clean -d -f -f, then fetch. This is equivalent to ignore_ignores=False in old steps.
copy
- This first checks out the source into the source directory, then copies the source directory to the build directory, and performs the build operation in the copied directory. This way we make fresh builds with much less bandwidth needed to download the source. The source checkout behaves exactly the same as incremental mode; it performs all the incremental checkout behavior in the source directory.
getDescription (optional)
- After checkout, invoke a git describe on the revision and save the result in a property; the property's name is either commit-description or commit-description-foo, depending on whether the codebase argument was also provided. The argument should either be a bool or dict, and will change how git describe is called:

- getDescription=False: disables this feature explicitly
- getDescription=True or empty dict(): runs git describe with no args
- getDescription={...}: a dict with keys named the same as the Git option. Each key's value can be False or None to explicitly skip that argument.

For the following keys, a value of True appends the same-named Git argument:

- all: --all
- always: --always
- contains: --contains
- debug: --debug
- long: --long
- exact-match: --exact-match
- tags: --tags
- dirty: --dirty

For the following keys, an integer or string value (depending on what Git expects) will set the argument's parameter appropriately. Examples show the key-value pair:

- match=foo: --match foo
- abbrev=7: --abbrev=7
- candidates=7: --candidates=7
- dirty=foo: --dirty=foo
config (optional)
- A dict of Git configuration settings to pass to the remote Git commands.
sshPrivateKey (optional)
- The private key to use when running Git for fetch operations. The ssh utility must be in the system path in order to use this option. On Windows, only a Git distribution that embeds MINGW has been tested (as of July 2017 the official distribution is MINGW-based). The worker must either have the host in the known hosts file, or the host key must be specified via the sshHostKey option.
sshHostKey (optional)
- Specifies the public host key to match when authenticating with SSH public key authentication. This may be either a Secret or just a string. sshPrivateKey must be specified in order to use this option. The host key must be in the form of <key type> <base64-encoded string>, e.g. ssh-rsa AAAAB3N<…>FAaQ==.
sshKnownHosts (optional)
- Specifies the contents of the SSH known_hosts file to match when authenticating with SSH public key authentication. This may be either a Secret or just a string. sshPrivateKey must be specified in order to use this option. sshHostKey must not be specified in order to use this option.
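For instance, a sketch of a checkout over SSH (assuming a secrets provider is configured and util.Secret is available; the secret names and repository are invented):

from buildbot.plugins import steps, util

factory.addStep(steps.Git(
    repourl='git@example.org:project.git',
    mode='full', method='fresh',
    sshPrivateKey=util.Secret('project-deploy-key'),
    sshKnownHosts=util.Secret('project-known-hosts')))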
SVN¶
class buildbot.steps.source.svn.SVN¶
The SVN build step performs a Subversion checkout or update.
There are two basic ways of setting up the checkout step, depending upon whether you are using multiple branches or not.
The SVN step should be created with the repourl argument:
repourl
- (required): This specifies the URL argument that will be given to the svn checkout command. It dictates both where the repository is located and which sub-tree should be extracted. One way to specify the branch is to use Interpolate. For example, if you wanted to check out the trunk repository, you could use repourl=Interpolate("http://svn.example.com/repos/%(src::branch)s"). Alternatively, if you are using a remote Subversion repository which is accessible through HTTP at a URL of http://svn.example.com/repos, and you wanted to check out the trunk/calc sub-tree, you would directly use repourl="http://svn.example.com/repos/trunk/calc" as an argument to your SVN step.
If you are building from multiple branches, then you should create the SVN step with the repourl argument and provide branch information with Interpolate:
from buildbot.plugins import steps, util
factory.addStep(steps.SVN(mode='incremental',
repourl=util.Interpolate('svn://svn.example.org/svn/%(src::branch)s/myproject')))
Alternatively, the repourl argument can be used to create the SVN step without Interpolate:
from buildbot.plugins import steps
factory.addStep(steps.SVN(mode='full',
repourl='svn://svn.example.org/svn/myproject/trunk'))
username
- (optional): If specified, this will be passed to the svn binary with a --username option.
password
- (optional): If specified, this will be passed to the svn binary with a --password option.
extra_args
- (optional): If specified, an array of strings that will be passed as extra arguments to the svn binary.
keep_on_purge
- (optional): Specific files or directories to keep between purges, like some build outputs that can be reused between builds.
depth
- (optional): Specify a depth argument to achieve a sparse checkout. Only available if the worker has Subversion 1.5 or higher.

If set to empty, updates will not pull in any files or subdirectories not already present. If set to files, updates will pull in any files not already present, but not directories. If set to immediates, updates will pull in any files or subdirectories not already present; the new subdirectories will have depth: empty. If set to infinity, updates will pull in any files or subdirectories not already present; the new subdirectories will have depth-infinity. Infinity is equivalent to the default SVN update behavior, without specifying any depth argument.
preferLastChangedRev
- (optional): By default, the got_revision property is set to the repository's global revision ("Revision" in the svn info output). Set this parameter to True to have it set to the "Last Changed Rev" instead.
mode
method
- SVN's incremental mode does not require a method. The full mode has five methods defined:

clobber
- It removes the working directory for each build, then makes a full checkout.
fresh
- This always purges local changes before updating. This deletes unversioned files and reverts everything that would appear in svn status --no-ignore. This is equivalent to the old update mode with always_purge.
clean
- This is the same as fresh, except that it deletes all unversioned files generated by svn status.
copy
- This first checks out the source into the source directory, then copies the source directory to the build directory, and performs the build operation in the copied directory. This way we make fresh builds with much less bandwidth needed to download the source. The source checkout behaves exactly the same as incremental mode; it performs all the incremental checkout behavior in the source directory.
export
- Similar to method='copy', except using svn export to create the build directory so that there are no .svn directories in the build directory.
If you are using branches, you must also make sure your ChangeSource will report the correct branch names.
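To make these options concrete, here is a hedged sketch of a full-mode SVN step combining several of the arguments above; the repository URL, credentials, and kept path are hypothetical placeholders, not values from a real project:
from buildbot.plugins import steps

# A sketch of a full-mode SVN checkout (hypothetical URL, credentials,
# and kept path). method='fresh' purges local changes before updating;
# keep_on_purge preserves a build output directory across purges.
factory.addStep(steps.SVN(mode='full',
                          method='fresh',
                          repourl='http://svn.example.com/repos/trunk/calc',
                          username='builder',             # passed via --username
                          password='secret',              # passed via --password
                          keep_on_purge=['build/cache'],  # kept between purges
                          depth='infinity'))              # SVN's default depth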
CVS¶
- class buildbot.steps.source.cvs.CVS¶
The CVS build step performs a CVS checkout or update.
from buildbot.plugins import steps
factory.addStep(steps.CVS(mode='incremental',
cvsroot=':pserver:me@cvs.example.net:/cvsroot/myproj',
cvsmodule='buildbot'))
This step takes the following arguments:
cvsroot
- (required): specify the CVSROOT value, which points to a CVS repository, probably on a remote machine. For example, if Buildbot was hosted in CVS, then the CVSROOT value you would use to get a copy of the Buildbot source code might be :pserver:anonymous@cvs.example.net:/cvsroot/buildbot.
cvsmodule
- (required): specify the cvs module, which is generally a subdirectory of the CVSROOT. The cvsmodule for the Buildbot source code is buildbot.
branch
- a string which will be used in a -r argument. This is most useful for specifying a branch to work on. Defaults to HEAD. If alwaysUseLatest is True, then the branch and revision information that comes with the Build is ignored and the branch specified in this parameter is used.
global_options
- a list of flags to be put before the checkout argument in the CVS command.
extra_options
- a list of flags to be put after the checkout argument in the CVS command.
mode
method
No method is needed for incremental mode. For full mode, method can take the values shown below. If no value is given, it defaults to fresh.
clobber
- This specifies to remove the workdir and make a full checkout.
fresh
- This method first runs cvsdiscard in the build directory, then updates it. This requires cvsdiscard, which is a part of the cvsutil package.
clean
- This method is the same as method='fresh', but it runs cvsdiscard --ignore instead of cvsdiscard.
copy
- This maintains a source directory for the source, which it updates and then copies to the build directory. This allows Buildbot to start each build with a fresh directory, without downloading the entire repository on every build.
login
- Password to use while performing login to the remote CVS server. Default is None, meaning that no login needs to be performed.
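As an illustration, a full-mode CVS step that builds a release branch might look like the following sketch; the repository, module, branch name, and empty anonymous password are all hypothetical:
from buildbot.plugins import steps

# A sketch of a full-mode CVS checkout on a branch (hypothetical values).
# method='clean' runs cvsdiscard --ignore before updating, reverting
# local changes while keeping files matched by .cvsignore.
factory.addStep(steps.CVS(mode='full',
                          method='clean',
                          cvsroot=':pserver:anonymous@cvs.example.net:/cvsroot/myproj',
                          cvsmodule='myproj',
                          branch='RELEASE_1_0',  # used in a -r argument
                          login=''))             # empty password for anonymous pserver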
Bzr¶
- class buildbot.steps.source.bzr.Bzr¶
bzr is a descendant of Arch/Baz, and is frequently referred to as simply Bazaar. The repository-vs-workspace model is similar to Darcs, but it uses a strictly linear sequence of revisions (one history per branch) like Arch. Branches are put in subdirectories. This makes it look very much like Mercurial.
from buildbot.plugins import steps
factory.addStep(steps.Bzr(mode='incremental',
repourl='lp:~knielsen/maria/tmp-buildbot-test'))
The step takes the following arguments:
repourl
- (required unless baseURL is provided): the URL at which the Bzr source repository is available.
baseURL
- (required unless repourl is provided): the base repository URL, to which a branch name will be appended. It should probably end in a slash.
defaultBranch
- (allowed if and only if baseURL is provided): this specifies the name of the branch to use when a Build does not provide one of its own. This will be appended to baseURL to create the string that will be passed to the bzr checkout command. If alwaysUseLatest is True, then the branch and revision information that comes with the Build is ignored and the branch specified in this parameter is used.
mode
method
No method is needed for incremental mode. For full mode, method can take the values shown below. If no value is given, it defaults to fresh.
clobber
- This specifies to remove the workdir and make a full checkout.
fresh
- This method first runs bzr clean-tree to remove all the unversioned files, then updates the repo. This removes all unversioned files, including those in .bzrignore.
clean
- This is the same as fresh, except that it doesn't remove the files mentioned in .bzrignore, i.e., it runs bzr clean-tree --ignore.
copy
- A local bzr repository is maintained, and the repo is copied to the build directory for each build. Before each build, the local bzr repo is updated and then copied to build for the next steps.
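Since the example above uses repourl, here is a hedged sketch of the branch-aware alternative with baseURL and defaultBranch; the base URL and branch name are hypothetical:
from buildbot.plugins import steps

# A sketch of a branch-aware Bzr checkout (hypothetical base URL).
# The branch carried by each Build is appended to baseURL; defaultBranch
# is used when a Build provides no branch of its own.
factory.addStep(steps.Bzr(mode='full',
                          method='fresh',
                          baseURL='http://bzr.example.org/myproject/',
                          defaultBranch='trunk'))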
P4¶
- class buildbot.steps.source.p4.P4¶
The P4 build step creates a Perforce client specification and performs an update.
from buildbot.plugins import steps, util
factory.addStep(steps.P4(p4port=p4port,
                         p4client=util.WithProperties('%(P4USER)s-%(workername)s-%(buildername)s'),
                         p4user=p4user,
                         p4base='//depot',
                         p4branch='trunk',
                         mode='incremental'))
You can specify the client spec in two different ways. You can use the p4base, p4branch, and (optionally) p4extra_views arguments to build up the viewspec, or you can utilize the p4viewspec argument to specify the whole viewspec as a set of tuples.
Using p4viewspec will allow you to add lines such as:
//depot/branch/mybranch/... //<p4client>/...
-//depot/branch/mybranch/notthisdir/... //<p4client>/notthisdir/...
If you specify p4viewspec and any of p4base, p4branch, and/or p4extra_views, you will receive a configuration error exception.
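As a concrete sketch of the tuple form, the two viewspec lines shown above could be produced as follows; the depot paths and connection details are hypothetical:
from buildbot.plugins import steps

# A sketch of a tuple-based viewspec (hypothetical depot paths).
# Each (depotpath, clientpath) tuple becomes one line of the client spec,
# with ... appended automatically; a leading '-' excludes a subtree.
factory.addStep(steps.P4(p4port='perforce.example.com:1666',
                         p4user='buildbot',
                         p4viewspec=[('//depot/branch/mybranch/', ''),
                                     ('-//depot/branch/mybranch/notthisdir/', 'notthisdir/')],
                         mode='incremental'))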
p4base
- A view into the Perforce depot without branch name or trailing /.... Typically //depot/proj.
p4branch
- (optional): a single string, which is appended to the p4base as <p4base>/<p4branch>/... to form the first line in the viewspec.
p4extra_views
- (optional): a list of (depotpath, clientpath) tuples containing extra views to be mapped into the client specification. Both will have /... appended automatically. The client name and source directory will be prepended to the client path.
p4viewspec
This will override any p4branch, p4base, and/or p4extra_views specified. The viewspec will be an array of tuples as follows:
[('//depot/main/','')]
It yields a viewspec with just:
//depot/main/... //<p4client>/...
p4viewspec_suffix
- (optional): The p4viewspec lets you customize the client spec for a builder but, as the previous example shows, it automatically adds ... at the end of each line. If you need to also specify file-level remappings, you can set the p4viewspec_suffix to None so that nothing is added to your viewspec:
[('//depot/main/...', '...'), ('-//depot/main/config.xml', 'config.xml'), ('//depot/main/config.vancouver.xml', 'config.xml')]
It yields a viewspec with:
//depot/main/... //<p4client>/...
-//depot/main/config.xml //<p4client>/main/config.xml
//depot/main/config.vancouver.xml //<p4client>/main/config.xml
Note how, with p4viewspec_suffix set to None, you need to manually add ... where you need it.
p4client_spec_options
- (optional): By default, clients are created with the allwrite rmdir options. This string lets you change that.
p4port
- (optional): the host:port string describing how to get to the P4 Depot (repository), used as the -p option for all p4 commands.
p4user
- (optional): the Perforce user, used as the -u option to all p4 commands.
p4passwd
- (optional): the Perforce password, used as the -P option to all p4 commands.
p4client
- (optional): the name of the client to use. In mode='full' and mode='incremental', it's particularly important that a unique name is used for each checkout directory to avoid incorrect synchronization. For this reason, Python percent substitution will be performed on this value to replace %(prop:workername)s with the worker name and %(prop:buildername)s with the builder name. The default is buildbot_%(prop:workername)s_%(prop:buildername)s.
p4line_end
- (optional): the type of line ending handling P4 should use. This is added directly to the client spec's LineEnd property. The default is local.
p4extra_args
- (optional): extra arguments to be added to the P4 command line for the sync command. For instance, if you want to sync only to populate a Perforce proxy (without actually syncing files to disk), you can do: P4(p4extra_args=['-Zproxyload'], ...)
use_tickets
- Set to True to use ticket-based authentication, instead of passwords (but you still need to specify p4passwd).
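Putting several of these arguments together, here is a hedged sketch of a full-mode step using ticket-based authentication; the server address, user, password, and depot paths are hypothetical:
from buildbot.plugins import steps

# A sketch of a full-mode P4 checkout with ticket-based authentication
# (hypothetical server, user, and depot paths). As noted above, p4passwd
# must still be supplied when use_tickets=True.
factory.addStep(steps.P4(p4port='perforce.example.com:1666',
                         p4user='buildbot',
                         p4passwd='my-ticket-value',
                         use_tickets=True,
                         p4base='//depot/proj',
                         p4branch='main',
                         p4extra_views=[('//depot/shared/libs', 'libs')],
                         mode='full'))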
Repo¶
- class buildbot.steps.source.repo.Repo¶
The Repo build step performs a Repo init and sync.
The Repo step takes the following arguments:
manifestURL
- (required): the URL at which the Repo's manifests source repository is available.
manifestBranch
- (optional, defaults to master): the manifest repository branch on which repo will take its manifest. Corresponds to the -b argument to the repo init command.
manifestFile
- (optional, defaults to default.xml): the manifest filename. Corresponds to the -m argument to the repo init command.
tarball
- (optional, defaults to None): the repo tarball used for fast bootstrap. If not present, the tarball will be created automatically after the first sync. It is a copy of the .repo directory, which contains all the Git objects. This feature helps to minimize network usage on very big projects with lots of workers.
The suffix of the tarball determines if the tarball is compressed and which compressor is chosen. Supported suffixes are bz2, gz, lzma, lzop, and pigz.
jobs
- (optional, defaults to None): number of projects to fetch simultaneously while syncing. Passed to the repo sync subcommand with -j.
syncAllBranches
- (optional, defaults to False): renderable boolean to control whether repo syncs all branches; when False, repo sync is run with -c to sync only the current branch.
depth
- (optional, defaults to 0): depth argument passed to repo init. Specifies the amount of git history to store. A depth of 1 is useful for shallow clones. This can save considerable disk space on very large projects.
updateTarballAge
- (optional, defaults to "one week"): renderable to control the policy for updating the tarball, given the properties. It returns the max age of the tarball in seconds, or None to skip the tarball update. The default value should be a good trade-off between the size of the tarball and the update frequency, compared to the cost of tarball creation.
repoDownloads
- (optional, defaults to None): list of repo download commands to perform at the end of the Repo step. Each string in the list will be prefixed with repo download and run as is, which means you can include parameters in the string. For example:
["-c project 1234/4"] will cherry-pick patchset 4 of change 1234 in project project
["-f project 1234/4"] will enforce fast-forward on patchset 4 of change 1234 in project project
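To make the bootstrap options concrete, here is a hedged sketch of a Repo step using a tarball cache and parallel sync; the tarball path is hypothetical (the manifest URL matches the example further below):
from buildbot.plugins import steps

# A sketch of a Repo step with a compressed bootstrap tarball
# (hypothetical tarball path). The gz suffix selects gzip compression;
# jobs=4 passes -j4 to repo sync; depth=1 makes a shallow clone to
# save disk space.
factory.addStep(steps.Repo(manifestURL='git://gerrit.example.org/manifest.git',
                           manifestBranch='master',
                           manifestFile='default.xml',
                           tarball='/cache/myproject-repo.gz',
                           jobs=4,
                           depth=1))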
- class buildbot.steps.source.repo.RepoDownloadsFromProperties¶
util.repo.DownloadsFromProperties can be used as a renderable of the repoDownloads parameter. It will look in the passed properties for strings with one of the following formats:
repo download project change_number/patchset_number
project change_number/patchset_number
project/change_number/patchset_number
All of these properties will be translated into a repo download. This feature allows integrators to build with several pending interdependent changes, which at the moment cannot be described properly in Gerrit, and can only be described by humans.
- class buildbot.steps.source.repo.RepoDownloadsFromChangeSource¶
util.repo.DownloadsFromChangeSource can be used as a renderable of the repoDownloads parameter.
This renderable integrates with GerritChangeSource, and will automatically use the repo download command of repo to download the additional changes introduced by a pending changeset.
Note
You can use the two renderables above in conjunction by using the class buildbot.process.properties.FlattenList.
For example:
from buildbot.plugins import steps, util
factory.addStep(steps.Repo(manifestURL='git://gerrit.example.org/manifest.git',
                           repoDownloads=util.FlattenList([
                               util.RepoDownloadsFromChangeSource(),
                               util.RepoDownloadsFromProperties("repo_downloads")
                           ])))