2.5.10. Build Steps

BuildSteps are usually specified in the buildmaster's configuration file, in a list that goes into the BuildFactory. The BuildStep instances in this list are used as templates to construct new independent copies for each build (so that state can be kept on the BuildStep in one build without affecting a later build). Each BuildFactory can be created with a list of steps, or the factory can be created empty and then steps added to it using the addStep method:
from buildbot.plugins import util, steps

f = util.BuildFactory()
f.addSteps([
    steps.SVN(repourl="http://svn.example.org/Trunk/"),
    steps.ShellCommand(command=["make", "all"]),
    steps.ShellCommand(command=["make", "test"])
])
The basic behavior of a BuildStep is to:

- run for a while, then stop
- possibly invoke some RemoteCommands on the attached worker
- possibly produce a set of log files
- finish with a status described by one of four values defined in buildbot.status.builder: SUCCESS, WARNINGS, FAILURE, SKIPPED
- provide a list of short strings to describe the step
The rest of this section describes all the standard BuildStep objects available for use in a Build, and the parameters which can be used to control each. A full list of build steps is available in the Build Step Index.
2.5.10.1. Common Parameters
All BuildSteps accept some common parameters. Some of these control how their individual status affects the overall build. Others are used to specify which Locks (see Interlocks) should be acquired before allowing the step to run.

Arguments common to all BuildStep subclasses:

name
- the name used to describe the step on the status display. Since 0.9.8, this argument may be renderable.

haltOnFailure
- if True, a FAILURE of this build step will cause the build to halt immediately. Steps with alwaysRun=True are still run. Generally speaking, haltOnFailure implies flunkOnFailure (the default for most BuildSteps). In some cases, particularly series of tests, it makes sense to haltOnFailure if something fails early on but not flunkOnFailure. This can be achieved with haltOnFailure=True, flunkOnFailure=False.
flunkOnWarnings
- when True, a WARNINGS or FAILURE of this build step will mark the overall build as FAILURE. The remaining steps will still be executed.

flunkOnFailure
- when True, a FAILURE of this build step will mark the overall build as a FAILURE. The remaining steps will still be executed.

warnOnWarnings
- when True, a WARNINGS or FAILURE of this build step will mark the overall build as having WARNINGS. The remaining steps will still be executed.

warnOnFailure
- when True, a FAILURE of this build step will mark the overall build as having WARNINGS. The remaining steps will still be executed.

alwaysRun
- if True, this build step will always be run, even if a previous build step with haltOnFailure=True has failed.
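For example, a cleanup step can combine these flags so that it always runs, even when an earlier step halts the build. A minimal sketch; the make targets here are hypothetical:

from buildbot.plugins import steps

# Halt the whole build immediately if the tests fail...
f.addStep(steps.ShellCommand(command=["make", "test"],
                             haltOnFailure=True))
# ...but still run the cleanup step afterwards.
f.addStep(steps.ShellCommand(command=["make", "clean"],
                             alwaysRun=True))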
description
- This will be used to describe the command (on the Waterfall display) while the command is still running. It should be a single imperfect-tense verb, like compiling or testing. The preferred form is a single, short string, but for historical reasons a list of strings is also acceptable.

descriptionDone
- This will be used to describe the command once it has finished. A simple noun like compile or tests should be used. Like description, this may either be a string or a list of short strings.

If neither description nor descriptionDone is set, the actual command arguments will be used to construct the description. This may be a bit too wide to fit comfortably on the Waterfall display.

All subclasses of BuildStep will contain the description attributes. Consequently, you could add a ShellCommand step like so:

from buildbot.plugins import steps

f.addStep(steps.ShellCommand(command=["make", "test"],
                             description="testing",
                             descriptionDone="tests"))
descriptionSuffix
- This is an optional suffix appended to the end of the description (i.e., after description and descriptionDone). This can be used to distinguish between build steps that would display the same descriptions in the waterfall. This parameter may be a string, a list of short strings, or None.

For example, a builder might use the Compile step to build two different codebases. The descriptionSuffix could be set to projectFoo and projectBar, respectively, for each step, which will result in the full descriptions compiling projectFoo and compiling projectBar being shown in the waterfall.
doStepIf
- A step can be configured to only run under certain conditions. To do this, set the step's doStepIf to a boolean value, or to a function that returns a boolean value or Deferred. If the value or function result is false, then the step will return SKIPPED without doing anything. Otherwise, the step will be executed normally. If you set doStepIf to a function, that function should accept one parameter, which will be the Step object itself.
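As a sketch, the function form can consult a build property (the 'release' property here is hypothetical):

from buildbot.plugins import steps

def is_release_build(step):
    # Run the step only when the 'release' property is set and truthy.
    return bool(step.getProperty("release"))

f.addStep(steps.ShellCommand(command=["make", "dist"],
                             doStepIf=is_release_build))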
hideStepIf
- A step can be optionally hidden from the waterfall and build details web pages. To do this, set the step's hideStepIf to a boolean value, or to a function that takes two parameters (the results and the BuildStep) and returns a boolean value. Steps are always shown while they execute; however, after the step has finished, this parameter is evaluated (if a function), and if the value is True, the step is hidden. For example, in order to hide the step if the step has been skipped:

factory.addStep(Foo(..., hideStepIf=lambda results, s: results == SKIPPED))
locks
- a list of Locks (instances of buildbot.locks.WorkerLock or buildbot.locks.MasterLock) that should be acquired before starting this BuildStep. Alternatively, this could be a renderable that returns this list during build execution. This lets you defer picking the locks to acquire until the build step is about to start running. The Locks will be released when the step is complete. Note that this is a list of actual Lock instances, not names. Also note that all Locks must have unique names. See Interlocks.
logEncoding
- The character encoding to use to decode logs produced during the execution of this step. This overrides the default logEncoding; see Log Handling.
updateBuildSummaryPolicy
- The policy to use to propagate the step summary to the build summary. If False, the build summary will never include the step summary. If True, the build summary will always include the step summary. If set to a list (e.g. [FAILURE, EXCEPTION]), the step summary will be propagated if the step's results id is present in that list. If not set or None, the default is computed according to other BuildStep parameters using the following algorithm:

self.updateBuildSummaryPolicy = [EXCEPTION, RETRY, CANCELLED]
if self.flunkOnFailure or self.haltOnFailure or self.warnOnFailure:
    self.updateBuildSummaryPolicy.append(FAILURE)
if self.warnOnWarnings or self.flunkOnWarnings:
    self.updateBuildSummaryPolicy.append(WARNINGS)

Note that in a custom step, if BuildStep.getResultSummary is overridden and sets the build summary, updateBuildSummaryPolicy is ignored and that build summary will be used regardless.
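As a sketch, the policy can also be pinned explicitly, so a step only surfaces in the build summary on failure or exception (the make target is hypothetical):

from buildbot.plugins import steps
from buildbot.process.results import EXCEPTION, FAILURE

f.addStep(steps.ShellCommand(command=["make", "lint"],
                             updateBuildSummaryPolicy=[FAILURE, EXCEPTION]))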
2.5.10.2. Source Checkout

Caution

Support for the old worker-side source checkout steps was removed in Buildbot-0.9.0.

The old source steps used to be imported like this:

from buildbot.steps.source.oldsource import Git
... Git ...

or:

from buildbot.steps.source import Git

while new source steps are in separate Python modules for each version-control system and, using the plugin infrastructure, are available as:

from buildbot.plugins import steps
... steps.Git ...
Common Parameters of source checkout operations
All source checkout steps accept some common parameters to control how they get the sources and where they should be placed. The remaining per-VC-system parameters are mostly to specify where exactly the sources are coming from.
mode
method
- These two parameters specify the means by which the source is checked out. mode specifies the type of checkout, and method tells about the way to implement it.

from buildbot.plugins import steps, util

factory = util.BuildFactory()
factory.addStep(steps.Mercurial(repourl='path/to/repo',
                                mode='full', method='fresh'))

The mode parameter is a string describing the kind of VC operation that is desired, defaulting to incremental. The options are:

incremental
- Update the source to the desired revision, but do not remove any other files generated by previous builds. This allows compilers to take advantage of object files from previous builds. This mode is exactly the same as the old update mode.

full
- Update the source, but delete remnants of previous builds. Build steps that follow will need to regenerate all object files.

Methods are specific to the version-control system in question, as they may take advantage of special behaviors in that version-control system that can make checkouts more efficient or reliable.
workdir
- like all Steps, this indicates the directory where the build will take place. Source Steps are special in that they perform some operations outside of the workdir (like creating the workdir itself).

alwaysUseLatest
- if True, bypass the usual behavior of checking out the revision in the source stamp, and always update to the latest revision in the repository instead. If the specific VC system supports branches, and a specific branch is specified in the step parameters via the branch or defaultBranch parameters, then the latest revision on that branch is checked out.

retry
- If set, this specifies a tuple of (delay, repeats), which means that when a full VC checkout fails, it should be retried up to repeats times, waiting delay seconds between attempts. If you don't provide this, it defaults to None, which means VC operations should not be retried. This is provided to make life easier for workers which are stuck behind poor network connections.

repository
- The name of this parameter might vary depending on the Source step you are running. The concept explained here is common to all steps and applies to repourl as well as to baseURL (when applicable). A common idiom is to pass Property('repository', 'url://default/repo/path') as repository. This grabs the repository from the source stamp of the build. This can be a security issue if you allow force builds from the web or have the WebStatus change hooks enabled, as the worker will download code from an arbitrary repository.

codebase
- This specifies which codebase the source step should use to select the right source stamp. The default codebase value is ''. The codebase must correspond to a codebase assigned by the codebaseGenerator. If there is no codebaseGenerator defined in the master, then codebase doesn't need to be set; the default value will then match all changes.

timeout
- Specifies the timeout for worker-side operations, in seconds. If your repositories are particularly large, then you may need to increase this value from its default of 1200 (20 minutes).

logEnviron
- If this option is true (the default), then the step's logfile will describe the environment variables on the worker. In situations where the environment is not relevant and is long, it may be easier to set logEnviron=False.

env
- a dictionary of environment strings which will be added to the child command's environment. The usual property interpolations can be used in environment variable names and values - see Properties.
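As a sketch, several of these common parameters can be combined on any source step; the repository URL and values here are hypothetical:

from buildbot.plugins import steps

factory.addStep(steps.Git(repourl='https://example.org/project.git',
                          mode='full', method='clobber',
                          retry=(30, 2),    # wait 30 seconds between attempts, retry twice
                          timeout=2400,     # allow 40 minutes for a large repository
                          logEnviron=False))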
Mercurial

class buildbot.steps.source.mercurial.Mercurial

The Mercurial build step performs a Mercurial (aka hg) checkout or update.

Branches are available in two modes: dirname, where the name of the branch is a suffix of the name of the repository, or inrepo, which uses Hg's named-branches support. Make sure this setting matches your changehook, if you have that installed.

from buildbot.plugins import steps

factory.addStep(steps.Mercurial(repourl='path/to/repo', mode='full',
                                method='fresh', branchType='inrepo'))

The Mercurial step takes the following arguments:
repourl
- where the Mercurial source repository is available.

defaultBranch
- this specifies the name of the branch to use when a Build does not provide one of its own. This will be appended to repourl to create the string that will be passed to the hg clone command. If alwaysUseLatest is True, then the branch and revision information that comes with the Build is ignored and the branch specified in this parameter is used.

branchType
- either 'dirname' (default) or 'inrepo', depending on whether the branch name should be appended to the repourl or the branch is a Mercurial named branch that can be found within the repourl.

clobberOnBranchChange
- boolean, defaults to True. If set and using inrepo branches, clobber the tree at each branch change. Otherwise, just update to the branch.
mode
method
- Mercurial's incremental mode does not require a method. The full mode has three methods defined:

clobber
- It removes the build directory entirely, then makes a full clone from the repository. This can be slow, as it needs to clone the whole repository.

fresh
- This removes all files except those tracked by the VCS. First it does hg purge --all, then pull/update.

clean
- Files tracked by Mercurial, as well as files listed in ignore files, are not deleted. All other files will be deleted before pull/update. This is equivalent to hg purge then pull/update.
Git

class buildbot.steps.source.git.Git

The Git build step clones or updates a Git repository and checks out the specified branch or revision.

Note

Buildbot supports Git version 1.2.0 and later; earlier versions (such as the one shipped in Ubuntu 'Dapper') do not support the git init command that Buildbot uses.

from buildbot.plugins import steps

factory.addStep(steps.Git(repourl='git://path/to/repo', mode='full',
                          method='clobber', submodules=True))

The Git step takes the following arguments:

repourl
- (required) The URL of the upstream Git repository.
branch
- (optional) This specifies the name of the branch or the tag to use when a Build does not provide one of its own. If this parameter is not specified, and the Build does not provide a branch, the default branch of the remote repository will be used. If alwaysUseLatest is True, then the branch and revision information that comes with the Build is ignored and the branch specified in this parameter is used.

submodules
- (optional, default: False) When initializing/updating a Git repository, this tells Buildbot whether to handle Git submodules.

shallow
- (optional) Instructs Git to attempt shallow clones (--depth 1). The depth defaults to 1 and can be changed by passing an integer instead of True. This option can be used only in full builds with the clobber method.

reference
- (optional) Use the specified string as a path to a reference repository on the local machine. Git will try to grab objects from this path first instead of the main repository, if they exist.

origin
- (optional) By default, any clone will use the name "origin" as the remote repository (e.g., "origin/master"). This renderable option allows that to be configured to an alternate name.

progress
- (optional) Passes the --progress flag to git fetch. This solves issues of long fetches being killed due to lack of output, but requires Git 1.7.2 or later.

retryFetch
- (optional, default: False) If true, if the git fetch fails, then Buildbot retries the fetch instead of failing the entire source checkout.

clobberOnFailure
- (optional, default: False) If a fetch or full clone fails, we can check out the source, removing everything. This way a new repository will be cloned. If the retry fails, it fails the source checkout step.
mode
- (optional, default: 'incremental') Specifies whether to clean the build tree or not.

incremental
- The source is updated, but any built files are left untouched.

full
- The build tree is cleaned of any built files. The exact method for doing this is controlled by the method argument.

method
- (optional, default: fresh when mode is full) Git's incremental mode does not require a method. The full mode has four methods defined:

clobber
- It removes the build directory entirely, then makes a full clone from the repository. This can be slow, as it needs to clone the whole repository. To make clones faster, enable the shallow option. If the shallow option is enabled and the build request has an unknown revision value, then this step fails.

fresh
- This removes all files except those tracked by Git. First it does git clean -d -f -f -x, then fetch/checkout to a specified revision (if any). This option is equal to the update mode with ignore_ignores=True in the old steps.

clean
- Files tracked by Git, as well as files listed in ignore files, are not deleted. All other files will be deleted before fetch/checkout. This is equivalent to git clean -d -f -f then fetch. This is equivalent to ignore_ignores=False in the old steps.

copy
- This first checks out the source into the source directory, then copies the source directory to the build directory, and performs the build operation in the copied directory. This way we make fresh builds with very little bandwidth needed to download the source. The source checkout behaves exactly the same as incremental: it performs all the incremental checkout behavior in the source directory.
getDescription
- (optional) After checkout, invoke a git describe on the revision and save the result in a property; the property's name is either commit-description or commit-description-foo, depending on whether the codebase argument was also provided. The argument should be either a bool or a dict, and will change how git describe is called:

- getDescription=False: disables this feature explicitly
- getDescription=True or empty dict(): run git describe with no args
- getDescription={...}: a dict with keys named the same as the Git option. Each key's value can be False or None to explicitly skip that argument.

For the following keys, a value of True appends the same-named Git argument:

- all: --all
- always: --always
- contains: --contains
- debug: --debug
- long: --long
- exact-match: --exact-match
- tags: --tags
- dirty: --dirty

For the following keys, an integer or string value (depending on what Git expects) will set the argument's parameter appropriately. Examples show the key-value pair (see the sketch after this list):

- match=foo: --match foo
- abbrev=7: --abbrev=7
- candidates=7: --candidates=7
- dirty=foo: --dirty=foo
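As a sketch, the dict form might look like this (the repourl is reused from the earlier example):

from buildbot.plugins import steps

factory.addStep(steps.Git(repourl='git://path/to/repo',
                          getDescription={'tags': True, 'abbrev': 7}))
# Roughly equivalent to running: git describe --tags --abbrev=7
# The result is saved in the commit-description property.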
config
- (optional) A dict of Git configuration settings to pass to the remote Git commands.

sshPrivateKey
- (optional) The private key to use when running Git for fetch operations. The ssh utility must be in the system path in order to use this option. On Windows, only a Git distribution that embeds MINGW has been tested (as of July 2017, the official distribution is MINGW-based). The worker must either have the host in the known hosts file, or the host key must be specified via the sshHostKey option.

sshHostKey
- (optional) Specifies the public host key to match when authenticating with SSH public key authentication. This may be either a Secret or just a string. sshPrivateKey must be specified in order to use this option. The host key must be in the form of <key type> <base64-encoded string>, e.g. ssh-rsa AAAAB3N<...>FAaQ==.
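As a sketch, the two SSH options are typically used together; the repository URL and secret name are hypothetical, and the host key is elided as in the form above:

from buildbot.plugins import steps, util

factory.addStep(steps.Git(
    repourl='ssh://git@example.org/project.git',
    sshPrivateKey=util.Secret('git-deploy-key'),  # assumes a configured secrets provider
    sshHostKey='ssh-rsa AAAAB3N<...>FAaQ=='))     # elided key, per the form above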
SVN

class buildbot.steps.source.svn.SVN

The SVN build step performs a Subversion checkout or update. There are two basic ways of setting up the checkout step, depending upon whether you are using multiple branches or not.

The SVN step should be created with the repourl argument:

repourl
- (required): this specifies the URL argument that will be given to the svn checkout command. It dictates both where the repository is located and which sub-tree should be extracted. One way to specify the branch is to use Interpolate. For example, if you wanted to check out the trunk repository, you could use repourl=Interpolate("http://svn.example.com/repos/%(src::branch)s"). Alternatively, if you are using a remote Subversion repository which is accessible through HTTP at a URL of http://svn.example.com/repos, and you wanted to check out the trunk/calc sub-tree, you would directly use repourl="http://svn.example.com/repos/trunk/calc" as an argument to your SVN step.

If you are building from multiple branches, then you should create the SVN step with the repourl and provide branch information with Interpolate:

from buildbot.plugins import steps, util

factory.addStep(steps.SVN(mode='incremental',
                          repourl=util.Interpolate('svn://svn.example.org/svn/%(src::branch)s/myproject')))

Alternatively, the repourl argument can be used to create the SVN step without Interpolate:

from buildbot.plugins import steps

factory.addStep(steps.SVN(mode='full',
                          repourl='svn://svn.example.org/svn/myproject/trunk'))
username
- (optional): if specified, this will be passed to the svn binary with a --username option.

password
- (optional): if specified, this will be passed to the svn binary with a --password option.

extra_args
- (optional): if specified, an array of strings that will be passed as extra arguments to the svn binary.

keep_on_purge
- (optional): specific files or directories to keep between purges, like some build outputs that can be reused between builds.

depth
- (optional): Specify a depth argument to achieve a sparse checkout. Only available if the worker has Subversion 1.5 or higher. If set to empty, updates will not pull in any files or subdirectories not already present. If set to files, updates will pull in any files not already present, but not directories. If set to immediates, updates will pull in any files or subdirectories not already present; the new subdirectories will have depth: empty. If set to infinity, updates will pull in any files or subdirectories not already present; the new subdirectories will have depth: infinity. Infinity is equivalent to the default SVN update behavior, without specifying any depth argument.

preferLastChangedRev
- (optional): By default, the got_revision property is set to the repository's global revision ("Revision" in the svn info output). Set this parameter to True to have it set to the "Last Changed Rev" instead.
mode
method
- SVN's incremental mode does not require a method. The full mode has five methods defined:

clobber
- It removes the working directory for each build, then makes a full checkout.

fresh
- This always purges local changes before updating. This deletes unversioned files and reverts everything that would appear in svn status --no-ignore. This is equivalent to the old update mode with always_purge.

clean
- This is the same as fresh, except that it deletes all unversioned files generated by svn status.

copy
- This first checks out the source into the source directory, then copies the source directory to the build directory, and performs the build operation in the copied directory. This way we make fresh builds with very little bandwidth needed to download the source. The source checkout behaves exactly the same as incremental: it performs all the incremental checkout behavior in the source directory.

export
- Similar to method='copy', except using svn export to create the build directory, so that there are no .svn directories in the build directory.

If you are using branches, you must also make sure your ChangeSource will report the correct branch names.
CVS

class buildbot.steps.source.cvs.CVS

The CVS build step performs a CVS checkout or update.

from buildbot.plugins import steps

factory.addStep(steps.CVS(mode='incremental',
                          cvsroot=':pserver:me@cvs.example.net:/cvsroot/myproj',
                          cvsmodule='buildbot'))

This step takes the following arguments:

cvsroot
- (required): specify the CVSROOT value, which points to a CVS repository, probably on a remote machine. For example, if Buildbot was hosted in CVS, then the CVSROOT value you would use to get a copy of the Buildbot source code might be :pserver:anonymous@cvs.example.net:/cvsroot/buildbot.

cvsmodule
- (required): specify the cvs module, which is generally a subdirectory of the CVSROOT. The cvsmodule for the Buildbot source code is buildbot.

branch
- a string which will be used in a -r argument. This is most useful for specifying a branch to work on. Defaults to HEAD. If alwaysUseLatest is True, then the branch and revision information that comes with the Build is ignored and the branch specified in this parameter is used.

global_options
- a list of flags to be put before the checkout argument in the CVS command.

extra_options
- a list of flags to be put after the checkout in the CVS command.

mode
method
- No method is needed for incremental mode. For full mode, method can take the values shown below. If no value is given, it defaults to fresh.

clobber
- This specifies to remove the workdir and make a full checkout.

fresh
- This method first runs cvsdiscard in the build directory, then updates it. This requires cvsdiscard, which is part of the cvsutils package.

clean
- This method is the same as method='fresh', but it runs cvsdiscard --ignore instead of cvsdiscard.

copy
- This maintains a source directory for the source, which it updates and then copies to the build directory. This allows Buildbot to start with a fresh directory, without downloading the entire repository on every build.

login
- Password to use while performing login to the remote CVS server. Default is None, meaning that no login needs to be performed.
Bzr

class buildbot.steps.source.bzr.Bzr

bzr is a descendant of Arch/Baz, and is frequently referred to as simply Bazaar. The repository-vs-workspace model is similar to Darcs, but it uses a strictly linear sequence of revisions (one history per branch) like Arch. Branches are put in subdirectories. This makes it look very much like Mercurial.

from buildbot.plugins import steps

factory.addStep(steps.Bzr(mode='incremental',
                          repourl='lp:~knielsen/maria/tmp-buildbot-test'))

The step takes the following arguments:

repourl
- (required unless baseURL is provided): the URL at which the Bzr source repository is available.

baseURL
- (required unless repourl is provided): the base repository URL, to which a branch name will be appended. It should probably end in a slash.

defaultBranch
- (allowed if and only if baseURL is provided): this specifies the name of the branch to use when a Build does not provide one of its own. This will be appended to baseURL to create the string that will be passed to the bzr checkout command. If alwaysUseLatest is True, then the branch and revision information that comes with the Build is ignored and the branch specified in this parameter is used.

mode
method
- No method is needed for incremental mode. For full mode, method can take the values shown below. If no value is given, it defaults to fresh.

clobber
- This specifies to remove the workdir and make a full checkout.

fresh
- This method first runs bzr clean-tree to remove all the unversioned files, then updates the repo. This removes all unversioned files, including those in .bzrignore.

clean
- This is the same as fresh, except that it doesn't remove the files mentioned in .bzrignore, i.e., by running bzr clean-tree --ignored.

copy
- A local bzr repository is maintained, and the repo is copied to the build directory for each build. Before each build, the local bzr repo is updated and then copied to build for the next steps.
P4

class buildbot.steps.source.p4.P4

The P4 build step creates a Perforce client specification and performs an update.

from buildbot.plugins import steps, util

factory.addStep(steps.P4(p4port=p4port,
                         p4client=util.WithProperties('%(P4USER)s-%(workername)s-%(buildername)s'),
                         p4user=p4user,
                         p4base='//depot',
                         p4viewspec=p4viewspec,
                         mode='incremental'))

You can specify the client spec in two different ways. You can use the p4base, p4branch, and (optionally) p4extra_views to build up the viewspec, or you can utilize the p4viewspec to specify the whole viewspec as a set of tuples.

Using p4viewspec will allow you to add lines such as:

//depot/branch/mybranch/... //<p4client>/...
-//depot/branch/mybranch/notthisdir/... //<p4client>/notthisdir/...

If you specify p4viewspec and any of p4base, p4branch, and/or p4extra_views, you will receive a configuration error exception.
p4base
- A view into the Perforce depot without branch name or trailing /.... Typically //depot/proj.

p4branch
- (optional): A single string, which is appended to the p4base as follows: <p4base>/<p4branch>/... to form the first line in the viewspec.

p4extra_views
- (optional): a list of (depotpath, clientpath) tuples containing extra views to be mapped into the client specification. Both will have /... appended automatically. The client name and source directory will be prepended to the client path.

p4viewspec
- This will override any p4branch, p4base, and/or p4extra_views specified. The viewspec will be an array of tuples, as follows:

[('//depot/main/', '')]

It yields a viewspec with just:

//depot/main/... //<p4client>/...

p4viewspec_suffix
- (optional): The p4viewspec lets you customize the client spec for a builder but, as the previous example shows, it automatically adds ... at the end of each line. If you need to also specify file-level remappings, you can set the p4viewspec_suffix to None so that nothing is added to your viewspec:

[('//depot/main/...', '...'),
 ('-//depot/main/config.xml', 'config.xml'),
 ('//depot/main/config.vancouver.xml', 'config.xml')]

It yields a viewspec with:

//depot/main/... //<p4client>/...
-//depot/main/config.xml //<p4client>/main/config.xml
//depot/main/config.vancouver.xml //<p4client>/main/config.xml

Note how, with p4viewspec_suffix set to None, you need to manually add ... where you need it.

p4client_spec_options
- (optional): By default, clients are created with the allwrite rmdir options. This string lets you change that.

p4port
- (optional): the host:port string describing how to get to the P4 Depot (repository), used as the -p argument for all p4 commands.

p4user
- (optional): the Perforce user, used as the -u argument to all p4 commands.

p4passwd
- (optional): the Perforce password, used as the -P argument to all p4 commands.

p4client
- (optional): The name of the client to use. In mode='full' and mode='incremental', it's particularly important that a unique name is used for each checkout directory to avoid incorrect synchronization. For this reason, Python percent substitution will be performed on this value to replace %(prop:workername)s with the worker name and %(prop:buildername)s with the builder name. The default is buildbot_%(prop:workername)s_%(prop:buildername)s.

p4line_end
- (optional): The type of line ending handling P4 should use. This is added directly to the client spec's LineEnd property. The default is local.

p4extra_args
- (optional): Extra arguments to be added to the P4 command line for the sync command. For instance, if you want to sync only to populate a Perforce proxy (without actually syncing files to disk), you can do:

P4(p4extra_args=['-Zproxyload'], ...)

use_tickets
- Set to True to use ticket-based authentication, instead of passwords (but you still need to specify p4passwd).
Repo

class buildbot.steps.source.repo.Repo

The Repo build step performs a Repo init and sync.

The Repo step takes the following arguments:

manifestURL
- (required): the URL at which the Repo's manifests source repository is available.

manifestBranch
- (optional, defaults to master): the manifest repository branch from which repo will take its manifest. Corresponds to the -b argument to the repo init command.

manifestFile
- (optional, defaults to default.xml): the manifest filename. Corresponds to the -m argument to the repo init command.

tarball
- (optional, defaults to None): the repo tarball used for fast bootstrap. If not present, the tarball will be created automatically after the first sync. It is a copy of the .repo directory, which contains all the Git objects. This feature helps to minimize network usage on very big projects with lots of workers. The suffix of the tarball determines if the tarball is compressed and which compressor is chosen. Supported suffixes are bz2, gz, lzma, lzop, and pigz.

jobs
- (optional, defaults to None): Number of projects to fetch simultaneously while syncing. Passed to the repo sync subcommand with "-j".

syncAllBranches
- (optional, defaults to False): renderable boolean to control whether repo syncs all branches (i.e., repo sync -c).

depth
- (optional, defaults to 0): Depth argument passed to repo init. Specifies the amount of git history to store. A depth of 1 is useful for shallow clones. This can save considerable disk space on very large projects.

updateTarballAge
- (optional, defaults to "one week"): renderable to control the policy of updating the tarball, given the properties. Returns the max age of the tarball in seconds, or None if we want to skip the tarball update. The default value should be a good trade-off between the size of the tarball and update frequency, compared to the cost of tarball creation.

repoDownloads
- (optional, defaults to None): list of repo download commands to perform at the end of the Repo step. Each string in the list will be prefixed with repo download, and run as is. This means you can include parameters in the string. For example:

- ["-c project 1234/4"] will cherry-pick patchset 4 of patch 1234 in project project
- ["-f project 1234/4"] will enforce fast-forward on patchset 4 of patch 1234 in project project
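As a sketch, a basic Repo step might look like this; the manifest URL matches the example further below, and the tarball path is hypothetical:

from buildbot.plugins import steps

factory.addStep(steps.Repo(manifestURL='git://gerrit.example.org/manifest.git',
                           manifestBranch='master',
                           manifestFile='default.xml',
                           tarball='/tmp/repo-cache.tar.bz2',  # hypothetical cache path
                           jobs=4))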
class buildbot.steps.source.repo.RepoDownloadsFromProperties

util.repo.DownloadsFromProperties can be used as a renderable of the repoDownloads parameter. It will look in the passed properties for strings with the following possible formats:

- repo download project change_number/patchset_number
- project change_number/patchset_number
- project/change_number/patchset_number

All of these properties will be translated into a repo download. This feature allows integrators to build with several pending interdependent changes, which at the moment cannot be described properly in Gerrit, and can only be described by humans.
class buildbot.steps.source.repo.RepoDownloadsFromChangeSource

util.repo.DownloadsFromChangeSource can be used as a renderable of the repoDownloads parameter.

This renderable integrates with GerritChangeSource, and will automatically use the repo download command of repo to download the additional changes introduced by a pending changeset.

Note

You can use the two renderables above in conjunction, by using the class buildbot.process.properties.FlattenList.

For example:

from buildbot.plugins import steps, util

factory.addStep(steps.Repo(manifestURL='git://gerrit.example.org/manifest.git',
                           repoDownloads=util.FlattenList([
                               util.RepoDownloadsFromChangeSource(),
                               util.RepoDownloadsFromProperties("repo_downloads")
                           ])))
Gerrit

class buildbot.steps.source.gerrit.Gerrit

The Gerrit step is exactly like the Git step, except that it integrates with GerritChangeSource and will automatically check out the additional changes.

Gerrit integration can also be triggered using a forced build with a property named gerrit_change, with values in the format change_number/patchset_number. This property will be translated into a branch name. This feature allows integrators to build with several pending interdependent changes, which at the moment cannot be described properly in Gerrit, and can only be described by humans.
GitHub

class buildbot.steps.source.github.GitHub

The GitHub step is exactly like the Git step, except that it will ignore the revision sent by the GitHub change hook and instead take the branch, if the branch ends with /merge.

This allows testing GitHub pull requests merged directly into the mainline. GitHub indeed provides refs/origin/pull/NNN/merge on top of refs/origin/pull/NNN/head; the former is a magic ref that always creates a merge commit to the latest version of the mainline (i.e., the target branch for the pull request).

The revision in the GitHub event points to /head. This is important for the GitHub reporter, as this is the revision that will be tagged with a CI status when the build is finished.

If you want to use Trigger to create sub tests and want to have the GitHub reporter still update the original revision, make sure you set updateSourceStamp=False in the Trigger configuration.
GitLab

class buildbot.steps.source.gitlab.GitLab

The GitLab step is exactly like the Git step, except that it uses the source repo and branch sent by the GitLab change hook when processing merge requests.

When configuring builders, you can use a ChangeFilter with category = "push" to select normal commits, and category = "merge_request" to select merge requests.

See master/docs/examples/gitlab.cfg in the Buildbot distribution for a tutorial example of integrating Buildbot with GitLab.

Note

Your build worker will need access to the source project of the changeset, or it won't be able to check out the source. This means authenticating the build worker via ssh credentials in the usual way, then granting it access via a GitLab deploy key or GitLab project membership (https://docs.gitlab.com/ee/ssh/). This needs to be done not only for the main git repo, but also for each fork that wants to be able to submit merge requests against the main repo.
Darcs

class buildbot.steps.source.darcs.Darcs

The Darcs build step performs a Darcs checkout or update.

from buildbot.plugins import steps

factory.addStep(steps.Darcs(repourl='http://path/to/repo',
                            mode='full', method='clobber', retry=(10, 1)))

The Darcs step takes the following arguments:

repourl
- (required): The URL at which the Darcs source repository is available.

mode
- (optional): defaults to 'incremental'. Specifies whether to clean the build tree or not.

incremental
- The source is updated, but any built files are left untouched.

full
- The build tree is cleaned of any built files. The exact method for doing this is controlled by the method argument.

method
- (optional): defaults to copy when mode is full. Darcs' incremental mode does not require a method. The full mode has two methods defined:

clobber
- It removes the working directory for each build, then makes a full checkout.

copy
- This first checks out the source into the source directory, then copies the source directory to the build directory, and performs the build operation in the copied directory. This way we make fresh builds with very little bandwidth needed to download the source. The source checkout behaves exactly the same as incremental: it performs all the incremental checkout behavior in the source directory.
Monotone

class buildbot.steps.source.mtn.Monotone

The Monotone build step performs a Monotone checkout or update.

from buildbot.plugins import steps

factory.addStep(steps.Monotone(repourl='http://path/to/repo',
                               mode='full', method='clobber',
                               branch='some.branch.name', retry=(10, 1)))

The Monotone step takes the following arguments:

repourl
- the URL at which the Monotone source repository is available.

branch
- this specifies the name of the branch to use when a Build does not provide one of its own. If alwaysUseLatest is True, then the branch and revision information that comes with the Build is ignored and the branch specified in this parameter is used.

progress
- this is a boolean that makes a pull from the repository use --ticker=dot instead of the default --ticker=none.

mode
- (optional): defaults to 'incremental'. Specifies whether to clean the build tree or not. In any case, the worker first pulls from the given remote repository to synchronize (or possibly initialize) its local database. The mode and method only affect how the build tree is checked out or updated from the local database.

incremental
- The source is updated, but any built files are left untouched.

full
- The build tree is cleaned of any built files. The exact method for doing this is controlled by the method argument. Even in this mode, the revisions already pulled remain in the database, and a fresh pull is rarely needed.

method
- (optional): defaults to copy when mode is full. Monotone's incremental mode does not require a method. The full mode has four methods defined:

clobber
- It removes the build directory entirely, then makes a fresh checkout from the database.

clean
- This removes all files except those tracked and ignored by Monotone. It will remove all the files that appear in mtn ls unknown. Then it will pull from the remote and update the working directory.

fresh
- This removes all files except those tracked by Monotone. It will remove all the files that appear in mtn ls ignored and mtn ls unknown. Then it pulls and updates, similar to clean.

copy
- This first checks out the source into the source directory, then copies the source directory to the build directory, and performs the build operation in the copied directory. This way we make fresh builds with very little bandwidth needed to download the source. The source checkout behaves exactly the same as incremental: it performs all the incremental checkout behavior in the source directory.
2.5.10.3. Other Source operations

Currently the only non-checkout step that is related to version control is GitPush.

GitPush

class buildbot.steps.source.git.GitPush

The GitPush build step pushes new commits to a Git repository.

The GitPush step takes the following arguments:

workdir
- (required) The path to the local repository to push commits from.

repourl
- (required) The URL of the upstream Git repository.

branch
- (required) The branch to push. The branch should already exist on the local repository.

force
- (optional) If True, forces overwrite of refs on the remote repository. Corresponds to the --force flag of the git push command.

logEnviron
- (optional) If this option is true (the default), then the step's logfile will describe the environment variables on the worker. In situations where the environment is not relevant and is long, it may be easier to set logEnviron=False.

env
- (optional) A dictionary of environment strings which will be added to the child command's environment. The usual property interpolations can be used in environment variable names and values - see Properties.

timeout
- (optional) Specifies the timeout for worker-side operations, in seconds. If your repositories are particularly large, then you may need to increase this value from its default of 1200 (20 minutes).

config
- (optional) A dict of git configuration settings to pass to the remote git commands.

sshPrivateKey
- (optional) The private key to use when running git for fetch operations. The ssh utility must be in the system path in order to use this option. On Windows, only a git distribution that embeds MINGW has been tested (as of July 2017, the official distribution is MINGW-based). The worker must either have the host in the known hosts file, or the host key must be specified via the sshHostKey option.

sshHostKey
- (optional) Specifies the public host key to match when authenticating with SSH public key authentication. This may be either a Secret or just a string. sshPrivateKey must be specified in order to use this option. The host key must be in the form of <key type> <base64-encoded string>, e.g. ssh-rsa AAAAB3N<...>FAaQ==.
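As a sketch, a GitPush step combining the required arguments might look like this; the workdir, URL, and branch are hypothetical:

from buildbot.plugins import steps

factory.addStep(steps.GitPush(workdir='build',
                              repourl='ssh://git@example.org/project.git',
                              branch='main'))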
2.5.10.4. ShellCommand

Most interesting steps involve executing a process of some sort on the worker. The ShellCommand class handles this activity. Several subclasses of ShellCommand are provided as starting points for common build steps.
Using ShellCommands

class buildbot.steps.shell.ShellCommand

This is a useful base class for just about everything you might want to do during a build (except for the initial source checkout). It runs a single command in a child shell on the worker. All stdout/stderr is recorded into a LogFile. The step usually finishes with a status of FAILURE if the command's exit code is non-zero; otherwise it has a status of SUCCESS.

The preferred way to specify the command is with a list of argv strings, since this allows for spaces in filenames and avoids doing any fragile shell-escaping. You can also specify the command with a single string, in which case the string is given to /bin/sh -c COMMAND for parsing.

On Windows, commands are run via cmd.exe /c, which works well. However, if you're running a batch file, the error level does not get propagated correctly unless you add 'call' before your batch file's name: cmd=['call', 'myfile.bat', ...].
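As a sketch, a Windows batch-file step would therefore look like this (myfile.bat is the hypothetical batch file from above):

from buildbot.plugins import steps

# Prepend 'call' so the batch file's error level propagates to the step.
f.addStep(steps.ShellCommand(command=["call", "myfile.bat"]))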
The ShellCommand arguments are:

command
- a list of strings (preferred) or a single string (discouraged) which specifies the command to be run. A list of strings is preferred because it can be used directly as an argv array. Using a single string (with embedded spaces) requires the worker to pass the string to /bin/sh for interpretation, which raises all sorts of difficult questions about how to escape or interpret shell metacharacters. If command contains nested lists (for example, from a properties substitution), then that list will be flattened before it is executed.

workdir
- All ShellCommands are run by default in the workdir, which defaults to the build subdirectory of the worker builder's base directory. The absolute path of the workdir will thus be the worker's basedir (set as an option to buildbot-worker create-worker, Creating a worker) plus the builder's basedir (set in the builder's builddir key in master.cfg) plus the workdir itself (a class-level attribute of the BuildFactory, defaults to build).

For example:

from buildbot.plugins import steps

f.addStep(steps.ShellCommand(command=["make", "test"],
                             workdir="build/tests"))
env
- a dictionary of environment strings which will be added to the child command's environment. For example, to run tests with a different i18n language setting, you might use:

from buildbot.plugins import steps

f.addStep(steps.ShellCommand(command=["make", "test"],
                             env={'LANG': 'fr_FR'}))

These variable settings will override any existing ones in the worker's environment or the environment specified in the Builder. The exception is PYTHONPATH, which is merged with (actually prepended to) any existing PYTHONPATH setting. The following example will prepend /home/buildbot/lib/python to any existing PYTHONPATH:

from buildbot.plugins import steps

f.addStep(steps.ShellCommand(
    command=["make", "test"],
    env={'PYTHONPATH': "/home/buildbot/lib/python"}))

To avoid the need to concatenate paths together in the master config file, if the value is a list, it will be joined together using the right platform-dependent separator.

Those variables support expansion, so that if you just want to prepend /home/buildbot/bin to the PATH environment variable, you can do it by putting the value ${PATH} at the end of the value, as in the example below. Variables that don't exist on the worker will be replaced by "".

from buildbot.plugins import steps

f.addStep(steps.ShellCommand(
    command=["make", "test"],
    env={'PATH': ["/home/buildbot/bin", "${PATH}"]}))

Note that environment values must be strings (or lists that are turned into strings). In particular, numeric properties such as buildnumber must be substituted using Interpolate.
want_stdout
- if False, stdout from the child process is discarded rather than being sent to the buildmaster for inclusion in the step's LogFile.

want_stderr
- like want_stdout but for stderr. Note that commands run through a PTY do not have separate stdout/stderr streams: both are merged into stdout.

usePTY
- Should this command be run in a pty? False by default. This option is not available on Windows. In general, you do not want to use a pseudo-terminal. This is only useful for running commands that require a terminal - for example, testing a command-line application that will only accept passwords read from a terminal. Using a pseudo-terminal brings lots of compatibility problems, and prevents Buildbot from distinguishing the standard error (red) and standard output (black) streams.

In previous versions, the advantage of using a pseudo-terminal was that grandchild processes were more likely to be cleaned up if the build was interrupted or timed out. This occurred because using a pseudo-terminal incidentally puts the command into its own process group. As of Buildbot-0.8.4, all commands are placed in process groups, and thus grandchild processes will be cleaned up properly.
logfiles
- Sometimes commands will log interesting data to a local file, rather than emitting everything to stdout or stderr. For example, Twisted's trial command (which runs unit tests) only presents summary information to stdout, and puts the rest into a file named _trial_temp/test.log. It is often useful to watch these files as the command runs, rather than using /bin/cat to dump their contents afterwards.

The logfiles= argument allows you to collect data from these secondary logfiles in near-real-time, as the step is running. It accepts a dictionary which maps from a local Log name (which is how the log data is presented in the build results) to either a remote filename (interpreted relative to the build's working directory), or a dictionary of options. Each named file will be polled on a regular basis (every couple of seconds) as the build runs, and any new text will be sent over to the buildmaster.

If you provide a dictionary of options instead of a string, you must specify the filename key. You can optionally provide a follow key, which is a boolean controlling whether a logfile is followed or concatenated in its entirety. Following is appropriate for logfiles to which the build step will append, where the pre-existing contents are not interesting. The default value for follow is False, which gives the same behavior as just providing a string filename.

from buildbot.plugins import steps

f.addStep(steps.ShellCommand(command=["make", "test"],
                             logfiles={"triallog": "_trial_temp/test.log"}))

The above example will add a log named 'triallog' on the master, based on _trial_temp/test.log on the worker.

from buildbot.plugins import steps

f.addStep(steps.ShellCommand(command=["make", "test"],
                             logfiles={
                                 "triallog": {
                                     "filename": "_trial_temp/test.log",
                                     "follow": True
                                 }
                             }))
lazylogfiles
- If set to True, logfiles will be tracked lazily, meaning that they will only be added when and if something is written to them. This can be used to suppress the display of empty or missing log files. The default is False.

timeout
- if the command fails to produce any output for this many seconds, it is assumed to be locked up and will be killed. This defaults to 1200 seconds. Pass None to disable.

maxTime
- if the command takes longer than this many seconds, it will be killed. This is disabled by default.

logEnviron
- If this option is True (the default), then the step's logfile will describe the environment variables on the worker. In situations where the environment is not relevant and is long, it may be easier to set logEnviron=False.

interruptSignal
- If the command should be interrupted (either by buildmaster or timeout, etc.), what signal should be sent to the process, specified by name. By default this is "KILL" (9). Specify "TERM" (15) to give the process a chance to clean up. This functionality requires a 0.8.6 worker or newer.

sigtermTime
- If set, when interrupting, try to kill the command with SIGTERM and wait for sigtermTime seconds before firing interruptSignal. If None, interruptSignal will be fired immediately on interrupt.
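As a sketch, the timeout and signal options combine like this; the command and values are hypothetical:

from buildbot.plugins import steps

f.addStep(steps.ShellCommand(command=["make", "soak-test"],
                             timeout=300,      # kill if no output for 5 minutes
                             maxTime=3600,     # kill after 1 hour, regardless of output
                             sigtermTime=10))  # try SIGTERM, then fire interruptSignal (default KILL) after 10 seconds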
initialStdin
- If the command expects input on stdin, that can be supplied as a string with this parameter. This value should not be excessively large, as it is handled as a single string throughout Buildbot - for example, do not pass the contents of a tarball with this parameter.

decodeRC
- This is a dictionary that decodes exit codes into results values. For example, {0:SUCCESS, 1:FAILURE, 2:WARNINGS} will treat the exit code 2 as WARNINGS. The default ({0:SUCCESS}) is to treat just 0 as successful. Any exit code not present in the dictionary will be treated as FAILURE.
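As a sketch, a step whose command signals warnings with exit code 2 (the script name is hypothetical):

from buildbot.plugins import steps
from buildbot.process.results import FAILURE, SUCCESS, WARNINGS

f.addStep(steps.ShellCommand(command=["./run-checks.sh"],
                             decodeRC={0: SUCCESS, 1: FAILURE, 2: WARNINGS}))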
Shell Sequence

Some steps have a specific purpose, but require multiple shell commands to implement them. For example, a build is often configure; make; make install. We have two ways to handle that:

- Create one shell command with all of these. To put the logs of each command in separate logfiles, we need to re-write the script as configure 1> configure_log; ... and to add these configure_log files as the logfiles argument of the buildstep. This has the drawback of complicating the shell script, and making it harder to maintain, as the logfile name is put in different places.
- Create three ShellCommand instances, but this loads the build UI unnecessarily.
ShellSequence is a class to execute not one but a sequence of shell commands during a build. It takes as argument a renderable, or a list of commands which are ShellArg objects. Each such object represents a shell invocation.

The single ShellSequence argument, aside from the common parameters, is:

commands
- A list of ShellArg objects, or a renderable that returns a list of ShellArg objects.
from buildbot.plugins import steps, util

f.addStep(steps.ShellSequence(
    commands=[
        util.ShellArg(command=['configure']),
        util.ShellArg(command=['make'], logfile='make'),
        util.ShellArg(command=['make', 'check_warning'], logfile='warning',
                      warnOnFailure=True),
        util.ShellArg(command=['make', 'install'], logfile='make install')
    ]))

All these commands share the same configuration of environment, workdir and pty usage, which can be set up the same way as in ShellCommand.
class buildbot.steps.shellsequence.ShellArg(self, command=None, logfile=None, haltOnFailure=False, flunkOnWarnings=False, flunkOnFailure=False, warnOnWarnings=False, warnOnFailure=False)

Parameters:
- command - (see the ShellCommand command argument)
- logfile - optional log file name, used as the stdio log of the command

The haltOnFailure, flunkOnWarnings, flunkOnFailure, warnOnWarnings, and warnOnFailure parameters drive the execution of the sequence, the same way steps are scheduled in the build. They have the same default values as for buildsteps - see Common Parameters.

Any of the arguments to this class can be renderable.

Note that if the logfile name does not start with the prefix stdio, that prefix will be set, like stdio <logfile>.
The two ShellSequence methods below tune the behavior of how the list of shell commands are executed, and can be overridden in subclasses.

class buildbot.steps.shellsequence.ShellSequence

shouldRunTheCommand(oneCmd)

Parameters:
- oneCmd - a string or a list of strings, as rendered from a ShellArg instance's command argument.

Determine whether the command oneCmd should be executed. If shouldRunTheCommand returns False, the result of the command will be recorded as SKIPPED. The default method skips all empty strings and empty lists.

getFinalState()

Return the status text of the step at the end. The default is to set the text describing the execution of the last shell command.

runShellSequence(commands)

Parameters:
- commands - list of shell args

This method actually runs the shell sequence. The default run method calls runShellSequence, but subclasses can override run to perform other operations, if desired.
Configure

class buildbot.steps.shell.Configure

This is intended to handle the ./configure step from autoconf-style projects, or the perl Makefile.PL step from perl MakeMaker.pm-style modules. The default command is ./configure, but you can change this by providing a command= parameter. The arguments are identical to ShellCommand.

from buildbot.plugins import steps

f.addStep(steps.Configure())
CMake

class buildbot.steps.cmake.CMake

This is intended to handle the cmake step for projects that use CMake-based build systems.

Note

Links below point to the latest CMake documentation. Make sure that you check the documentation for the CMake you use.

In addition to the parameters ShellCommand supports, this step accepts the following parameters:

path
- Either a path to a source directory to (re-)generate a build system for in the current working directory, or an existing build directory whose build system should be re-generated.

generator
- A build system generator. See cmake-generators(7) for available options.

definitions
- A dictionary that contains parameters that will be converted to -D{name}={value} when passed to CMake. A renderable which renders to a dictionary can also be provided; see Properties. Refer to cmake(1) for more information.

options
- A list or a tuple that contains options that will be passed to CMake as is. A renderable which renders to a tuple or list can also be provided; see Properties. Refer to cmake(1) for more information.

cmake
- Path to the CMake binary. Default is cmake.
from buildbot.plugins import steps, util

...
factory.addStep(
    steps.CMake(
        generator='Ninja',
        definitions={
            'CMAKE_BUILD_TYPE': util.Property('BUILD_TYPE')
        },
        options=[
            '-Wno-dev'
        ]
    )
)
...
Compile

This is meant to handle compiling or building a project written in C. The default command is make all. When the compilation is finished, the log file is scanned for GCC warning messages, a summary log is created with any problems that were seen, and the step is marked as WARNINGS if any were discovered. Through the WarningCountingShellCommand superclass, the number of warnings is stored in a Build Property named warnings-count, which is accumulated over all Compile steps (so if two warnings are found in one step, and three are found in another step, the overall build will have a warnings-count property of 5). Each step can be optionally given a maximum number of warnings via the maxWarnCount parameter. If this limit is exceeded, the step will be marked as a failure.

The default regular expression used to detect a warning is '.*warning[: ].*', which is fairly liberal and may cause false-positives. To use a different regexp, provide a warningPattern= argument, or use a subclass which sets the warningPattern attribute:
from buildbot.plugins import steps
f.addStep(steps.Compile(command=["make", "test"],
warningPattern="^Warning: "))
The warningPattern=
can also be a pre-compiled Python regexp object: this makes it possible to add flags like re.I
(to use case-insensitive matching).
Note that the compiled warningPattern
will have its match
method called, which is subtly different from a search
.
Your regular expression must match from the beginning of the line.
This means that to look for the word “warning” in the middle of a line, you will need to prepend '.*'
to your regular expression.
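For example, to make the default pattern case-insensitive (note the leading .*, which is needed because the compiled pattern is used with match):
import re

from buildbot.plugins import steps

f.addStep(steps.Compile(command=["make"],
                        warningPattern=re.compile(r'.*warning[: ].*', re.I)))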
The suppressionFile=
argument can be specified as the (relative) path of a file inside the workdir defining warnings to be suppressed from the warning counting and log file.
The file will be uploaded to the master from the worker before compiling, and any warning matched by a line in the suppression file will be ignored.
This is useful to accept certain warnings (e.g. in some special module of the source tree or in cases where the compiler is being particularly stupid), yet still be able to easily detect and fix the introduction of new warnings.
The file must contain one line per pattern of warnings to ignore.
Empty lines and lines beginning with #
are ignored.
Other lines must consist of a regexp matching the file name, followed by a colon (:
), followed by a regexp matching the text of the warning.
Optionally this may be followed by another colon and a line number range.
For example:
# Sample warning suppression file
mi_packrec.c : .*result of 32-bit shift implicitly converted to 64 bits.* : 560-600
DictTabInfo.cpp : .*invalid access to non-static.*
kernel_types.h : .*only defines private constructors and has no friends.* : 51
If no line number range is specified, the pattern matches the whole file; if only one number is given it matches only on that line.
The suppressionList=
argument can be specified as a list of four-tuples in addition to, or instead of, suppressionFile=
.
The tuple should be [ FILE-RE, WARNING-RE, START, END ]
.
If FILE-RE
is None
, then the suppression applies to any file.
START
and END
can be specified as in the suppression file, or as None
.
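For example, here is a sketch that suppresses two warnings everywhere (the warning patterns are illustrative):
from buildbot.plugins import steps

f.addStep(steps.Compile(command=["make"],
                        suppressionList=[
                            # [ FILE-RE, WARNING-RE, START, END ]
                            (None, r".*deprecated.*", None, None),
                            (None, r".*unused variable.*", None, None),
                        ]))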
The default warningPattern regexp only matches the warning text, so line numbers and file names are ignored.
To enable line number and file name matching, provide a different regexp and provide a function (callable) as the argument of warningExtractor=
.
The function is called with three arguments: the BuildStep
object, the line in the log file with the warning, and the SRE_Match
object of the regexp search for warningPattern
.
It should return a tuple (filename, linenumber, warning_text)
.
For example:
from buildbot.plugins import steps

f.addStep(steps.Compile(command=["make"],
                        warningPattern="^(.*?):([0-9]+): [Ww]arning: (.*)$",
                        warningExtractor=steps.Compile.warnExtractFromRegexpGroups,
                        suppressionFile="support-files/compiler_warnings.supp"))
(Compile.warnExtractFromRegexpGroups
is a pre-defined function that returns the filename, linenumber, and text from groups (1,2,3) of the regexp match).
In projects with source files in multiple directories, it is possible to get full path names for file names matched in the suppression file, as long as the build command outputs the names of directories as they are entered into and left again.
For this, specify regexps for the arguments directoryEnterPattern=
and directoryLeavePattern=
.
The directoryEnterPattern=
regexp should return the name of the directory entered into in the first matched group.
The defaults, which are suitable for GNU Make, are these:
directoryEnterPattern="make.*: Entering directory [\"`'](.*)['`\"]"
directoryLeavePattern="make.*: Leaving directory"
(TODO: this step needs to be extended to look for GCC error messages as well, and collect them into a separate logfile, along with the source code filenames involved).
Visual C++¶
These steps are meant to handle compilation using Microsoft compilers.
VC++ 6-141 (aka Visual Studio 2003-2015 and VCExpress9) are supported via calling devenv
.
Msbuild as well as Windows Driver Kit 8 are supported via the MsBuild4
, MsBuild12
, MsBuild14
and MsBuild141
steps.
These steps will take care of setting up a clean compilation environment, parsing the generated output in real time, and delivering as detailed information as possible about the compilation executed.
All of the classes are in buildbot.steps.vstudio
.
The available classes are:
VC6
VC7
VC8
VC9
VC10
VC11
VC12
VC14
VC141
VS2003
VS2005
VS2008
VS2010
VS2012
VS2013
VS2015
VS2017
VCExpress9
MsBuild4
MsBuild12
MsBuild14
MsBuild141
The available constructor arguments are
mode
- The mode defaults to
rebuild
, which means that all existing object files will first be cleaned by the compiler. The alternate values are build
, where only the updated files will be recompiled, and clean
, where the current build files are removed and no compilation occurs. projectfile
- This is a mandatory argument which specifies the project file to be used during the compilation.
config
- This argument defaults to
release
and gives the compiler the configuration to use. installdir
- This is the place where the compiler is installed. The default value is compiler-specific and corresponds to the standard installation location for that compiler.
useenv
- This boolean parameter, defaulting to
False
, instructs the compiler to use its own settings or the ones defined through the environment variables PATH
, INCLUDE
, and LIB
. If either the INCLUDE
or LIB
parameter is defined, this parameter automatically switches to True
. PATH
- This is a list of paths to be added to the
PATH
environment variable. The default value is the one defined in the compiler options. INCLUDE
- This is a list of paths where the compiler will first look for include files, before the default paths defined in the compiler options.
LIB
- This is a list of paths where the compiler will first look for libraries, before the default paths defined in the compiler options.
arch
- This argument is only available with the class VS2005 (VC8).
It gives the target architecture of the built artifact.
It defaults to
x86
and does not apply toMsBuild4
orMsBuild12
. Please seeplatform
below. project
- This gives the specific project to build from within a workspace. It defaults to building all projects. This is useful for building cmake-generated projects.
platform
- This is a mandatory argument for
MsBuild4
andMsBuild12
specifying the target platform such as ‘Win32’, ‘x64’ or ‘Vista Debug’. The last one is an example of driver targets that appear once Windows Driver Kit 8 is installed.
Here is an example on how to drive compilation with Visual Studio 2013:
from buildbot.plugins import steps
f.addStep(
steps.VS2013(projectfile="project.sln", config="release",
arch="x64", mode="build",
INCLUDE=[r'C:\3rd-party\libmagic\include'],
LIB=[r'C:\3rd-party\libmagic\lib-x64']))
Here is a similar example using “MsBuild12”:
from buildbot.plugins import steps
# Build one project in Release mode for Win32
f.addStep(
steps.MsBuild12(projectfile="trunk.sln", config="Release", platform="Win32",
workdir="trunk",
project="tools\\protoc"))
# Build the entire solution in Debug mode for x64
f.addStep(
steps.MsBuild12(projectfile="trunk.sln", config='Debug', platform='x64',
workdir="trunk"))
Cppcheck¶
This step runs cppcheck
, analyses its output, and sets the outcome in Properties.
from buildbot.plugins import steps
f.addStep(steps.Cppcheck(enable=['all'], inconclusive=True))
This class adds the following arguments:
binary
- (Optional, defaults to
cppcheck
) Use this if you need to give the full path to the cppcheck binary or if your binary is called differently. source
- (Optional, defaults to
['.']
) This is the list of paths for the sources to be checked by this step. enable
- (Optional) Use this to give a list of the message classes that should be in cppcheck report. See the cppcheck man page for more information.
inconclusive
- (Optional)
Set this to
True
if you want cppcheck to also report inconclusive results. See the cppcheck man page for more information. extra_args
- (Optional) This is the list of extra arguments to be given to the cppcheck command.
All other arguments are identical to ShellCommand
.
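For example, pointing the step at a specific binary and a subset of sources (the paths are illustrative):
from buildbot.plugins import steps

f.addStep(steps.Cppcheck(binary='/usr/local/bin/cppcheck',
                         source=['src', 'include'],
                         enable=['warning', 'style'],
                         extra_args=['--quiet']))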
Robocopy¶
-
class
buildbot.steps.mswin.
Robocopy
¶
This step runs robocopy
on Windows.
Robocopy is available in versions of Windows starting with Windows Vista and Windows Server 2008. For previous versions of Windows, it’s available as part of the Windows Server 2003 Resource Kit Tools.
from buildbot.plugins import steps, util
f.addStep(
steps.Robocopy(
name='deploy_binaries',
description='Deploying binaries...',
descriptionDone='Deployed binaries.',
source=util.Interpolate('Build\\Bin\\%(prop:configuration)s'),
destination=util.Interpolate('%(prop:deploy_dir)s\\Bin\\%(prop:configuration)s'),
mirror=True
)
)
Available constructor arguments are:
source
- The path to the source directory (mandatory).
destination
- The path to the destination directory (mandatory).
files
- An array of file names or patterns to copy.
recursive
- Copy files and directories recursively (
/E
parameter). mirror
- Mirror the source directory in the destination directory, including removing files that don’t exist anymore (
/MIR
parameter). move
- Delete the source directory after the copy is complete (
/MOVE
parameter). exclude_files
- An array of file names or patterns to exclude from the copy (
/XF
parameter). exclude_dirs
- An array of directory names or patterns to exclude from the copy (
/XD
parameter). custom_opts
- An array of custom parameters to pass directly to the
robocopy
command. verbose
- Whether to output verbose information (
/V /TS /FP
parameters).
Note that parameters /TEE /NP
will always be appended to the command to signify, respectively, to output logging to the console and to not print any percentage progress information for each file.
Test¶
from buildbot.plugins import steps
f.addStep(steps.Test())
This is meant to handle unit tests.
The default command is make test, and the warnOnFailure
flag is set.
The other arguments are identical to ShellCommand
.
TreeSize¶
from buildbot.plugins import steps
f.addStep(steps.TreeSize())
This is a simple command that uses the du tool to measure the size of the code tree.
It puts the size (as a count of 1024-byte blocks, aka ‘KiB’ or ‘kibibytes’) on the step’s status text, and sets a build property named tree-size-KiB
with the same value.
All arguments are identical to ShellCommand
.
PerlModuleTest¶
from buildbot.plugins import steps
f.addStep(steps.PerlModuleTest())
This is a simple command that knows how to run tests of perl modules.
It parses the output to determine the number of tests passed and failed and total number executed, saving the results for later query.
The command is prove --lib lib -r t
, although this can be overridden with the command
argument.
All other arguments are identical to those for ShellCommand
.
MTR (mysql-test-run)¶
The MTR
class is a subclass of Test
.
It is used to run test suites using the mysql-test-run program, as used in MySQL, Drizzle, MariaDB, and MySQL storage engine plugins.
The shell command to run the test suite is specified in the same way as for the Test
class.
The MTR
class will parse the output of running the test suite, and use the count of tests executed so far to provide more accurate completion time estimates.
Any test failures that occur during the test are summarized on the Waterfall Display.
Server error logs are added as additional log files, useful to debug test failures.
Optionally, data about the test run and any test failures can be inserted into a database for further analysis and report generation.
To use this facility, create an instance of twisted.enterprise.adbapi.ConnectionPool
with connections to the database.
The necessary tables can be created automatically by setting autoCreateTables
to True
, or manually using the SQL found in the mtrlogobserver.py source file.
One problem with specifying a database is that each reload of the configuration will get a new instance of ConnectionPool
(even if the connection parameters are the same).
To prevent Buildbot from thinking the builder configuration has changed because of this, use the steps.mtrlogobserver.EqConnectionPool
subclass of ConnectionPool
, which implements an equality operation that avoids this problem.
Example use:
from buildbot.plugins import steps, util
myPool = util.EqConnectionPool("MySQLdb", "host", "buildbot", "password", "db")
myFactory.addStep(steps.MTR(workdir="mysql-test", dbpool=myPool,
command=["perl", "mysql-test-run.pl", "--force"]))
The MTR
step’s arguments are:
textLimit
- Maximum number of test failures to show on the waterfall page (to avoid flooding the page in case of a large number of test failures). Defaults to 5.
testNameLimit
- Maximum length of test names to show unabbreviated in the waterfall page, to avoid excessive column width. Defaults to 16.
parallel
- Value of the --parallel option used for
mysql-test-run.pl
(number of processes used to run the test suite in parallel). Defaults to 4. This is used to determine the number of server error log files to download from the worker. Specifying too high a value does not hurt (nonexistent error logs will be ignored); however, if a --parallel value greater than the default is used, it needs to be specified, or some server error logs will be missing. dbpool
- An instance of
twisted.enterprise.adbapi.ConnectionPool
, orNone
. Defaults toNone
. If specified, results are inserted into the database using theConnectionPool
. autoCreateTables
- Boolean, defaults to
False
. IfTrue
(anddbpool
is specified), the necessary database tables will be created automatically if they do not exist already. Alternatively, the tables can be created manually from the SQL statements found in the mtrlogobserver.py source file. test_type
- Short string that will be inserted into the database in the row for the test run. Defaults to the empty string, but can be specified to identify different types of test runs.
test_info
- Descriptive string that will be inserted into the database in the row for the test run. Defaults to the empty string, but can be specified as a user-readable description of this particular test run.
mtr_subdir
- The subdirectory in which to look for server error log files.
Defaults to
mysql-test
, which is usually correct. Interpolate is supported.
SubunitShellCommand¶
-
class
buildbot.steps.subunit.
SubunitShellCommand
¶
This buildstep is similar to ShellCommand
, except that it runs the log content through a subunit filter to extract test and failure counts.
from buildbot.plugins import steps
f.addStep(steps.SubunitShellCommand(command="make test"))
This runs make test
and filters it through subunit.
The ‘tests’ and ‘test failed’ progress metrics will now accumulate test data from the test run.
If failureOnNoTests
is True
, this step will fail if no test is run.
By default failureOnNoTests
is False.
2.5.10.5. Worker Filesystem Steps¶
Here are some buildsteps for manipulating the worker’s filesystem.
FileExists¶
This step will assert that a given file exists, failing if it does not. The filename can be specified with a property.
from buildbot.plugins import steps
f.addStep(steps.FileExists(file='test_data'))
This step requires worker version 0.8.4 or later.
CopyDirectory¶
This command copies a directory on the worker.
from buildbot.plugins import steps
f.addStep(steps.CopyDirectory(src="build/data", dest="tmp/data"))
This step requires worker version 0.8.5 or later.
The CopyDirectory step takes the following arguments:
timeout
- if the copy command fails to produce any output for this many seconds, it is assumed to be locked up and will be killed.
This defaults to 120 seconds.
Pass
None
to disable. maxTime
- if the command takes longer than this many seconds, it will be killed. This is disabled by default.
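For example, a copy that is killed if it produces no output for five minutes or takes more than half an hour overall:
from buildbot.plugins import steps

f.addStep(steps.CopyDirectory(src="build/data", dest="tmp/data",
                              timeout=300,    # no-output timeout, in seconds
                              maxTime=1800))  # absolute limit, in seconds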
RemoveDirectory¶
This command recursively deletes a directory on the worker.
from buildbot.plugins import steps
f.addStep(steps.RemoveDirectory(dir="build/build"))
This step requires worker version 0.8.4 or later.
MakeDirectory¶
This command creates a directory on the worker.
from buildbot.plugins import steps
f.addStep(steps.MakeDirectory(dir="build/build"))
This step requires worker version 0.8.5 or later.
2.5.10.6. Python BuildSteps¶
Here are some BuildStep
s that are specifically useful for projects
implemented in Python.
BuildEPYDoc¶
-
class
buildbot.steps.python.
BuildEPYDoc
¶
epydoc is a tool for generating
API documentation for Python modules from their docstrings.
It reads all the .py
files from your source tree, processes the docstrings therein, and creates a large tree of .html
files (or a single .pdf
file).
The BuildEPYDoc
step will run epydoc to produce this API documentation, and will count the errors and warnings from its output.
You must supply the command line to be used.
The default is make epydocs
, which assumes that your project has a Makefile
with an epydocs target.
You might wish to use something like epydoc -o apiref source/PKGNAME
instead.
You might also want to add the --pdf option to generate a PDF file instead of a large tree of HTML files.
The API docs are generated in-place in the build tree (under the workdir, in the subdirectory controlled by the -o argument).
To make them useful, you will probably have to copy them to somewhere they can be read.
For example, if you have a server with the nginx web server configured, you can place the generated docs in its public folder with a command like rsync -ad apiref/ dev.example.com:~/usr/share/nginx/www/current-apiref/
.
You might instead want to bundle them into a tarball and publish it in the same place where the generated install tarball is placed.
from buildbot.plugins import steps
f.addStep(steps.BuildEPYDoc(command=["epydoc", "-o", "apiref", "source/mypkg"]))
PyFlakes¶
-
class
buildbot.steps.python.
PyFlakes
¶
PyFlakes is a tool to perform basic static analysis of Python code to look for simple errors, like missing imports and references of undefined names. It is like a fast and simple form of the C lint program. Other tools (like pychecker) provide more detailed results but take longer to run.
The PyFlakes
step will run pyflakes and count the various kinds of errors and warnings it detects.
You must supply the command line to be used.
The default is make pyflakes
, which assumes you have a top-level Makefile
with a pyflakes
target.
You might want to use something like pyflakes .
or pyflakes src
.
from buildbot.plugins import steps
f.addStep(steps.PyFlakes(command=["pyflakes", "src"]))
Sphinx¶
-
class
buildbot.steps.python.
Sphinx
¶
Sphinx is the Python Documentation Generator. It uses reStructuredText as its input format.
The Sphinx
step will run sphinx-build or any other program specified in its sphinx
argument and count the various warnings and errors it detects.
from buildbot.plugins import steps
f.addStep(steps.Sphinx(sphinx_builddir="_build"))
This step takes the following arguments:
sphinx_builddir
- (required) Name of the directory where the documentation will be generated.
sphinx_sourcedir
- (optional, defaulting to
.
). Name of the directory where the conf.py
file will be found. sphinx_builder
- (optional) Indicates the builder to use.
sphinx
- (optional, defaulting to sphinx-build) Indicates the executable to run.
tags
- (optional) List of
tags
to pass to sphinx-build. defines
- (optional) Dictionary of defines to overwrite values of the
conf.py
file. mode
- (optional) String, one of
full
orincremental
(the default). If set tofull
, indicates to Sphinx to rebuild everything without re-using the previous build results.
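Putting a few of these arguments together, a sketch that builds HTML documentation from a docs/ source directory and forces a full rebuild (the paths are illustrative):
from buildbot.plugins import steps

f.addStep(steps.Sphinx(sphinx_builddir="_build/html",
                       sphinx_sourcedir="docs",
                       sphinx_builder="html",
                       mode="full"))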
PyLint¶
Similarly, the PyLint
step will run pylint and analyze the results.
You must supply the command line to be used. There is no default.
from buildbot.plugins import steps
f.addStep(steps.PyLint(command=["pylint", "src"]))
Trial¶
-
class
buildbot.steps.python_twisted.
Trial
¶
This step runs a unit test suite using trial, a unittest-like testing framework that is a component of Twisted Python. Trial is used to implement Twisted’s own unit tests, and is the unittest-framework of choice for many projects that use Twisted internally.
Projects that use trial typically have all their test cases in a ‘test’ subdirectory of their top-level library directory.
For example, for a package petmail
, the tests might be in petmail/test/test_*.py
.
More complicated packages (like Twisted itself) may have multiple test directories, like twisted/test/test_*.py
for the core functionality and twisted/mail/test/test_*.py
for the email-specific tests.
To run trial tests manually, you run the trial executable and tell it where the test cases are located.
The most common way of doing this is with a module name.
For petmail, this might look like trial petmail.test, which would locate all the test_*.py
files under petmail/test/
, running every test case it could find in them.
Unlike the unittest.py
that comes with Python, it is not necessary to run the test_foo.py
as a script; you always let trial do the importing and running.
The step’s tests
parameter controls which tests trial will run: it can be a string or a list of strings.
To find the test cases, the Python search path must allow something like import petmail.test
to work.
For packages that don’t use a separate top-level lib
directory, PYTHONPATH=.
will work, and will use the test cases (and the code they are testing) in-place.
PYTHONPATH=build/lib
or PYTHONPATH=build/lib.somearch
are also useful when you do a python setup.py build
step first.
The testpath
attribute of this class controls what PYTHONPATH
is set to before running trial.
Trial has the ability, through the --testmodule
flag, to run only the set of test cases named by special test-case-name
tags in source files.
We can get the list of changed source files from our parent Build and provide them to trial, thus running the minimal set of test cases needed to cover the Changes.
This is useful for quick builds, especially in trees with a lot of test cases.
The testChanges
parameter controls this feature: if set, it will override tests
.
The trial executable itself is typically just trial, and is usually found in the shell search path.
It can be overridden with the trial
parameter.
This is useful for Twisted’s own unittests, which want to use the copy of bin/trial that comes with the sources.
To influence the version of Python being used for the tests, or to add flags to the command, set the python
parameter.
This can be a string (like python2.2
) or a list (like ['python2.3', '-Wall']
).
Trial creates and switches into a directory named _trial_temp/
before running the tests, and sends the twisted log (which includes all exceptions) to a file named test.log
.
This file will be pulled up to the master where it can be seen as part of the status output.
from buildbot.plugins import steps
f.addStep(steps.Trial(tests='petmail.test'))
Trial has the ability to run tests on several workers in parallel (beginning with Twisted 12.3.0).
Set jobs
to the number of workers you want to run.
Note that running trial in this way will create multiple log files (named test.N.log
, err.N.log
and out.N.log
starting with N=0
) rather than a single test.log
.
This step takes the following arguments:
jobs
- (optional) Number of worker-resident trial workers to use when running the tests. Defaults to 1 worker. Only works with Twisted>=12.3.0.
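For example, to spread the petmail test suite across four parallel trial workers:
from buildbot.plugins import steps

f.addStep(steps.Trial(tests='petmail.test', jobs=4))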
RemovePYCs¶
-
class
buildbot.steps.python_twisted.
RemovePYCs
¶
This is a simple built-in step that will remove .pyc
files from the workdir.
This is useful in builds that update their source (and thus do not automatically delete .pyc
files) but where some part of the build process is dynamically searching for Python modules.
Notably, trial has a bad habit of finding old test modules.
from buildbot.plugins import steps
f.addStep(steps.RemovePYCs())
2.5.10.7. Transferring Files¶
-
class
buildbot.steps.transfer.
FileUpload
¶
-
class
buildbot.steps.transfer.
FileDownload
¶
Most of the work involved in a build will take place on the worker.
But occasionally it is useful to do some work on the buildmaster side.
The most basic way to involve the buildmaster is simply to move a file from the worker to the master, or vice versa.
There are a pair of steps named FileUpload
and FileDownload
to provide this functionality.
FileUpload
moves a file up to the master, while FileDownload
moves a file down from the master.
As an example, let’s assume that there is a step which produces an HTML file within the source tree that contains some sort of generated project documentation.
And let’s assume that we run an nginx web server on the buildmaster host for serving static files.
We want to move this file to the buildmaster, into a /usr/share/nginx/www/
directory, so it can be visible to developers.
This file will wind up in the worker-side working directory under the name docs/reference.html
.
We want to put it into the master-side /usr/share/nginx/www/ref.html
, and add a link to the uploaded file to the HTML status.
from buildbot.plugins import steps
f.addStep(steps.ShellCommand(command=["make", "docs"]))
f.addStep(steps.FileUpload(workersrc="docs/reference.html",
masterdest="/usr/share/nginx/www/ref.html",
url="http://somesite/~buildbot/ref.html"))
The masterdest=
argument will be passed to os.path.expanduser
, so things like ~
will be expanded properly.
Non-absolute paths will be interpreted relative to the buildmaster’s base directory.
Likewise, the workersrc=
argument will be expanded and interpreted relative to the builder’s working directory.
Note
The copied file will have the same permissions on the master as on the worker; see the mode=
parameter to set it differently.
To move a file from the master to the worker, use the FileDownload
command.
For example, let’s assume that some step requires a configuration file that, for whatever reason, could not be recorded in the source code repository or generated on the worker side:
from buildbot.plugins import steps
f.addStep(steps.FileDownload(mastersrc="~/todays_build_config.txt",
workerdest="build_config.txt"))
f.addStep(steps.ShellCommand(command=["make", "config"]))
Like FileUpload
, the mastersrc=
argument is interpreted relative to the buildmaster’s base directory, and the workerdest=
argument is relative to the builder’s working directory.
If the worker is running in ~worker
, and the builder’s builddir
is something like tests-i386
, then the workdir is going to be ~worker/tests-i386/build
, and a workerdest=
of foo/bar.html
will get put in ~worker/tests-i386/build/foo/bar.html
.
Both of these commands will create any missing intervening directories.
Other Parameters¶
The maxsize=
argument lets you set a maximum size for the file to be transferred.
This may help to avoid surprises: transferring a 100MB coredump when you were expecting to move a 10kB status file might take an awfully long time.
The blocksize=
argument controls how the file is sent over the network: larger blocksizes are slightly more efficient but also consume more memory on each end, and there is a hard-coded limit of about 640kB.
The mode=
argument allows you to control the access permissions of the target file, traditionally expressed as an octal integer.
The most common value is probably 0755
, which sets the executable (x) bit on the file (useful for shell scripts and the like).
The default value for mode=
is None
, which means the permission bits will default to whatever the umask of the writing process is.
The default umask tends to be fairly restrictive, but at least on the worker you can make it less restrictive with a --umask
command-line option at creation time (Worker Options).
The keepstamp=
argument is a boolean that, when True
, forces the modified and accessed time of the destination file to match the times of the source file.
When False
(the default), the modified and accessed times of the destination file are set to the current time on the buildmaster.
The url=
argument allows you to specify a URL that will be displayed in the HTML status.
The title of the url will be the name of the item transferred (directory for DirectoryUpload
or file for FileUpload
).
This allows the user to add a link to the uploaded item if it is uploaded to an accessible place.
For FileUpload
, the urlText=
argument allows you to specify the url title that will be displayed in the web UI.
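Putting several of these parameters together, here is a sketch of an upload that preserves timestamps, makes the result executable, and labels the link (the paths and URL are illustrative):
from buildbot.plugins import steps

f.addStep(steps.FileUpload(workersrc="dist/installer.sh",
                           masterdest="/usr/share/nginx/www/installer.sh",
                           mode=0o755,      # make it executable on the master
                           keepstamp=True,  # preserve the worker-side timestamps
                           url="http://somesite/~buildbot/installer.sh",
                           urlText="installer"))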
Transferring Directories¶
-
class
buildbot.steps.transfer.
DirectoryUpload
¶
To transfer complete directories from the worker to the master, there is a BuildStep
named DirectoryUpload
.
It works like FileUpload
, just for directories.
However it does not support the maxsize
, blocksize
and mode
arguments.
As an example, let’s assume a generated project documentation, which consists of many files (like the output of doxygen or epydoc).
And let’s assume that we run an nginx web server on the buildmaster host for serving static files.
We want to move the entire documentation to the buildmaster, into a /usr/share/nginx/www/docs
directory, and add a link to the uploaded documentation on the HTML status page.
On the worker-side the directory can be found under docs
:
from buildbot.plugins import steps
f.addStep(steps.ShellCommand(command=["make", "docs"]))
f.addStep(steps.DirectoryUpload(workersrc="docs",
masterdest="/usr/share/nginx/www/docs",
url="~buildbot/docs"))
The DirectoryUpload
step will create all necessary directories and transfer empty directories, too.
The maxsize
and blocksize
parameters are the same as for FileUpload
, although note that the size of the transferred data is implementation-dependent, and probably much larger than you expect due to the encoding used (currently tar).
The optional compress
argument can be given as 'gz'
or 'bz2'
to compress the datastream.
Note
The permissions on the copied files will be the same on the master as originally on the worker, see option buildbot-worker create-worker --umask
to change the default one.
Transferring Multiple Files At Once¶
-
class
buildbot.steps.transfer.
MultipleFileUpload
¶
In addition to the FileUpload
and DirectoryUpload
steps there is the MultipleFileUpload
step for uploading a bunch of files (and directories) in a single BuildStep
.
The step supports all arguments that are supported by FileUpload
and DirectoryUpload
, but instead of the single workersrc
parameter it takes a (plural) workersrcs
parameter.
This parameter should be either a list, something that can be rendered as a list, or a string which will be converted to a list.
Additionally, it supports the glob
parameter. If this parameter is set to True
, all arguments in workersrcs
will be parsed through glob
and the results will be uploaded to masterdest
:
from buildbot.plugins import steps
f.addStep(steps.ShellCommand(command=["make", "test"]))
f.addStep(steps.ShellCommand(command=["make", "docs"]))
f.addStep(steps.MultipleFileUpload(workersrcs=["docs", "test-results.html"],
masterdest="/usr/share/nginx/www/",
url="~buildbot"))
The url=
parameter can be used to specify a link to be displayed in the HTML status of the step.
The way URLs are added to the step can be customized by extending the MultipleFileUpload
class.
The allUploadsDone method is called after all files have been uploaded and sets the URL.
The uploadDone method is called once for each uploaded file and can be used to create file-specific links.
import os
from buildbot.plugins import steps
class CustomFileUpload(steps.MultipleFileUpload):
    linkTypes = ('.html', '.txt')

    def linkFile(self, basename):
        name, ext = os.path.splitext(basename)
        return ext in self.linkTypes

    def uploadDone(self, result, source, masterdest):
        if self.url:
            basename = os.path.basename(source)
            if self.linkFile(basename):
                self.addURL(self.url + '/' + basename, basename)

    def allUploadsDone(self, result, sources, masterdest):
        if self.url:
            notLinked = [src for src in sources if not self.linkFile(src)]
            numFiles = len(notLinked)
            if numFiles:
                self.addURL(self.url, '... %d more' % numFiles)
2.5.10.8. Transferring Strings¶
-
class
buildbot.steps.transfer.
StringDownload
¶
-
class
buildbot.steps.transfer.
JSONStringDownload
¶
-
class
buildbot.steps.transfer.
JSONPropertiesDownload
¶
Sometimes it is useful to transfer a calculated value from the master to the worker. Instead of having to create a temporary file and then use FileDownload, you can use one of the string download steps.
from buildbot.plugins import steps, util
f.addStep(steps.StringDownload(util.Interpolate("%(src::branch)s-%(prop:got_revision)s\n"),
workerdest="buildid.txt"))
StringDownload
works just like FileDownload
except it takes a single argument, s
, representing the string to download instead of a mastersrc
argument.
from buildbot.plugins import steps, util

buildinfo = {
    'branch': util.Property('branch'),
    'got_revision': util.Property('got_revision')
}
f.addStep(steps.JSONStringDownload(buildinfo, workerdest="buildinfo.json"))
JSONStringDownload
is similar, except it takes an o
argument, which must be JSON serializable, and transfers that as a JSON-encoded string to the worker.
from buildbot.plugins import steps
f.addStep(steps.JSONPropertiesDownload(workerdest="build-properties.json"))
JSONPropertiesDownload
transfers a json-encoded string that represents a dictionary where properties maps to a dictionary of build property name
to property value
; and sourcestamp
represents the build’s sourcestamp.
2.5.10.9. Running Commands on the Master¶
-
class
buildbot.steps.master.
MasterShellCommand
¶
Occasionally, it is useful to execute some task on the master, for example to create a directory, deploy a build result, or trigger some other centralized processing.
This is possible, in a limited fashion, with the MasterShellCommand
step.
This step operates similarly to a regular ShellCommand
, but executes on the master, instead of the worker.
To be clear, the enclosing Build
object must still have a worker object, just as for any other step – only, in this step, the worker does not do anything.
In this example, the step renames a tarball based on the day of the week.
from buildbot.plugins import steps
f.addStep(steps.FileUpload(workersrc="widgetsoft.tar.gz",
masterdest="/var/buildoutputs/widgetsoft-new.tar.gz"))
f.addStep(steps.MasterShellCommand(
command="mv widgetsoft-new.tar.gz widgetsoft-`date +%a`.tar.gz",
workdir="/var/buildoutputs"))
Note
By default, this step passes a copy of the buildmaster’s environment variables to the subprocess.
To pass an explicit environment instead, add an env={..}
argument.
Environment variables constructed using the env
argument support expansion so that if you just want to prepend /home/buildbot/bin
to the PATH
environment variable, you can do it by putting the value ${PATH}
at the end of the value like in the example below.
Variables that don’t exist on the master will be replaced by ""
.
from buildbot.plugins import steps
f.addStep(steps.MasterShellCommand(
command=["make", "www"],
env={'PATH': ["/home/buildbot/bin",
"${PATH}"]}))
Note that environment values must be strings (or lists that are turned into strings).
In particular, numeric properties such as buildnumber
must be substituted using Interpolate.
workdir
- (optional) The directory from which the command will be run.
interruptSignal
- (optional) Signal to use to end the process, if the step is interrupted.
LogRenderable¶
-
class
buildbot.steps.master.
LogRenderable
¶
This build step takes content which can be renderable and logs it in a pretty-printed format. It can be useful for debugging properties during a build.
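A minimal sketch, logging a couple of interpolated properties (the property names are illustrative):
from buildbot.plugins import steps, util

f.addStep(steps.LogRenderable(util.Interpolate(
    "building branch %(prop:branch)s at revision %(prop:got_revision)s")))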
2.5.10.10. Setting Properties¶
These steps set properties on the master based on information from the worker.
SetProperty¶
-
class
buildbot.steps.master.
SetProperty
¶
SetProperty
takes two arguments, property
and value
, where the value
is to be assigned to the property
key.
It is usually called with the value
argument being specified as an Interpolate object which allows the value to be built from other property values:
from buildbot.plugins import steps, util
f.addStep(
steps.SetProperty(
property="SomeProperty",
value=util.Interpolate("sch=%(prop:scheduler)s, worker=%(prop:workername)s")
)
)
SetProperties¶
-
class
buildbot.steps.master.
SetProperties
¶
SetProperties
takes a dictionary to be turned into build properties.
It is similar to SetProperty
, and meant to be used with a Renderer function or a dictionary of Interpolate objects which allows the value to be built from other property values:
"""Example borrowed from Julia's master.cfg
https://github.com/staticfloat/julia-buildbot (MIT)"""
from buildbot.plugins import *
@util.renderer
def compute_artifact_filename(props):
    # Get the output of the `make print-BINARYDIST_FILENAME` step
    reported_filename = props.getProperty('artifact_filename')

    # First, see if we got a BINARYDIST_FILENAME output
    prefix = "BINARYDIST_FILENAME="
    if reported_filename[:len(prefix)] == prefix:
        local_filename = util.Interpolate(reported_filename[len(prefix):].strip() + "%(prop:os_pkg_ext)s")
    else:
        # If not, use non-sf/consistent_distnames naming
        # (is_mac and is_winnt are helpers defined elsewhere in that master.cfg)
        if is_mac(props):
            local_filename = util.Interpolate("contrib/mac/app/Julia-%(prop:version)s-%(prop:shortcommit)s.%(prop:os_pkg_ext)s")
        elif is_winnt(props):
            local_filename = util.Interpolate("julia-%(prop:version)s-%(prop:tar_arch)s.%(prop:os_pkg_ext)s")
        else:
            local_filename = util.Interpolate("julia-%(prop:shortcommit)s-Linux-%(prop:tar_arch)s.%(prop:os_pkg_ext)s")

    # upload_filename always follows sf/consistent_distname rules
    upload_filename = util.Interpolate("julia-%(prop:shortcommit)s-%(prop:os_name)s%(prop:bits)s.%(prop:os_pkg_ext)s")

    return {
        "local_filename": local_filename,
        "upload_filename": upload_filename
    }
f1.addStep(steps.SetProperties(properties=compute_artifact_filename))
SetPropertyFromCommand¶
-
class
buildbot.steps.shell.
SetPropertyFromCommand
¶
This buildstep is similar to ShellCommand
, except that it captures the output of the command into a property.
It is usually used like this:
from buildbot.plugins import steps
f.addStep(steps.SetPropertyFromCommand(command="uname -a", property="uname"))
This runs uname -a
and captures its stdout, stripped of leading and trailing whitespace, in the property uname
.
To avoid stripping, add strip=False
.
The property
argument can be specified as an Interpolate object, allowing the property name to be built from other property values.
Passing includeStdout=False
(default True
) stops capture from stdout.
Passing includeStderr=True
(default False
) allows capture from stderr.
The more advanced usage allows you to specify a function to extract properties from the command output.
Here you can use regular expressions, string interpolation, or whatever you would like.
In this form, extract_fn
should be passed, and not Property
.
The extract_fn
function is called with three arguments: the exit status of the command, its standard output as a string, and its standard error as a string.
It should return a dictionary containing all new properties.
Note that passing in extract_fn
will set includeStderr
to True
.
from buildbot.plugins import steps

def glob2list(rc, stdout, stderr):
    # one property, 'jpgs', built from the command's stdout
    jpgs = [l.strip() for l in stdout.split('\n')]
    return {'jpgs': jpgs}

f.addStep(steps.SetPropertyFromCommand(command="ls -1 *.jpg", extract_fn=glob2list))
Note that any ordering relationship of the contents of stdout and stderr is lost. For example, given:
f.addStep(steps.SetPropertyFromCommand(
    command="echo output1; echo error >&2; echo output2",
    extract_fn=my_extract))
Then my_extract
will see stdout="output1\noutput2\n"
and stderr="error\n"
.
Avoid using the extract_fn
form of this step with commands that produce a great deal of output, as the output is buffered in memory until complete.
SetPropertiesFromEnv¶
-
class
buildbot.steps.worker.
SetPropertiesFromEnv
¶
Buildbot workers (later than version 0.8.3) provide their environment variables to the master on connect.
These can be copied into Buildbot properties with the SetPropertiesFromEnv
step.
Pass a variable or list of variables in the variables
parameter, then simply use the values as properties in a later step.
Note that on Windows, environment variables are case-insensitive, but Buildbot property names are case sensitive.
The property will have exactly the variable name you specify, even if the underlying environment variable is capitalized differently.
If, for example, you use variables=['Tmp']
, the result will be a property named Tmp
, even though the environment variable is displayed as TMP
in the Windows GUI.
from buildbot.plugins import steps, util
f.addStep(steps.SetPropertiesFromEnv(variables=["SOME_JAVA_LIB_HOME", "JAVAC"]))
f.addStep(steps.Compile(command=[util.Interpolate("%(prop:JAVAC)s"),
"-cp",
util.Interpolate("%(prop:SOME_JAVA_LIB_HOME)s")]))
Note that this step requires that the worker be at least version 0.8.3. For previous versions, no environment variables are available (the worker environment will appear to be empty).
2.5.10.11. Triggering Schedulers¶
-
class
buildbot.steps.trigger.
Trigger
¶
The counterpart to the Triggerable
scheduler is the Trigger
build step:
from buildbot.plugins import steps
f.addStep(steps.Trigger(schedulerNames=['build-prep'],
waitForFinish=True,
updateSourceStamp=True,
set_properties={ 'quick' : False }))
The SourceStamps to use for the triggered build are controlled by the arguments updateSourceStamp
, alwaysUseLatest
, and sourceStamps
.
Hyperlinks are added to the build detail web pages for each triggered build.
schedulerNames
lists the
Triggerable
schedulers that should be triggered when this step is executed. Note
It is possible, but not advisable, to create a cycle where a build continually triggers itself, because the schedulers are specified by name.
unimportantSchedulerNames
- When
waitForFinish
isTrue
, all schedulers in this list will not cause the trigger step to fail. unimportantSchedulerNames must be a subset of schedulerNames IfwaitForFinish
isFalse
, unimportantSchedulerNames will simply be ignored. waitForFinish
If
True
, the step will not finish until all of the builds from the triggered schedulers have finished.If
False
(the default) or not given, then the buildstep succeeds immediately after triggering the schedulers.updateSourceStamp
If
True
(the default), then step updates the source stamps given to theTriggerable
schedulers to includegot_revision
(the revision actually used in this build) asrevision
(the revision to use in the triggered builds). This is useful to ensure that all of the builds use exactly the same source stamps, even if otherChange
s have occurred while the build was running.If
False
(and neither of the other arguments are specified), then the exact same SourceStamps are used.alwaysUseLatest
- If
True
, then no SourceStamps are given, corresponding to using the latest revisions of the repositories specified in the Source steps. This is useful if the triggered builds use a different source repository. sourceStamps
- Accepts a list of dictionaries containing the keys
branch
,revision
,repository
,project
, and optionallypatch_level
,patch_body
,patch_subdir
,patch_author
andpatch_comment
and creates the corresponding SourceStamps. If only one sourceStamp has to be specified then the argumentsourceStamp
can be used for a dictionary containing the keys mentioned above. The argumentsupdateSourceStamp
,alwaysUseLatest
, andsourceStamp
can be specified using properties. set_properties
allows control of the properties that are passed to the triggered scheduler. The parameter takes a dictionary mapping property names to values. You may use Interpolate here to dynamically construct new property values. For the simple case of copying a property, this might look like:
set_properties={"my_prop1" : Property("my_prop1"), "my_prop2" : Property("my_prop2")}
where
Property
is an instance ofbuildbot.process.properties.Property
Note
The
copy_properties
parameter, given a list of properties to copy into the new build request, has been deprecated in favor of explicit use ofset_properties
.
Dynamic Trigger¶
Sometimes it is desirable to select which scheduler to trigger, and which properties to set dynamically, at the time of the build.
For this purpose, the Trigger step supports a method that you can customize in order to override the statically defined schedulerNames
, set_properties
and optionally unimportant
.
-
buildbot.steps.trigger.Trigger.
getSchedulersAndProperties
()¶ Returns: a list of dictionaries containing the keys ‘sched_name’, ‘props_to_set’ and ‘unimportant’, optionally via a Deferred. This method returns a list of dictionaries describing which scheduler to trigger, with which properties, and whether the scheduler is unimportant. The old-style list of tuples is still supported, in which case unimportant is considered
False
. The properties should already be rendered (ie, concrete value, not objects wrapped byInterpolate
orProperty
). Since this function happens at build-time, the property values are available from the step and can be used to decide what schedulers or properties to use.With this method, you can also trigger the same scheduler multiple times with different set of properties. The sourcestamp configuration is however the same for each triggered build request.
2.5.10.13. Debian Build Steps¶
DebPbuilder¶
The DebPbuilder
step builds Debian packages within a chroot built by pbuilder.
It populates the chroot with a basic system and the packages listed as build requirements.
The type of the chroot to build is specified with the architecture
, distribution
and mirror
parameters.
To use pbuilder, your Buildbot user must have the right to run pbuilder as root using sudo.
from buildbot.plugins import steps
f.addStep(steps.DebPbuilder())
The step takes the following parameters
architecture
- Architecture to build the chroot for.
distribution
- Name, or nickname, of the distribution. Defaults to ‘stable’.
basetgz
- Path of the basetgz to use for building.
mirror
- URL of the mirror used to download the packages from.
extrapackages
- List of packages to install in addition to the base system.
keyring
- Path to a gpg keyring to verify the downloaded packages. This is necessary if you build for a foreign distribution.
components
- Repos to activate for chroot building.
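For example, building inside a chroot for a specific distribution and mirror (the values are illustrative):
from buildbot.plugins import steps

f.addStep(steps.DebPbuilder(architecture="amd64",
                            distribution="stable",
                            mirror="http://deb.debian.org/debian/",
                            extrapackages=["ccache"]))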
DebCowbuilder¶
The DebCowbuilder
step is a subclass of DebPbuilder
, which uses cowbuilder instead of pbuilder.
DebLintian¶
The DebLintian
step checks a built .deb for bugs and policy violations.
The package or changes file to test is specified in fileloc.
from buildbot.plugins import steps, util
f.addStep(steps.DebLintian(fileloc=util.Interpolate("%(prop:deb-changes)s")))
2.5.10.14. Miscellaneous BuildSteps¶
A number of steps do not fall into any particular category.
HLint¶
The HLint
step runs Twisted Lore, a lint-like checker over a set of .xhtml
files.
Any deviations from recommended style are flagged and put in the output log.
The step looks at the list of changes in the build to determine which files to check - it does not check all files.
It specifically excludes any .xhtml
files in the top-level sandbox/
directory.
The step takes a single, optional, parameter: python
.
This specifies the Python executable to use to run Lore.
from buildbot.plugins import steps
f.addStep(steps.HLint())
MaxQ¶
MaxQ (http://maxq.tigris.org/) is a web testing tool that allows you to record HTTP sessions and play them back.
The MaxQ
step runs this framework.
from buildbot.plugins import steps
f.addStep(steps.MaxQ(testdir='tests/'))
The single argument, testdir
, specifies where the tests should be run.
This directory will be passed to the run_maxq.py
command, and the results analyzed.
HTTP Requests¶
Using the HTTPStep
step, it is possible to perform HTTP requests in order to notify another REST service of the progress of the build.
Note
This step requires the txrequests and requests Python libraries.
The parameters are the following:
url
- (mandatory) The URL where to send the request
method
- The HTTP method to use (one of
POST
, GET
, PUT
, DELETE
, HEAD
or OPTIONS
), defaults to
POST
- Dictionary of URL parameters to append to the URL.
data
- The body to attach to the request. If a dictionary is provided, form-encoding will take place.
headers
- Dictionary of headers to send.
other params
Any other keywords supported by the
requests
API can be passed to this step. Note
The entire Buildbot master process shares a single Requests
Session
object. This has the advantage of supporting connection re-use and other HTTP/1.1 features. However, it also means that any cookies or other state changed by one step will be visible to other steps, causing unexpected results. This behavior may change in future versions.
When the method is known in advance, a class with the name of the method can also be used. In this case, it is not necessary to specify the method.
Example:
from buildbot.plugins import steps, util
f.addStep(steps.POST('http://myRESTService.example.com/builds',
data = {
'builder': util.Property('buildername'),
'buildnumber': util.Property('buildnumber'),
'workername': util.Property('workername'),
'revision': util.Property('got_revision')
}))