3.7. Database¶
Buildbot stores most of its state in a database. This section describes the database connector classes, which allow other parts of Buildbot to access the database. It also describes how to modify the database schema and the connector classes themselves.
3.7.1. Database Overview¶
All access to the Buildbot database is mediated by database connector classes. These classes provide a functional, asynchronous interface to other parts of Buildbot, and encapsulate the database-specific details in a single location in the codebase.
The connector API, defined below, is a stable API in Buildbot, and can be called from any other component. Given a master master, the root of the database connectors is available at master.db, so, for example, the state connector’s getState method is master.db.state.getState.
The connectors all use SQLAlchemy Core to achieve (almost) database-independent operation. Note that the SQLAlchemy ORM is not used in Buildbot. Database queries are carried out in threads, and report their results back to the main thread via Twisted Deferreds.
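As a hedged sketch of calling a connector method from master-side code, the fragment below assumes the state connector exposes a getState(objectid, name, default) signature and that the caller has an objectid attribute; check both against the actual API:

from twisted.internet import defer

@defer.inlineCallbacks
def readLastProcessed(self):
    # assumed signature: getState(objectid, name, default); the result
    # arrives asynchronously via a Deferred, so we yield on it
    last = yield self.master.db.state.getState(self.objectid, 'last_processed', 0)
    defer.returnValue(last)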
3.7.2. Schema¶
The database schema is maintained with SQLAlchemy-Migrate. This package handles the details of upgrading users between different schema versions.
The schema itself is considered an implementation detail, and may change significantly from version to version. Users should rely on the API (below), rather than performing queries against the database itself.
3.7.3. Identifier¶
Restrictions on many string fields in the database are referred to as the Identifier concept.
An “identifier” is a nonempty unicode string of limited length, containing only UTF-8 alphanumeric characters along with - (dash) and _ (underscore), and not beginning with a digit. Wherever an identifier is used, the documentation will give the maximum length in characters. The function buildbot.util.identifiers.isIdentifier is useful to verify a well-formed identifier.
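As an illustration of these rules only (not the actual buildbot.util.identifiers implementation), a check could look like this:

import re

# letters, digits, dash and underscore, not starting with a digit
_ident_re = re.compile(r'^(?!\d)[\w-]+$', re.UNICODE)

def is_identifier(max_length, obj):
    return (isinstance(obj, str)
            and len(obj) <= max_length
            and _ident_re.match(obj) is not None)

is_identifier(20, 'release-2_4')   # True
is_identifier(20, '2nd-builder')   # False: begins with a digit
is_identifier(5, 'too-long-name')  # False: exceeds the maximum length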
3.7.4. Writing Database Connector Methods¶
The information above is intended for developers working on the rest of Buildbot, and treating the database layer as an abstraction. The remainder of this section describes the internals of the database implementation, and is intended for developers modifying the schema or adding new methods to the database layer.
Warning
It’s difficult to change the database schema significantly after it has been released, and very disruptive to users to change the database API. Consider very carefully the future-proofing of any changes here!
3.7.4.1. The DB Connector and Components¶
class buildbot.db.connector.DBConnector¶
The root of the database connectors, master.db, is a DBConnector instance. Its main purpose is to hold a reference to each of the connector components, but it also handles timed cleanup tasks.
If you are adding a new connector component, import its module and create an instance of it in this class’s constructor.
class buildbot.db.base.DBConnectorComponent¶
This is the base class for connector components. There should be no need to override the constructor defined by this base class.
db¶
A reference to the DBConnector, so that connector components can use, e.g., self.db.pool or self.db.model. In the unusual case that a connector component needs access to the master, the easiest path is self.db.master.
checkLength(col, value)¶
For use by subclasses to check that value will fit in col, where col is a table column from the model. The check is skipped for database engines that either report this error themselves (Postgres) or do not enforce maximum-length restrictions (SQLite).
findSomethingId(self, tbl, whereclause, insert_values, _race_hook=None, autoCreate=True)¶
Find (using whereclause) or add (using insert_values) a row in tbl, and return the resulting ID. If autoCreate == False, the row will not be inserted automatically.
hashColumns(*args)¶
Hash the given values in a consistent manner: None is represented as \xf5, an invalid unicode byte; strings are converted to UTF-8; and integers are represented by their decimal expansion. The values are then joined by '\0' and hashed with sha1.
doBatch(batch, batch_n=500)¶
Returns an iterator that splits the given items into batches, so that too many values are not pushed into a single request. SQLite in particular has a limit of 999 arguments per request.
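As a hedged sketch of how these helpers fit together, a find-or-create method on a connector component might look like the following; the tags table and its name and name_hash columns are assumptions made here for illustration only:

from buildbot.db import base

class TagsConnectorComponent(base.DBConnectorComponent):

    def findTagId(self, name):
        # hash the name consistently so lookups can use an indexed column
        tbl = self.db.model.tags
        name_hash = self.hashColumns(name)
        # find an existing row, or insert one, and return its id
        return self.findSomethingId(
            tbl=tbl,
            whereclause=(tbl.c.name_hash == name_hash),
            insert_values=dict(name=name, name_hash=name_hash))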
3.7.4.2. Direct Database Access¶
The connectors all use SQLAlchemy Core as a wrapper around database client drivers. Unfortunately, SQLAlchemy is a synchronous library, so some extra work is required to use it in an asynchronous context like Buildbot. This is accomplished by deferring all database operations to threads, and returning a Deferred. The Pool class takes care of the details.
A connector method should look like this:
def myMethod(self, arg1, arg2):
    def thd(conn):
        q = ...  # construct a query
        for row in conn.execute(q):
            ...  # do something with the results
        return ...  # return an interesting value
    return self.db.pool.do(thd)
Picking that apart, the body of the method defines a function named thd, taking one argument, a Connection object. It then calls self.db.pool.do, passing the thd function. This function is called in a thread, and can make blocking calls to SQLAlchemy as desired. The do method will return a Deferred that will fire with the return value of thd, or with a failure representing any exceptions raised by thd.
The return value of thd must not be an SQLAlchemy object - in particular, any ResultProxy objects must be parsed into lists or other data structures before they are returned.
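For example, a sketch of a getter that copies rows into plain dictionaries before the thread returns; the builders table and its columns are used here purely for illustration:

def getBuilderNames(self):
    def thd(conn):
        tbl = self.db.model.builders
        res = conn.execute(tbl.select())
        # copy each row into a plain dict; the ResultProxy itself
        # must not escape the thread
        rows = [dict(id=row.id, name=row.name) for row in res]
        res.close()
        return rows
    return self.db.pool.do(thd)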
Warning
As the name thd indicates, the function runs in a thread. It should not interact with any other part of Buildbot, nor with any of the Twisted components that expect to be accessed from the main thread – the reactor, Deferreds, etc.
Queries can be constructed using any of the SQLAlchemy core methods, using tables from Model, and executed with the connection object, conn.
Note
SQLAlchemy requires a syntax that is forbidden by pep8: to select rows where a value is NULL, the where clause must be written as (tbl.c.value == None). pep8 forbids this form, requiring is None instead of == None. Because SQLAlchemy uses operator overloading to implement pythonic SQL expressions, and the is operator cannot be overloaded, the == operator must be kept. To resolve this, Buildbot provides the buildbot.db.NULL constant, which is simply None; so instead of writing tbl.c.value == None, write tbl.c.value == NULL.
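For instance, a hedged sketch of a where clause using the NULL constant; the choice of the buildrequests table and its complete_at column is illustrative:

from buildbot.db import NULL

def getIncompleteBuildRequestIds(self):
    def thd(conn):
        tbl = self.db.model.buildrequests
        # select build requests whose complete_at is SQL NULL
        q = tbl.select(whereclause=(tbl.c.complete_at == NULL))
        return [row.id for row in conn.execute(q)]
    return self.db.pool.do(thd)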
class buildbot.db.pool.DBThreadPool¶
do(callable, ...)¶
Returns: Deferred
Call callable in a thread, with a Connection object as its first argument. Returns a Deferred that will fire with the results of the callable, or with a failure representing any exception raised during its execution.
Any additional positional or keyword arguments are passed to callable.
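A brief sketch of passing an extra argument through do; the masters table is used purely for illustration:

def getRecentMasters(self, limit):
    def thd(conn, how_many):
        tbl = self.db.model.masters
        res = conn.execute(tbl.select().limit(how_many))
        return [dict(row) for row in res]
    # extra positional arguments are passed to thd after the connection
    return self.db.pool.do(thd, limit)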
3.7.4.3. Database Schema¶
Database connector methods access the database through SQLAlchemy, which requires access to Python objects representing the database tables. That is handled through the model.
class buildbot.db.model.Model¶
This class contains the canonical description of the Buildbot schema. It is presented in the form of SQLAlchemy Table instances, as class variables. At runtime, the model is available at master.db.model, so for example the buildrequests table can be referred to as master.db.model.buildrequests, and columns are available in its c attribute.
The source file, master/buildbot/db/model.py, contains comments describing each table; that information is not replicated in this documentation.
Note that the model is not used for new installations or upgrades of the Buildbot database. See Modifying the Database Schema for more information.
metadata¶
The model object also has a metadata attribute containing a MetaData instance. Connector methods should not need to access this object. The metadata is not bound to an engine.
The Model class also defines some migration-related methods:
is_current()¶
Returns: boolean, via Deferred
Returns true if the current database’s version is current.
upgrade()¶
Returns: Deferred
Upgrades the database to the most recent schema version.
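For example, a sketch of a query built from the model’s tables; the buildrequests table appears in the schema as described above, while the use of its complete column here is purely illustrative:

import sqlalchemy as sa

def getCompleteBuildRequestIds(self):
    def thd(conn):
        tbl = self.db.model.buildrequests
        # columns are reached through the table's c attribute
        q = sa.select([tbl.c.id]).where(tbl.c.complete == 1)
        return [row.id for row in conn.execute(q)]
    return self.db.pool.do(thd)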
3.7.4.4. Caching¶
Connector component methods that get an object based on an ID are good candidates for caching. The cached decorator makes this automatic:
buildbot.db.base.cached(cache_name)¶
Parameters: cache_name – the name of the cache to use
A decorator for “getter” functions that fetch an object from the database based on a single key. The wrapped method will only be called if the named cache does not contain the key.
The wrapped function must take one argument (the key); the wrapper will take a key plus an optional no_cache argument which, if true, will cause it to invoke the underlying method even if the key is in the cache.
The resulting method will have a cache attribute which can be used to access the underlying cache.
In most cases, getter methods return a well-defined dictionary. Unfortunately, Python does not handle weak references to bare dictionaries, so components must instantiate a subclass of dict. The whole assembly looks something like this:
class ThDict(dict):
    pass

class ThingConnectorComponent(base.DBConnectorComponent):

    @base.cached('thdicts')
    def getThing(self, thid):
        def thd(conn):
            ...
            thdict = ThDict(thid=thid, attr=row.attr, ...)
            return thdict
        return self.db.pool.do(thd)
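Calling such a getter then looks like this (a sketch, assuming the component above is registered as self.db.things):

# served from the 'thdicts' cache when the key is present
thdict = yield self.db.things.getThing(10)

# bypass the cache and hit the database regardless
thdict = yield self.db.things.getThing(10, no_cache=True)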
3.7.4.5. Tests¶
It goes without saying that any new connector methods must be fully tested!
You will also want to add an in-memory implementation of the methods to the fake classes in master/buildbot/test/fake/fakedb.py. Non-DB Buildbot code is tested using these fake implementations in order to isolate that code from the database code, and to speed up tests.
The keys and types used in the return value from a connector’s get methods are described in master/buildbot/test/util/validation.py, via the dbdict module-level value. This is a dictionary of DictValidator objects, one for each return value. These values are used within test methods like this:
rv = yield self.db.masters.getMaster(7)
validation.verifyDbDict(self, 'masterdict', rv)
3.7.5. Modifying the Database Schema¶
Changes to the schema are accomplished through migration scripts, supported by SQLAlchemy-Migrate. In fact, even new databases are created with the migration scripts – a new database is a migrated version of an empty database.
The schema is tracked by a version number, stored in the migrate_version table. This number is incremented for each change to the schema, and used to determine whether the database must be upgraded. The master will refuse to run with an out-of-date database.
To make a change to the schema, first consider how to handle any existing data. When adding new columns, this may not be necessary, but table refactorings can be complex and require caution so as not to lose information.
Create a new script in master/buildbot/db/migrate/versions, following the numbering scheme already present.
The script should have an upgrade method, which takes an engine as a parameter, and upgrades the database, both changing the schema and performing any required data migrations. The engine passed to this method is “enhanced” by SQLAlchemy-Migrate, with methods to handle adding, altering, and dropping columns. See the SQLAlchemy-Migrate documentation for details.
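A hedged sketch of such a script, following SQLAlchemy-Migrate conventions; the builders table and the new description column are hypothetical examples, not a real migration:

import sqlalchemy as sa

def upgrade(migrate_engine):
    metadata = sa.MetaData()
    metadata.bind = migrate_engine

    # reflect the existing table so it can be altered in place
    builders = sa.Table('builders', metadata, autoload=True)

    # add a new nullable column; existing rows are left with NULL
    description = sa.Column('description', sa.Text, nullable=True)
    description.create(builders)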
Next, modify master/buildbot/db/model.py to represent the updated schema. Buildbot’s automated tests perform a rudimentary comparison of an upgraded database with the model, but it is important to check the details - key length, nullability, and so on can sometimes be missed by the checks. If the schema and the upgrade scripts get out of sync, bizarre behavior can result.
Also, adjust the fake database table definitions in master/buildbot/test/fakedb according to your changes.
Your upgrade script should have unit tests. The classes in master/buildbot/test/util/migration.py make this straightforward.
Unit test scripts should be named e.g., test_db_migrate_versions_015_remove_bad_master_objectid.py.
The test master/buildbot/test/integration/test_upgrade.py also tests upgrades, and will confirm that the resulting database matches the model. If you encounter implicit indexes on MySQL that do not appear on SQLite or Postgres, add them to implied_indexes in master/buildbot/db/model.py.
3.7.6. Foreign key checking¶
The PostgreSQL and SQLite database backends check foreign key consistency. Bug #2248 needs to be fixed before foreign key checking can be supported for MySQL.
To maintain consistency with a real database, fakedb can check the foreign key consistency of your test data. To enable this, use:
self.db = fakedb.FakeDBConnector(self.master, self)
self.db.checkForeignKeys = True
Note that tests that only use fakedb do not really need foreign key consistency, though it is good practice to enable it in new code.
3.7.7. Database Compatibility Notes¶
Or: “If you thought any database worked right, think again”
Because Buildbot works over a wide range of databases, it is generally limited to database features present in all supported backends. This section highlights a few things to watch out for.
In general, Buildbot should be functional on all supported database backends. If use of a backend adds minor usage restrictions, or cannot implement some kinds of error checking, that is acceptable if the restrictions are well-documented in the manual.
The metabuildbot tests Buildbot against all supported databases, so most compatibility errors will be caught before a release.
3.7.7.1. Index Length in MySQL¶
MySQL only supports about 330-character indexes. The actual index length is 1000 bytes, but MySQL uses 3-byte encoding for UTF8 strings. This is a longstanding bug in MySQL - see “Specified key was too long; max key length is 1000 bytes” with utf8. While this makes sense for indexes used for record lookup, it limits the ability to use unique indexes to prevent duplicate rows.
InnoDB only supports indexes up to 255 unicode characters, which is why all indexed columns are limited to 255 characters in Buildbot.
3.7.7.2. Transactions in MySQL¶
Unfortunately, use of the MyISAM storage engine precludes real transactions in MySQL. transaction.commit() and transaction.rollback() are essentially no-ops: modifications to data in the database are visible to other users immediately, and are not reverted in a rollback.
3.7.7.3. Referential Integrity in SQLite and MySQL¶
Neither MySQL nor SQLite enforces referential integrity based on foreign keys. Postgres does enforce it, however. If possible, test your changes on Postgres before committing, to check that tables are added and removed in the proper order.
3.7.7.4. Subqueries in MySQL¶
MySQL’s query planner is easily confused by subqueries. For example, a DELETE query specifying ids that are IN a subquery will not work. The workaround is to run the subquery directly, and then execute a DELETE query for each returned id.
If this weakness has a significant performance impact, it would be acceptable to conditionalize use of the subquery on the database dialect.
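A hedged sketch of that workaround inside a connector method; the scheduler_changes and changes tables come from the schema, while the branch filter is purely illustrative:

import sqlalchemy as sa

def removeChangesOnBranch(self, branch):
    def thd(conn):
        sch_ch = self.db.model.scheduler_changes
        changes = self.db.model.changes
        # step 1: run the would-be subquery on its own
        res = conn.execute(sa.select([changes.c.changeid])
                           .where(changes.c.branch == branch))
        changeids = [row.changeid for row in res]
        # step 2: issue one DELETE per returned id, avoiding the subquery
        # that confuses MySQL's planner
        for changeid in changeids:
            conn.execute(sch_ch.delete()
                         .where(sch_ch.c.changeid == changeid))
    return self.db.pool.do(thd)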
3.7.7.5. Too Many Variables in SQLite¶
SQLite has a limit on the number of variables it can use in a single query, usually SQLITE_LIMIT_VARIABLE_NUMBER=999. There is currently no way with pysqlite to query the value of this limit; the C API sqlite3_limit is simply not bound to Python.
When you hit this limit, you will get an error like the following:
sqlalchemy.exc.OperationalError: (OperationalError) too many SQL variables
u'DELETE FROM scheduler_changes WHERE scheduler_changes.changeid IN (?, ?, ?, ..., ?)
You can use the doBatch method in order to write batching code in a consistent manner.
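For instance, a hedged sketch of batched deletion using doBatch; changeids is assumed to be a plain list of ids gathered earlier:

def removeSchedulerChanges(self, changeids):
    def thd(conn):
        tbl = self.db.model.scheduler_changes
        # delete in chunks to stay under SQLite's variable limit
        for batch in self.doBatch(changeids, batch_n=500):
            q = tbl.delete().where(tbl.c.changeid.in_(batch))
            conn.execute(q)
    return self.db.pool.do(thd)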
3.7.8. Testing migrations with real databases¶
By default, the Buildbot test suite uses a SQLite database for testing database migrations. To use another database, set the BUILDBOT_TEST_DB_URL environment variable to a value in the SQLAlchemy database URL format.
For example, to run tests with a file-based SQLite database, you can start tests in the following way:
BUILDBOT_TEST_DB_URL=sqlite:////tmp/test_db.sqlite trial buildbot.test
3.7.8.1. Run databases in Docker¶
Docker makes it easy to install and configure different databases locally in containers.
To run tests with PostgreSQL:
# Install psycopg.
pip install psycopg2
# Start container with PostgreSQL 9.5.
# It will listen on port 15432 on localhost.
sudo docker run --name bb-test-postgres -e POSTGRES_PASSWORD=password \
-p 127.0.0.1:15432:5432 -d postgres:9.5
# Start interesting tests
BUILDBOT_TEST_DB_URL=postgresql://postgres:password@localhost:15432/postgres \
trial buildbot.test
To run tests with MySQL:
# Install mysqlclient
pip install mysqlclient
# Start container with MySQL 5.5.
# It will listen on port 13306 on localhost.
sudo docker run --name bb-test-mysql -e MYSQL_ROOT_PASSWORD=password \
-p 127.0.0.1:13306:3306 -d mysql:5.5
# Start interesting tests
BUILDBOT_TEST_DB_URL=mysql+mysqldb://root:password@127.0.0.1:13306/mysql \
trial buildbot.test