Class LogFileProducer
What's the plan?
The LogFile has just one FD, used for both reading and writing. Each
time you add an entry, fd.seek to the end and then write.
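As a rough sketch of that append path (the addEntry name, the file mode,
and the missing channel framing are assumptions, not the actual source):

    import os

    class LogFile:
        def __init__(self, filename):
            # A single descriptor, shared by the writer and every reader.
            self.openfile = open(filename, "a+b")

        def addEntry(self, channel, text):
            # Channel framing is elided here; real entries would encode
            # which channel each piece of text belongs to.
            f = self.openfile
            # Seek to the end and then write, as described above: a
            # reader may have moved the shared offset since the last entry.
            f.seek(0, os.SEEK_END)
            f.write(text.encode("utf-8"))
            f.flush()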
Each reader (i.e. Producer) keeps track of its own offset. The reader
starts by seeking to the start of the logfile and reading forwards.
Between each hunk of the file it yields chunks, so it must remember its
offset before yielding and re-seek back to that offset before reading
more data. When its read() returns EOF, it has finished the first phase
of the reading (everything that has already been written to disk).
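That first-phase loop, written as a generator (getChunks and the
openfile attribute are guesses at names, and channel framing is again
elided):

    def getChunks(self):
        f = self.logfile.openfile
        offset = 0
        while True:
            # Re-seek to our own offset: the writer (or another reader)
            # may have moved the shared descriptor while we were away.
            f.seek(offset)
            data = f.read(self.BUFFERSIZE)
            if not data:
                return  # EOF: phase one is complete
            offset = f.tell()  # remember where we got to before yielding
            yield data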
After EOF, the remaining data is entirely in the current entries list.
These entries are all on the same channel, so we can do one
"".join and obtain a single chunk to be sent to the listener.
But since that involves a yield, and more data might arrive after we give
up control, we have to subscribe the listener before yielding. We can't
subscribe it any earlier, or it would get data out of order.
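Sketching that hand-off (runEntries and the subscribe signature are
assumptions about the surrounding LogFile code, not quotes from it):

    def phaseTwo(self):
        entries = self.logfile.runEntries  # all on the same channel
        # Subscribe *before* yielding: anything that arrives after we
        # give up control must reach us through logChunk(), in order.
        self.logfile.subscribe(self, catchup=False)
        self.subscribed = True
        if entries:
            channel = entries[0][0]
            chunk = "".join(text for (ch, text) in entries)
            yield (channel, chunk)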
We're using a generator in the first place so that the listener can
throttle us, which means the listener is pulling. But the subscription
means we're pushing. Really we're a Producer. In the first phase we can be
either a PullProducer or a PushProducer. In the second phase we're only a
PushProducer.
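A sketch of that dual role using Twisted's producer interfaces
(chunkGenerator and consumer are attribute names I've made up, and the
generator is assumed to yield (channel, text) tuples):

    from zope.interface import implementer
    from twisted.internet import interfaces

    @implementer(interfaces.IPushProducer)
    class LogFileProducer:
        paused = False
        subscribed = False

        def __init__(self, logfile, consumer):
            self.logfile = logfile
            self.consumer = consumer
            self.chunkGenerator = None  # set to the phase-one generator

        def pauseProducing(self):
            self.paused = True

        def resumeProducing(self):
            self.paused = False
            # Phase one: act like a pull producer, draining the generator
            # until the consumer throttles us or it runs dry. A partially
            # consumed generator resumes right where it left off.
            for chunk in self.chunkGenerator:
                self.consumer.writeChunk(chunk)
                if self.paused:
                    return
            # Generator exhausted: from here on we are push-only, fed
            # by logChunk() callbacks from the LogFile.

        def stopProducing(self):
            self.chunkGenerator = None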
So the client gives a LogFileConsumer to LogFile.subscribeConsumer. This
Consumer must have registerProducer(), unregisterProducer(), and
writeChunk(), and is just like a regular twisted.interfaces.IConsumer,
except that writeChunk() takes chunks (tuples of (channel, text)) instead
of the normal write(), which takes just text. The LogFileConsumer is
allowed to call stopProducing, pauseProducing, and resumeProducing on the
producer instance it is given.
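For instance, a minimal consumer satisfying that contract might look
like this (a hypothetical example, not part of Buildbot):

    class PrintingLogConsumer:
        """Like twisted.interfaces.IConsumer, except writeChunk() takes
        (channel, text) tuples rather than write() taking plain text."""

        def registerProducer(self, producer, streaming):
            self.producer = producer

        def unregisterProducer(self):
            self.producer = None

        def writeChunk(self, chunk):
            channel, text = chunk
            print("channel %d: %r" % (channel, text))
            # We are allowed to throttle or cancel at any point:
            #   self.producer.pauseProducing()
            #   self.producer.resumeProducing()
            #   self.producer.stopProducing()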
logChunk(self, build, step, logfile, channel, chunk)
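The body isn't shown on this page, but given the plan above it
presumably pushes each new entry through to the consumer once we are
subscribed, something like (a guess, not the actual source):

    def logChunk(self, build, step, logfile, channel, chunk):
        # Phase two: called by the LogFile for each entry added after
        # we subscribed; forward it straight to the consumer.
        self.consumer.writeChunk((channel, chunk))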
paused = False
subscribed = False
BUFFERSIZE = 2048