

Measuring maximum sustained transaction throughput

on a global network of Bitcoin nodes
Andrea Suisani,1 Andrew Clifford,1 Andrew Stone,1 Erik Beijnoff,1 Peter Rizun,1 Peter
Tschipper,1 Alexandra Fedorova,2 Chen Feng,2 Victoria Lemieux,2 Stefan Matthews3
1 Bitcoin Unlimited, 2 University of British Columbia, 3 nChain

Although it is well understood that increasing Bitcoin's block size limit (currently 1 MB)
would immediately reduce transaction fees and improve confirmation reliability, concern
exists regarding the network's ability to safely and reliably handle the associated increase in
transaction throughput.
To investigate this concern, we set up a global network of Bitcoin mining nodes[1]
configured to accept blocks up to one thousand times larger (1 GB) than the current limit. To
those nodes we connected transaction generators, each capable of generating and
broadcasting 200 transactions per second (tx/sec) sustained.[2] We performed (and are
continuing to perform) a series of ramps, where the transaction generators were
programmed to increase their generation rate following an exponential curve, starting at 1
tx/sec and concluding at 1,000 tx/sec as illustrated in Fig. 1, to identify bottlenecks and
measure performance statistics.

Fig. 1. Ramp input and typical node response.
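The exponential ramp described above can be sketched as follows. This is a minimal illustration only; the actual generator code, ramp duration, and update interval are not specified in this document.

```python
def ramp_rate(t, duration, start=1.0, end=1000.0):
    """Target generation rate (tx/sec) at elapsed time t along an
    exponential ramp from `start` to `end` over `duration` seconds."""
    return start * (end / start) ** (t / duration)

# Example: a hypothetical one-hour ramp from 1 tx/sec to 1,000 tx/sec.
for t in (0, 1800, 3600):
    print(f"t={t:5d}s  rate={ramp_rate(t, 3600):8.2f} tx/sec")
```

An exponential (rather than linear) schedule spends equal time in each decade of throughput, so the low-rate and high-rate regimes are sampled equally.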

[1] At the time of writing, there were mining nodes in Toronto (64 GB, 20-core VPS), Frankfurt (16 GB, 8-core VPS),
Munich (64 GB, 10-core rack-mounted server with 1 TB SSD), Stockholm (64 GB, 4-core desktop with 500 GB SSD),
and central Washington State (16 GB, 4-core desktop). With the passing of BUIP065 and the associated $300,000-per-
year funding for the Gigablock Testnet Initiative, additional mining nodes will be deployed in Beijing, Bangalore, Sao
Paulo, Sydney and Vancouver. The results we present at Stanford will include data from this larger test network as well.
[2] At the time of writing, there were generators in San Francisco, New York, London, Amsterdam, Singapore and
Bangalore (all 8 GB, 4-core VPS). Generators are Python applications interacting with a local instance of bitcoind.
What the audience will learn
We would like to give a PowerPoint talk about this research, and we request a slot between
30 and 45 minutes in length. The audience will learn about:
- How we set up this experiment and the challenges we faced. For example:
o The challenges involved in creating and broadcasting 1,000 tx/sec sustained[3]
o Bugs in the Bitcoin client that must be fixed and configuration parameters
that must be adjusted to facilitate very high levels of transaction throughput.
o The hardware requirements for mining and non-mining nodes to support these
levels of transaction throughput.[4]
o Things our experiment did not test for (e.g., the UTXO set was not controlled
and likely was significantly smaller than it would be in a realistic situation
with very high levels of transaction throughput).
- Empirical relationships observed, including:
o Block propagation and block verification times versus block size and versus
mempool coherence.
o Xthin block and Bloom filter sizes versus block size and versus mempool
coherence.
o Network orphan rates versus transaction throughput.
- Bottlenecks identified and how they might be fixed, for example:
o How mempool admission appears to be the dominant bottleneck, owing to
the single-threaded nature of the present design.
o How, upon saturation of mempool admission, mempool decoherence between
nodes quickly grows and Xthin compression ratios decrease, resulting in less
efficient block propagation and further degrading node performance.
o (Note that very large blocks have been efficiently propagated and verified
with Xthin prior to mempool decoherence; block propagation/verification
does not appear to be the bottleneck.)[5]
- The Gigablock Testnet Initiative more generally, including how to get involved!
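As background for the Bloom filter measurements mentioned above, the textbook Bloom filter sizing formula relates filter size to element count and target false-positive rate. This is standard theory, not a parameterization taken from the test network; the actual filter settings used by the nodes may differ.

```python
import math

def bloom_filter_params(n_items, fp_rate):
    """Textbook Bloom filter sizing: bits m = -n*ln(p)/(ln 2)^2,
    optimal hash count k = (m/n)*ln 2."""
    m = math.ceil(-n_items * math.log(fp_rate) / (math.log(2) ** 2))
    k = max(1, round(m / n_items * math.log(2)))
    return m, k

# A hypothetical mempool of 100,000 transactions, 0.1% false-positive target:
bits, hashes = bloom_filter_params(100_000, 0.001)
print(f"{bits} bits (~{bits / 8 / 1024:.1f} KiB), {hashes} hash functions")
```

Because the optimal filter size grows linearly with the number of elements, filter traffic is expected to scale with mempool size, which is one reason filter size versus block size is worth measuring empirically.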

Further reading
BUIP 065: Gigablock Testnet Initiative (1 Sep 2017)
Gigablock Testnet: Experiment #1 (draft test plan)

[3] For example, the native bitcoin wallet becomes sluggish when managing wallets with many unspent outputs and key
pairs, and is thus not suitable as a high-volume transaction generator. A custom Python application was written instead
to manage the wallet's unspent outputs and key pairs, scaling to millions of key pairs.
[4] E.g., nodes with less than 8 GB RAM often crashed due to insufficient memory prior to hitting the mempool
admission bottleneck.
[5] At the time of writing, the five largest blocks during a ramp were, in descending order:
0.262 GB @ 55X compression (00000000e73ae82744e9fb940e6c0dc3d40c4338229ee4088030b3feda23510a)
0.244 GB @ 38X compression (00000000003baeb743f31b0e325bf44b7d23c3b235a8e9a24c4b19be4f0211e40)
0.206 GB @ 1.2X compression (00000000adae088a27fbbdb73818e129189fbf9c2e5eae14fe29dd77a1214b62)
0.102 GB @ 54X compression (0000000060eb9edf1b516ce619143d1515d5bb419add31e39443dd97e19d89b5)
0.078 GB @ 44X compression (00000000478479b0570cd1051c4feb34bd0ee27f7a246b340ca6b3ddb8412a60)
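For rough scale, the compression ratios listed above imply the following approximate on-the-wire sizes. This is a back-of-the-envelope calculation from the figures given, not an additional measurement.

```python
# (block size in GB, Xthin compression ratio) for the five blocks listed above.
blocks = [(0.262, 55), (0.244, 38), (0.206, 1.2), (0.102, 54), (0.078, 44)]

for size_gb, ratio in blocks:
    wire_mb = size_gb * 1024 / ratio  # approximate data actually sent
    print(f"{size_gb:.3f} GB @ {ratio}x -> ~{wire_mb:.1f} MB on the wire")
```

Note how the poorly compressed 0.206 GB block (1.2x) still moved roughly 176 MB across the wire, versus under 5 MB for the well-compressed 0.262 GB block, illustrating the cost of mempool decoherence to block propagation.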