
Bitcoin Agent

An on-chain Byzantine fault tolerant service indexing network using Bitcoin's Proof-of-Work and tamper-resistant chains of digital signatures built into the base layer.

Attila Aros - Chief Technology Officer, MatterPool Inc.


attila@matterpool.io
Document version 1.0.

Abstract
This paper introduces a novel service framework, "Bitcoin Agent", that allows anyone to independently operate, verify, index, and query the blockchain in a way that is Byzantine fault tolerant and provides a general Turing-complete state machine that can be independently verified using the native tamper-resistant chain of digital signatures built into the Bitcoin satoshi tokens themselves. All applications can operate trustlessly with no direct communication (if desired), instead acting as "Byzantine listeners" who track the strongest proof-of-work signal for any state coordination. Indeed, this agent is a special kind of listening Bitcoin node (i.e., one which only reads blocks and does not create them) that relies only on the raw block header proof-of-work chainwork and the raw block data itself, which can be obtained securely and trivially from any provider. We analyze the framework and claim this approach is capable of easily handling 1-10 GB blocks every 10 minutes (about 3-30 million transactions) on home desktop computers equipped with nothing more than an internet connection of 20+ Mbps bandwidth.
Additionally, we outline a sketch of a hypothetical Bitcoin digital asset protocol, "SimpleAsset", a non-fungible smart token, and some of its desirable properties.

We will show that using the satoshi token and its own chain of digital signature history ("title history") is the key to sharding the UTXO set, unlocking massive scale for businesses, and is the key to building an on-chain distributed verifier network. It was Satoshi's "vision" all along to have businesses and users running their own (UTXO-sharded) listening nodes. It never really hits a scaling ceiling.

Note: Throughout this paper "Bitcoin" refers to the original protocol, "Bitcoin (BSV)".

Background and Problem Overview


"The bitcoin network needs to be able to form a consensus about a
non- contradicatory subset of transactions... there may not be an
objectively best answer, but there needs to be 1 answer that is
settled upon. That is the double spending problem. And proof of
work is Bitcoin's means of coming to consensus." from "Bitcoin Stuff
- The Beacon, the Bunker, and the Faulty Node" https://
www.youtube.com/watch?v=t70iQnoxY7
Users, wallets, and services today do not have a way of maintaining arbitrary consensus state with each other. Even when relying on miners, SPV, blockchain providers, and other data services, there is consensus only at the most basic primitive layer: the satoshi token, and nothing else. They all converge just fine on consensus regarding the state of the satoshi tokens themselves and the immutable timestamped transaction history. However, "script-based" and "opreturn" data-carrier techniques themselves require a higher-level interpreter or state machine to "load the tape" of data and run it forward to the next step.

These script and data-carrier based protocols are often referred to as "Layer-1" and "Layer-2" smart contracts because they "sit on top" of the Bitcoin Virtual Machine (BVM) as code constructs in script or as higher-level interpreted languages, relegating the satoshi token layer to a mere data carrier or unintelligent timestamping substrate. Ignoring the satoshi token base layer itself means that strong consensus is not enforced by the proof-of-work mining network for these services' state at all. Developers and services are left silently hoping that the counterparty indexed the blockchain correctly, in a way that resembles some shared form of reality. Even though they are listening to the blockchain, how do they know they each arrived at the consensus state correctly, yet independently?

In other words, the same consensus guarantees that Bitcoin miners ("Byzantine Generals") enjoy between each other for maintaining satoshi token state (the UTXO index) are not available to non-mining client wallets, users, and developers today without a paradigm shift in thinking about the nature of ownership and history. Namely, clients have no way of verifying application state in a deterministic, tamper-resistant way, and as a result are unable to achieve strong consistency with their business partners, other users, and services. This is because the miners have an economic incentive for strong digital signature chain consistency and therefore always process entire blocks and build the UTXO index, so they can be absolutely sure they are not being fooled by other potentially hostile Byzantine Generals. Users and developers in Bitcoin have been misled and are now averse to processing block headers and raw blocks. This need not be the case, as we will show in this paper.

Are businesses and users forever chained to "miners and transaction processors" for getting their transaction histories and "proofs"? Absolutely not! Because it is the case that "businesses can be businesses", users and businesses alike can have the same absolute certainty about their relevant chains of digital signatures, not only for bitcoin satoshis but for all types of computations. The path forward is hinted at in the Bitcoin Whitepaper.

We assume that users and businesses will want to store their entire transaction histories and assets of interest for their customers and partners. This assertion is made without proof, but at MatterPool we have seen almost all clients require some form of transaction history (usually most or all of it) that they can rely on for their internal systems, display to their customers, and so on. The interesting question is: why didn't the previous owner give them the full chain of signatures going back to the coinbases in the first place? After all, at every step of the way each owner had the parent inputs trivially in hand, so why not just pass them directly on to the next user safely?

It is left as an exercise for the reader to show that the storage and computational cost for passing on a full UTXO chain of signatures back to a coinbase transaction is amortized O(n) in time and space complexity, where n is the sum of the total assets being watched along with their full UTXO chains of signatures. Losing this "title history" and not passing it on means the new owner has to query a miner or blockchain service for the history, wasting resources unnecessarily. The Bitcoin Agent methodology and framework imposes zero indexing overhead above and beyond the native processing that Bitcoin consensus mining nodes perform themselves when updating the UTXO set. The user only stores and processes transactions that have meaning for them and does not need to store any other part of the blockchain or UTXO set that is not relevant to their purposes.

"Driving" the point home...

Imagine buying a used car without inspecting its full title history or ownership, or real estate or investment contracts. Businesses already store this in their databases and file systems today because it is crucially important that they have the entire history for their business in hand, to serve their customers day to day and in real time. In the case that a business loses or "shreds" its old files, they are available to be replayed from the blockchain for a fee. But in practice businesses will normally keep records as long as feasible and as long as the data has value to them. It would be ludicrous to think that businesses will publish their chains of transactions to the miners and not index or keep this valuable business information. It costs orders of magnitude less to save coins with 1,000,000 transfer histories (just 100 MB of history) on mobile devices, home and commercial computers than it does to save and recall that data at a later point in time from a miner or archiver. A trip to the archiver is good when you need old lost information; it is not so good when you are serving millions of customers every day and need this information readily available and pre-indexed.

We will outline a solution that we have tested at MatterPool: a simple and powerful framework and methodology that gives any token, smart contract, or data on the blockchain the same powers that regular satoshi tokens themselves receive, and all the Byzantine fault tolerant guarantees that come with using native Bitcoin.

Almost everyone until now has been acting like a small blocker in Bitcoin BSV, despite the fact that almost everyone has a laptop computer that would barely be stressed processing 1 GB blocks every 10 minutes while indexing only the UTXO shard relevant to their own needs.

Symptoms that a service is not in consensus with others include:

1. Depending on a custom (usually non-deterministic) API to reproduce state, trusting that the "blockchain provider" will do the right thing.
2. Being unable to point to a single source of state truth for their business needs. How does a business know with 100% certainty that a middleman didn't tamper with the packet between the API service and the client?
3. Worrying about silent data corruption and being unable to detect it without incurring a difficult and resource-intensive effort to reconstruct state. How do they handle rollbacks and ensure (prove) that replays are idempotent and correct?
4. Being unable to provide cryptographic evidence to business partners or customers that they processed the block's UTXO or state transition mutations correctly.
5. Frequently dealing with race conditions due to complicated systems architecture. The asynchronous nature of Bitcoin is being fought instead of being embraced.
6. Being locked into one provider and "their way" of searching and doing things, when really all that is needed is the block headers and the raw blocks themselves.
We intend to show that all of these problems can easily be solved by adopting the "Bitcoin Agent" approach and using the full power of native Bitcoin satoshis and their tamper-resistant digital signature chain.

It is important that we first clear up some misconceptions about what Simplified Payment Verification (SPV) can and cannot do, and how it relates to these problems.

War and Peace: Simplified Payment Verification (SPV)

What about Simplified Payment Verification (SPV)? Doesn't that provide us with everything we require to prove that a transaction was confirmed (i.e., not double spent, and included in a block)?

In short, the only capability SPV provides is a way for users and mining peers to have some level of assurance that their UTXOs (transaction chains) are valid and timestamped, as long as the network is not currently in some attack condition (perhaps a covert, long-running one) at that point in time. In other words, SPV is meant to be used in peacetime, not wartime.

SPV merely gives a confidence level that can be measured by comparing how much energy it would take to reverse the spends back through any number of blocks. The client is able to quantify and choose their risk level by deciding how much energy cost must accumulate before they consider the transaction "settled". Moving a billion dollars of value is only safe when the cost of reversing the chain of blocks back to before that point in time is greater than the billion dollars being moved in that transaction, for example.

Bitcoin mining consensus nodes themselves actually have this concept of "sufficient" energy cost built in, with the requirement that new coinbase UTXOs cannot be spent until the chain of block headers containing the coinbase transaction has been extended by 100 more headers. They cannot spend their coinbase rewards until 100 confirmations have "elapsed". Like mining nodes, users will also select their required confirmations based on the total value being transacted. Someone moving significant sums of money (millions or billions perhaps) might even want 100 confirmations or many more to be absolutely sure that an attacker or alternate chain forks do not emerge spontaneously (consider stressful economic, societal, and global conflict scenarios).

Technical note: Opportunity cost (energy expenditure) in Bitcoin is measured in units of "Difficulty". For example, Difficulty=1 corresponds to a hashrate of about 7 MH/s, while Difficulty=2 corresponds to about 14 MH/s (difficulty is additive). This means that after about 10 minutes (on average) of hashing at 7 MH/s, a valid block header nonce will be found. For a detailed discussion about proof-of-work as a service and opportunity cost, see the Boost POW Whitepaper.
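To make the numbers in that note concrete, here is a minimal sketch of the arithmetic, assuming only the conventional approximation that Difficulty=1 corresponds to roughly 2^32 expected hashes per block and Bitcoin's 600-second block target; the function names are illustrative.

    # Expected work implied by a given difficulty, under the standard
    # ~2^32-hashes-per-block-at-Difficulty-1 approximation.
    TARGET_BLOCK_SECONDS = 600
    HASHES_PER_DIFFICULTY_1 = 2 ** 32

    def expected_hashes(difficulty: float) -> float:
        """Expected number of hash attempts to find one block at this difficulty."""
        return difficulty * HASHES_PER_DIFFICULTY_1

    def implied_hashrate(difficulty: float) -> float:
        """Hashrate (hashes/second) needed to find a block every ~10 minutes on average."""
        return expected_hashes(difficulty) / TARGET_BLOCK_SECONDS

    print(implied_hashrate(1) / 1e6)  # ~7.16 MH/s, matching the ~7 MH/s figure above
    print(implied_hashrate(2) / 1e6)  # ~14.3 MH/s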

The Utility of SPV

SPV is a double-spend risk mitigation technique and provides the following fundamental assurances:

1. A client or miner peer can get proof that a txid existed immutably at the time that some block header was mined that included the txid. (Note that this could be just a single orphaned block header at some time t.)
2. A client or miner peer can get proof of the energy cost that went into timestamping it (and any extra energy in subsequent headers can be calculated irrefutably).
3. A client or miner peer can get proof of the chronological ordering of transactions between blocks and inside each block.
4. A client or miner peer can get proof of the causal (or topological) ordering of transactions within the same block: take two transactions in the same block and compare their merkle proofs to deduce causal (topological) order.
5. (Optional) A client can request a digital signature (signed receipt) attesting that the miner has seen the transaction and validated it against its own block header chain and merkle tree storage indexes. This is like a contract or a public commitment to what the miner's state commitment is.
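As an illustration of assurance (1), here is a minimal sketch of verifying a merkle inclusion proof for a txid against a block header's merkle root; the function names are illustrative and not part of any specific library, and the hashes are assumed to be in internal (non-display-reversed) byte order.

    import hashlib

    def sha256d(data: bytes) -> bytes:
        """Bitcoin's double SHA-256."""
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def verify_merkle_proof(txid: bytes, merkle_root: bytes,
                            branch: list, index: int) -> bool:
        """Hash the txid up the merkle branch; the low bit of `index` at each level
        tells us whether our node is the right (1) or left (0) child."""
        node = txid
        for sibling in branch:
            if index & 1:
                node = sha256d(sibling + node)   # we are the right child
            else:
                node = sha256d(node + sibling)   # we are the left child
            index >>= 1
        return node == merkle_root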
There is a misconception in Bitcoin today that "the miners will do it for us", "SPV will solve our problems", and "miners have an economic incentive to index all my token histories". No, they do not have an "economic incentive" to proactively index and serve your spend and transaction histories. Retroactively, for a fee, sure, but nowhere in the whitepaper does it say that token spend history will be kept. And why would all miners do that? Perhaps some have fast bandwidth and good CPUs but do not want to operate ten more football fields of storage data centers; one football field for their hashing and computational market is enough. The only party that, by definition, cares the most about the honest title and ownership history of a token, satoshi, or any asset is the person buying or selling the item.

SPV Myths

SPV is all that is needed to maintain a client-focused trustless payments network.

FALSE: SPV is strictly a "peacetime" measure and is less trustworthy when the network has a covert or known active attacker. It is a fast probabilistic check for block inclusion of a transaction. It is preferred and recommended that power users, businesses, and mining nodes all keep the transaction digital signatures of interest themselves, especially in the case of purchases of digital assets and investments that must be shown to link back to a minting transaction.

If I have an SPV proof, then my database is safe from corruption.

FALSE: One moment you have an SPV "receipt"; another moment you have a stale receipt that is not linked to the active chain due to a re-organization. This is acceptable as a probabilistic measure of safety for payments, but not for the ownership and history of valuable assets themselves.

Businesses prefer the cheapest costs and want the complete history at hand, with signatures and transaction histories fully intact, for the efficient operation of their business. A "second best" is to have SPV for the guarantees it affords to satoshi token payments themselves, but in its current usage it provides very little to clients trading rare and valuable digital assets, for example.

Chain of Digital Signatures

Consider the situation of a car owner who keeps their purchase papers and maintenance receipts and then passes them on to the next person. The next owner adds to the records and then passes the entire history, peer to peer, to the next person's wallet. This is a simple and 100% trustless technique for passing on token histories and digital signatures. In the case that a person loses the history, or in the car example, the prospective buyer can run a registration "title history" check with Carfax.com or a similar provider. This is what will happen when users do not pass on their tokens' title histories: they must then go to a central party to serve the data back to them. Why not just index the UTXO shard and digital signatures relevant to themselves in the first place?

From the Bitcoin Whitepaper:

"We define an electronic coin as a chain of digital signatures. Each owner transfers the coin to the next by digitally signing a hash of the previous transaction and the public key of the next owner and adding these to the end of the coin. A payee can verify the signatures to verify the chain of ownership."

A coin (satoshi token) is defined as a chain of digital signatures. Full stop. It is not defined as the UTXO set, nor is it defined to be a single output. It is the totality of the entire tamper-resistant chain of signatures that gives each coin its value. It is also necessary for Bitcoin miners and blockchain validators to be able to trace back in the case of system failure or fraud. A coin is a chain of digital signatures because that is the only way a payee can verify the chain of ownership and state completely trustlessly (i.e., back to a coinbase txid or some other "minting" transaction - more on that below when we introduce SimpleAsset).
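As one illustration of that verification, here is a minimal sketch of walking a token's title history from its minting transaction to its current tip. It assumes the sender hands over the full ordered list of transactions, and the transaction model plus the script/signature check are hypothetical placeholders rather than any specific wallet or library API.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TxInput:
        prev_txid: str    # txid of the parent transaction being spent
        prev_index: int   # output index being spent in that parent

    @dataclass
    class Tx:
        txid: str
        inputs: List[TxInput]
        raw: bytes        # full raw transaction, needed for real script/signature checks

    def verify_title_history(history: List[Tx], check_spend_authorized) -> bool:
        """history[0] is the minting transaction, history[-1] is the current tip.
        `check_spend_authorized(parent, child)` is the caller-supplied script and
        signature validation step (e.g., running a script interpreter)."""
        for parent, child in zip(history, history[1:]):
            if not any(i.prev_txid == parent.txid for i in child.inputs):
                return False                      # broken link in the chain of signatures
            if not check_spend_authorized(parent, child):
                return False                      # signature/script check failed
        return True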

Consider a simple Non-Fungible Token (NFT) smart token that is exchanged 10,000 times. If each output is 1kb, then that is only 1MB of history back to the minting genesis. Are we really saying that it is infeasible for every wallet to simply pass on the history to every other wallet? Even with 100,000 updates or transfers that is 10MB, or about the size of a high-resolution consumer photo. Why is everyone in Bitcoin acting as if they have Raspberry Pis and are unable to download or pass on, say, a 10MB or even 50MB chain of signatures? Perhaps someone will receive a coin that has millions of transactions in its parent chain of inputs, but more likely it will have an average of a few hundred or thousand transfers. Even in the scenario of 1,000,000 transfers, that is only 100MB, which at current internet speeds takes a few seconds to download.

Why wouldn't a potential buyer of a media file or digital artwork want to inspect the few hundred or thousand updates of its history to be 100% guaranteed that it is authentic and actually legally owned by the party claiming to be selling it? We can trust a miner for SPV, but only in peacetime, and only for the satoshi tokens themselves as things stand today. It is also necessary to have a complete chain of digital signatures for legal purposes. See "The Risks of Segregated Witness: Problems under Evidence Laws" for a thorough discussion of the need for businesses to keep complete chains of digital signatures.

By now it should be clear to the reader that SPV is designed as a risk mitigation technique and a way to quickly check any of the properties outlined above (but not the full digital signature chain of asset ownership). However, the strongest form of evidence and assurance is to simply keep the full chain of digital signatures intact for all the data a person or business would want to keep.

They can always go "to the blockchain" and scan and re-index it, but if they bought something valuable in the first place, why would the user discard their "title history"? It is like buying a nice, well-maintained used car, the previous owner handing you the pile of records, and then merely discarding the records because "I can always pay for it later". What sense is there in discarding data that the seller already has anyway? To retain the value of your asset (the car), wouldn't it be better to keep the title history records in hand?

The Whitepaper makes it clear that in the case of network stress or discrepancies, businesses and services will (re)index the transactions, back to their minting coinbases, to be absolutely sure they have the full chain of history. SPV is a shortcut, but it was always intended that businesses and services would run their own sub-UTXO shard indexer if they accept and make payments frequently. The Whitepaper says clearly that this was the case, and that SPV is a useful but strictly less preferred alternative to verifying the entire chain. We can see this in the narrative about SPV for payments use cases, while it is never mentioned for "asset and token ownership" usage. That is because it is strictly inferior for that use case and thus rarely ever mentioned in that context.

"One strategy to protect against this would be to accept alerts from network nodes when they detect an invalid block, prompting the user's software to download the full block and alerted transactions to confirm the inconsistency." - Bitcoin Whitepaper, Section 8

Bitcoin Agent Quick Theory of Operation

The concept is simple and can be explained like this: lock satoshis in an output script that represents the value of the smart contract, computation, or token. The value is decided by the user, and the amount does not matter for the purpose of successful operation. This is the original "colored coins" idea proposed by Mike Hearn, which was to actually color the satoshi itself. Somewhere along the line almost everyone forgot this simple idea and overlooked its profound implications and its inherent superiority to every other blockchain that supports smart contracts, such as Ethereum.

Source: https://www.coindesk.com/smart-property-colored-coins-mastercoin

Before we describe the method in detail, we must first give an overview of the actual mechanism and data structure a Bitcoin mining node uses itself when building up the UTXO (state) index for verification. By analyzing this process, we can draw a direct parallel with the processing done by Bitcoin Agent and then proceed to show that it is optimal (zero overhead indexing).

How Bitcoin nodes build and verify state

An introduction to the UTXO set is prerequisite knowledge; see "Bitcoin's UTXO Set Explained" to bring the reader up to speed. The following discussion pertains specifically to how a Bitcoin node itself implements the data structures necessary to support it. This analysis will be used to construct the formal proof that Bitcoin Agent's storage and computational cost is optimal (zero overhead indexing).

The process that a node goes through in building up a UTXO set is basically to keep a Map data structure that is used to reference parent inputs (by "txid" and "index", i.e., the outpoint). When the block processor tries to process a transaction, it performs a lookup on all the parent inputs and checks whether each exists as 'unspent' in the UTXO Map. If it is present (i.e., not spent before), then the UTXO Map entry is deleted completely (it is old state, after all, and no longer needed). On the other hand, if it is not present, then an exception is thrown and the block is marked as invalid. The transaction's new outputs are then inserted into the Map as unspent, ready to be referenced by later spends.
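A minimal sketch of that block-processing loop follows, assuming transactions are already parsed into simple objects; the names are illustrative and this is not the actual node implementation.

    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    Outpoint = Tuple[str, int]          # (txid, output index)

    @dataclass
    class ParsedTx:
        txid: str
        spends: List[Outpoint]          # outpoints consumed by this transaction
        outputs: List[int]              # satoshi values of the new outputs

    class InvalidBlock(Exception):
        pass

    def apply_block(utxo: Dict[Outpoint, int], block: List[ParsedTx]) -> None:
        """Apply a block to the UTXO Map in topological (block) order. Every
        non-coinbase input must already be unspent, otherwise the block is invalid."""
        for i, tx in enumerate(block):
            if i != 0:                                   # the coinbase spends nothing
                for outpoint in tx.spends:
                    if outpoint not in utxo:
                        raise InvalidBlock(f"double spend or missing parent: {outpoint}")
                    del utxo[outpoint]                   # old state is no longer needed
            for index, value in enumerate(tx.outputs):
                utxo[(tx.txid, index)] = value           # new unspent outputs enter the Map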

For a block of n transactions (assuming on average 2 inputs and 2 outputs), we can see that the number of UTXO Map lookups must always be at least O(n), because we must check each and every input for all transactions in the block. Notice, however, that the size of the UTXO set is larger than the number of transactions in a given block. Let w represent the number of unspent outputs accumulated up until some block height h.

Therefore, the run-time complexity is O(n * log w), where w is the total number of unspent outputs at height h. If a hash index is used (trading more space for better run time), then we can achieve O(n * 1) = O(n) run-time complexity instead.

What about storage complexity? The storage complexity is O(w * log w) using a B-tree, because by definition all unspent UTXOs carry some non-zero, non-negative value which represents future revenue for a miner. We will assume we are using a B-tree instead of a hash index, even though a hash index lowers the storage requirement to O(w).

It is important to note here that not all miners will serve full transaction indexes (not to be confused with the UTXO set index) for historical data. In the limit, miners will likely only serve the UTXOs between themselves, because there will be no need to bloat the Bitcoin mining node and lower its competitiveness at building and finding blocks. Anything above and beyond UTXO set management will be provided as an extra service by the same or other miners. Not all miners will want to store EBs of data in massive data centers; instead they will just maintain the (private) UTXO set, as we see today already.

Proof that Bitcoin Agent is Optimal (Zero Indexing Overhead)

Lemma 1. A user or service that needs an irrefutable chain of digital signatures for proving authenticity must analyze at least O(n) inputs, where n is the total number of spends of a coin or asset.

Lemma 2. Each ownership transfer adds O(1) extra storage overhead to the history of digital signatures that gets passed on to the next owner. After n transactions, the latest owner holds n transactions (each owner passed on the previous history plus one).

Lemma 3. A user or service that wants to maintain consensus on the state of a smart contract, token, or computation must either proactively index the inputs ahead of time, or reactively receive and verify the entire parent chain, to be 100% certain of authenticity.

Lemma 4. SPV can be requested for only the UTXO digital signature chain tip (i.e., the last settled UTXO) to know that the entire chain of transactions anchored back all the way to the minting event has been successfully timestamped and accepted by the network.

Side note: A user that owns a token or computational state can pass the entire history over to the new owner, who then in turn verifies that the chain of digital signatures is intact and correct back to genesis. Alternatively, users and businesses can proactively pattern match the minting transaction formats themselves (just as a Bitcoin node does for the native satoshis and coinbase transactions), which in turn indexes all downstream UTXOs from that point onwards, forever.

The fact remains: if someone wishes to verify authenticity, they must do it proactively themselves up front, or reactively (such as when a sender transfers a new token and its associated history peer to peer, or when instead requesting the full "title history" from the Bitcoin blockchain miners' archives).

Theorem: The time and space complexity requirements for a Bitcoin Agent to maintain consensus are identical to those of a Bitcoin mining node.

Because the satoshi token itself carries the exact value of the minting input, and we can trustlessly verify the chain of signatures up the chain of UTXOs, the problem of determining the latest state of a smart contract, NFT, or other computation reduces to UTXO tracking. Only a UTXO Map is required to successfully process a block and accept the computation. This implies that after some number of confirmations, all agents processing this chain of digital signatures will arrive at the same state, because the algorithm and problem of consensus is merely reduced to verifying the input spends in a Map data structure.

The topological ordering property guarantees that if we only index the minting transactions (or plain coinbases), then we can be sure that all transactions of interest appear in the downstream DAG of the outputs of those minting transactions. Since the indexer has chosen to create a minting transaction for the purposes of state transformation and verification, it follows trivially that all causally related transactions are located directly downstream. This is a "UTXO shard in a box" for every wallet, service, and business, at scale; a sketch of the indexing loop follows.

Block re-organization undo/redo information is bounded by a constant. The space and storage complexity of storing adequate block undo/redo information is bounded because after enough confirmations (> 100) the undo data can be pruned, setting a large constant O(1) upper bound on storage and run time for this "re-organization" protection.
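A minimal sketch of that bounded undo buffer, assuming each block's shard mutations are captured as the outpoints it deleted (with their values) and the outpoints it added; the class and field names are illustrative only.

    from collections import deque
    from typing import Deque, Dict, List, Tuple

    Outpoint = Tuple[str, int]
    REORG_DEPTH = 100                                  # prune undo data beyond this many confirmations

    class UndoLog:
        """Per-block undo records so the UTXO shard can be rolled back on a re-org."""
        def __init__(self) -> None:
            self.records: Deque[Tuple[str, Dict[Outpoint, int], List[Outpoint]]] = deque()

        def record(self, block_hash: str, deleted: Dict[Outpoint, int],
                   added: List[Outpoint]) -> None:
            self.records.append((block_hash, deleted, added))
            if len(self.records) > REORG_DEPTH:        # constant upper bound on undo storage
                self.records.popleft()

        def rollback(self, utxo: Dict[Outpoint, int], block_hash: str) -> None:
            """Undo blocks from the tip back to (and including) block_hash."""
            while self.records:
                tip_hash, deleted, added = self.records.pop()
                for op in added:                       # remove outputs the block created
                    utxo.pop(op, None)
                utxo.update(deleted)                   # restore outputs the block spent
                if tip_hash == block_hash:
                    return
            raise ValueError("block not in undo window; re-index from a checkpoint")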

For a block of n transactions (assuming on average 2 inputs and 2 outputs), we can see that the number of UTXO Map lookups must always be at most O(n). However, when indexing special NFTs or smart contracts, it is not necessary to track coinbase txids (because those are not the NFTs we wish to track). The number of minting transactions of an NFT must be equal to or less than n transactions. Therefore, the number of Map lookups must be at most O(b), where b (with b < n) is the size of the history of assets under verification by the custodian or owner.

Therefore, the run-time complexity is O(b * log b), where b is the total number of unspent outputs (latest state) at height h for the assets being tracked. If a hash index is used (trading more space for better run time), then we can achieve O(b * 1) = O(b) run-time complexity instead.

The storage complexity is O(b * log b) using a B-tree, because by definition all unspent UTXOs carry some non-zero, non-negative value which represents future revenue for a miner AND value for the business itself. We will assume we are using a B-tree instead of a hash index, even though a hash index lowers the storage requirement to O(b).

Therefore:

Run-time complexity of Bitcoin Agent compared to a Bitcoin mining node:

O(n * log b) <= O(n * log w)

where n is the number of transactions in the given block, b is the total number of unspent UTXOs in the shard, and w is the size of the set of all global UTXOs. In practice b << w (only in the limit does the shard approach the entire global UTXO set), which means storage requirements grow only with the actual needs of the business, not with the total size of the Bitcoin economy.

And the storage complexity: O(b * log b) <= O(w * log w)

That completes the proof.

Corollary 1: Any smart contract, token, or state machine built on Bitcoin that does not leverage the base chain of digital signatures must use additional indexes and more storage space than the Bitcoin node itself.

Corollary 2: Because every on-chain listener maintains the exact same shard of the UTXO set, they each independently arrive at the same computation. They can trivially compare the UTXO set in any mining node or any other Bitcoin Agent (at least those that index the same subset of the shard) and inspect 100% bit-for-bit correctness of state. Therefore it is trivial to publish a UTXO mutation set commitment hash at every block and merely compare it with each peer to know whether one of the peers has failed due to corruption or system failure; they can then quickly inspect the mutation operation updating the spend and identify the problem at a specific point in time. The Bitcoin Agent operators can quickly and easily identify the root cause and fix it precisely at that point in time, with confidence, once their commitment hash matches everyone else's.

It is not strictly necessary to use these commitment hashes, because the state is guaranteed to be correct in exactly the same way as a Bitcoin node's global UTXO set. However, the hashes serve as a useful checksum: bit errors do occur, and being able to publicly see such an error and fix it is valuable to any single participant.
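A minimal sketch of one way such a per-block commitment hash could be computed, assuming the block's shard mutations are available as the sets of deleted and added outpoints; the hashing scheme and names are illustrative, not a defined standard.

    import hashlib
    from typing import Iterable, Tuple

    Outpoint = Tuple[str, int]

    def utxo_mutation_commitment(block_hash: str,
                                 deleted: Iterable[Outpoint],
                                 added: Iterable[Outpoint]) -> str:
        """Deterministic digest over this block's shard mutations. Peers indexing
        the same shard subset should produce identical digests for the same block."""
        h = hashlib.sha256()
        h.update(bytes.fromhex(block_hash))
        for tag, outpoints in (("D", sorted(deleted)), ("A", sorted(added))):
            for txid, index in outpoints:
                h.update(tag.encode())
                h.update(bytes.fromhex(txid))
                h.update(index.to_bytes(4, "little"))
        return h.hexdigest()

    # Peers exchange only the digest; a mismatch pinpoints the exact block to inspect.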

What "Bitcoin Agent" is no


Simply put, the "Bitcoin Agent" is not speci c code or tool that
everyone installs and uses together. It is a "framework" in the sense
fi
.

fi
.

fi
fi
fi
t

that anyone can easily build on on-chain agents in a 100-200 lines


of code in any language or database that merely follows block
headers and can obtain raw blocks for ltering down transaction
outputs of interest. Anyone can start now and immediately be able
to achieve consensus and have a way to verify when a mistake was
made in the computation. There are numerous Bitcoin blockchain
service providers that can quickly return the block headers and raw
blocks, which operates on zero-trust
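Purely as an illustration of that shape, here is a hypothetical skeleton of such an agent loop; every callable (header source, raw block fetch, parser, shard update, undo recording) is an operator-supplied placeholder, not any particular provider's API, and re-org handling via the undo log is omitted.

    import time

    def run_agent(get_block_headers, get_raw_block, parse_block,
                  update_shard, record_undo, poll_seconds=60):
        """Follow the header chain with the most proof-of-work and fold each new
        block's relevant outputs into the local UTXO shard."""
        best_height = -1
        while True:
            headers = get_block_headers()            # height-ordered header objects
            for height in range(best_height + 1, len(headers)):
                block_hash = headers[height].block_hash
                block = parse_block(get_raw_block(block_hash))
                deleted, added = update_shard(block) # returns this block's shard mutations
                record_undo(block_hash, deleted, added)
                best_height = height
            time.sleep(poll_seconds)                 # poll again before the next block arrives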

Sketch of SimpleAsset Non-Fungible Token (NFT)

We will briefly discuss a hypothetical Non-Fungible Token (NFT) that carries satoshi value that the creator fused into the token at the time of minting. The only way to spend this output is to create another output that carries the identical satoshi value and the original minting txid along with it. Using OP_PUSH_TX we can trivially impose this condition to guarantee that exactly one output carries the state forward (i.e., to the next or the same owner, in the case of a state update).

By enforcing this constraint we essentially enforce, at the script level, the deletion of the spent token from memory. Without using the satoshi token value, an off-chain database must mark the entry or pointer as deleted (i.e., another index is needed above and beyond the mere UTXO set). This is why it is crucial to use satoshi value to update token/computational state.
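A minimal, purely illustrative sketch of the transfer rule an agent would check off-chain for such a token; the on-chain enforcement via OP_PUSH_TX is out of scope here, and the field names are placeholders rather than a finalized SimpleAsset encoding.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class AssetOutput:
        satoshis: int            # fused value that must be carried forward unchanged
        minting_txid: str        # original minting txid embedded in the output's script data

    def find_state_output(prev: AssetOutput, outputs: List[AssetOutput]) -> Optional[int]:
        """Return the index of the single output that validly carries the token state
        forward, or None if the transfer violates the SimpleAsset rule."""
        candidates = [i for i, o in enumerate(outputs)
                      if o.satoshis == prev.satoshis and o.minting_txid == prev.minting_txid]
        return candidates[0] if len(candidates) == 1 else None   # exactly one carrier allowed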

When a user receives a token, they receive the entire chain of signatures back to the minting. The user's wallet verifies that the parent signatures all match, providing 100% certainty of authenticity. An NFT with 10,000 transfers or state updates is only 1MB in total size (off-chain data transfer). However, if the owner of an NFT loses their history, a service provider can perform the lookup quickly and return all 1MB worth of the 10,000 transfers, so the current owner has that information readily available from then on.

We will demonstrate the SimpleAsset token in functional code and examples in a future paper.

Features

• Users can permissionlessly mint any smart contract or NFT.
• Users can permissionlessly transfer and even "melt" tokens back to native cash satoshis.
• Users can choose how many satoshis to infuse into the NFT, giving it instant intrinsic value that must be passed along for the entire lifetime of the token before being melted back.
• Users, businesses, and Bitcoin miners themselves can track all mutation hash commitments trivially and show that all arrived at the same state. (Optional)

Performance

• Zero overhead indexing above the required native UTXO set.
• Capable of scaling to 1 GB blocks on a 60+ Mbps internet connection and using only 5TB/month, with zero storage overhead and zero extra indexes compared to the native UTXO set.
• A token can be transferred 100,000 times and the off-chain history is only 10MB; each transfer adds about 1kb of history to the title. Even a token with 1,000,000 transfers is still manageable at 100 MB.

Security Guarantees

• A seller will always provide the complete title history, so the buyer can inspect authenticity trustlessly.
• Even if a seller does not have the title history, they can simply call a blockchain service and pay a small fee to obtain it quickly, so that the new buyer then has it going forward (and can pass it on to the next owner in the future if they are so kind).
• The buyer can perform a single SPV check on the latest state to know with certainty that the entire history is valid.
• Full Byzantine fault tolerance guarantees that are afforded to satoshis, at scale, with zero indexing overhead and zero risk of being fooled with unauthentic assets. Every smart contract/NFT is enforced by the POW header chain and raw blocks (Layer zero), and therefore all state can be arrived at deterministically in the same way that a Bitcoin node itself does now.
How is this different from other L1, L2 solutions?

Any non-base-layer token will incur at least a 100% storage and execution overhead, because at minimum one extra fast KV store or index is required (due to the fact that the 'value' field is separate from the data). Anyone operating with satoshis as the token value will have a significant cost reduction over any competitor that does not. The operator using this new method will also have fast resolution of conflicts and be able to trivially prove to their customers that they computed the correct state, and users have 100% certainty of authenticity that is independently verifiable.

Additionally, these SimpleAsset NFTs only require one SPV check (of the UTXO "tip" only) at the point of inquiry to confirm ALL of the state back to the beginning, forever. This is not possible if the satoshis themselves are not used for carrying value, because the Bitcoin miner's UTXO Map algorithm for managing spends does not concern itself with anything except the chain of digital signatures in the satoshi inputs/outputs.

Preliminary Results

Our preliminary results show that a home desktop or laptop can easily process and index the UTXOs of the 1.3M transactions in block #635141 in about 60-90 seconds on just a single core of an Intel i7 at 2.6 GHz with a consumer-grade NVMe SSD storage device (non-RAID). A user only needs a 5TB/month internet data plan and a minimum connection speed of 20+ Mbps (over 100 Mbps is the average in the United States as of 2020) to be able to track the latest blockchain fast enough to keep up to date every 5-10 minutes (i.e., in less time than until the next confirmation). Since the user is only indexing their shard of the UTXO set (as described above), storage and indexing overhead is negligible and will grow strictly as a function of their own need for retention of that data.
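As a rough sanity check on those figures, a minimal back-of-the-envelope calculation (pure arithmetic on the numbers quoted above):

    # Bandwidth implied by 1 GB blocks every 10 minutes.
    block_gb = 1
    blocks_per_month = 6 * 24 * 30                 # one block every 10 minutes
    monthly_tb = block_gb * blocks_per_month / 1000
    print(monthly_tb)                              # ~4.3 TB/month, within a 5 TB/month plan

    seconds_per_block = 600
    sustained_mbps = block_gb * 8 * 1000 / seconds_per_block
    print(sustained_mbps)                          # ~13.3 Mbps sustained, so 20+ Mbps suffices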

We envision a world where "businesses that receive frequent payments will probably still want to run their own nodes for more independent security and quicker verification". Even if every block filled up to 1GB with 3M transactions each, forever, starting now, every business could still easily obtain the immutable block headers, download the latest block by block hash, and then set a single core of their laptop to sync each block within about 2-3 minutes. They would store the UTXO set and index only for their own wallets, business data, and transactions. This would live in some RDBMS or fast KV store for easy and efficient access. We built one internally, and this paper is the culmination of that research and development.

Even 10GB blocks (30 million transactions) every 10 minutes amount to only about 50TB of bandwidth a month, and 99.9999% of that data is not relevant to the business/user, so their index and storage size remain small and will scale easily using traditional web technologies. Parallelize the code across 10 CPU cores and 10GB blocks can be processed by a higher-end desktop computer today in under a minute or two.

The result will be that disparate, cooperative, and even competing agents can "telepathically" communicate arbitrary application state and be absolutely sure there was no forgery and no double spends, to the exact same degree that a mining node itself knows it. The computation's forward evolution is actually backed by proof-of-work, because it uses the native satoshi token history itself. SPV now also works as expected for application state, finally.

Conclusion

This paper introduced a novel service framework, "Bitcoin Agent", which acts as an "on-chain" agent operating at zero overhead compared to a Bitcoin mining node itself. This framework and approach of using the satoshi token itself as the "state transition mutex" that carries the value and computation forward now provides application developers the ability to achieve consensus in a fully reproducible, deterministic, and tamper-resistant manner.

Additionally, we introduced a new Bitcoin digital asset protocol, "SimpleAsset", a non-fungible token specification, for the purposes of demonstrating and analyzing the concept in greater detail.

Our internal preliminary results show that Bitcoin Agents can easily handle blocks of 1-10 GB in size on a standard desktop computer with a 20+ Mbps internet connection and storage adequate only for their own asset tracking. We have not even tested using more than 1 CPU core, as there is no difficulty in processing 3 million transactions in under 3 minutes. Bitcoin fundamentally scales because of the UTXO model, and it is trivial to parallelize to multi-core systems.

It was Satoshi's "vision" all along to have businesses and users running their own indexers and storing their own chains of digital signatures. It never really hits a scaling ceiling.