---
# katex
title: Blockdag Consensus
...
Not ready for publication. "stake" is currently used in a different sense on blockchains, and this describes
a system in which wallets are peers and peers are wallets, which is a pretty bad idea.
# The problem
For the reasons discussed, proof of work is incapable of handling a very large number of transactions per second. To replace fiat money, we need a consensus algorithm capable of a thousand times greater consensus bandwidth. There are plenty of consensus algorithms that can handle much higher consensus bandwidth, but they do not scale to large numbers of peers. They are usually implemented with a fixed number of peers, usually three peers, perhaps five, all of which have high reliability connections to each other in a single data centre.
In a decentralized open entry peer to peer network, you are apt to get a very large number of peers, which keep unpredictably appearing and disappearing and frequently have unreliable and slow connections.
Existing proof of share crypto currencies handle this by "staking", which is in practice centralization swept under the rug. They are not really decentralized peer to peer networks with open entry.
## The solution outlined
### A manageable number of peers forming consensus
In a decentralized peer to peer network it is impossible to avoid forks. Even Practical Byzantine Fault Tolerant Consensus and Raft, which have a vast, complex, and ingenious mathematical structure to prove that forks are impossible, and which run with a known and fixed very small number of peers all inside a single data centre with very good network connections, wound up "optimizing" the algorithm to furtively allow forks through the back door, to be resolved later, possibly much later, because getting a fork free answer could sometimes take a very long time. And though the network was theoretically always live, in that it would theoretically deliver a definitive result eventually as long as $f+1$ non faulty peers were still at it, “eventually” was sometimes longer than people were willing to wait.
Practical Byzantine Fault Tolerant Consensus is horrendously complicated and hard to understand, and becomes a lot more complicated when you allow forks through the back door. Byzantine fault tolerant Raft consensus is simpler, but in practice implementations wind up allowing back door forks anyway, despite incredibly clever academic ingenuity to produce a system that provably cannot fork. And once you have forks through the back door, Raft becomes at least as complicated and difficult to understand as Practical Byzantine Fault Tolerant Consensus, perhaps worse.
So your consensus mechanism must reduce forks, but cannot eliminate them, and therefore should not try. (There are a bunch of impossibility proofs, between which any proposed consensus mechanism must thread a very narrow path.) And when a fork happens, as it likely very often will, the mechanism has to resolve it by everyone eventually moving to the prong of the fork that has the most support, as happens in Bitcoin proof of work. Which is why Bitcoin proof of work scales to large numbers of peers.
In proof of work consensus, you slow down the production of forks by
making everyone wade through molasses, and you can tell how much
support a fork has by how fast it advances through the molasses, so
everyone moves to the branch of the fork that has made it furthest through
the molasses.
But this turns out to limit the consensus bandwidth to about ten
transactions per second, though if the lightning network gets running as it
should, this may well suffice for quite a bit longer. (Lightning needs fixes
at the time of writing.)
Making everyone wade through molasses is just a bad idea. And it has
produced a network that is alarmingly centralized and vulnerable to state
power. There are very few big miners.
But you don't want all the peers to report in on which fork they are on, because that is too much data, which is the problem with all the dag consensus systems. They all, like Practical Byzantine Fault Tolerant Consensus and Byzantine Fault Tolerant Raft, of which they are variants, rely on all the peers telling all the other peers what branch they are on. Proof of work produces a very efficient and compact form of evidence of how much support there is for a prong of a fork.
### Sampling the peers
So we have to sample the peers, or rather have each peer draw consensus from the same representative sample.
For each peer that could be on the network, including those that have been sleeping in a cold wallet for years, every peer keeps a running cumulative total of that peer's shares. With every new block, the peer's shares are added to its total.
On each block of the chain, a peer's rank is the bit position of the highest bit of the running total that rolled over when its shares were added for that block.
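A minimal sketch of that ranking rule, assuming 64 bit share counts and reading "rolled over" as the highest bit changed by the addition (all names here are hypothetical, not project code):

```cpp
#include <bit>      // std::bit_width, C++20
#include <cstdint>

struct PeerTally {
    uint64_t shares = 0;  // shares this peer currently represents
    uint64_t total  = 0;  // running cumulative total across blocks
};

// Add the peer's shares for the new block and return its rank: the bit
// position of the highest bit of the running total that changed.
// Assumes shares > 0, so at least one bit changes.
int advance_and_rank(PeerTally& p) {
    uint64_t old = p.total;
    p.total += p.shares;
    uint64_t changed = old ^ p.total;    // bits flipped by the addition
    return std::bit_width(changed) - 1;  // index of the highest flipped bit
}
```

The edit note below proposes an alternative in which the whole exclusive or, `changed`, is the weight, rather than $2$ raised to its highest bit position.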
*edit note*
Here I propose making the weight in any block $2^{rank}$, but perhaps a better rule is that the exclusive or of the previous and new values of the running total is the weight, which obviates the need for multiple peers to sign on to resolve draws.
And I also propose a running limit on the rank. A better solution is that in the event of a deep fork, where several blocks differ between the two branches, you prefer the branch that has the greatest median weight over all the blocks that differ, multiplied by the total weight, rather than the greatest total weight alone. If there are an even number of blocks, take the average of the two median weights. There is a limit on the number of blocks permitted since the alleged time of the last identical block, but a block with great block weight is allowed to be produced faster than a block with little block weight, so a higher weight branch can also have more total blocks.
This gives the same outcome: on average and over time, the total weight will reflect the total weight of peers online and actively participating, and the total weight of a branch of a deep fork will reflect the total weight of the peers on that branch, so that in the event of a long network bisection, the group that has the most peers is likely to win when the bisection is fixed.
*end edit note*
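A minimal sketch of the deep fork preference rule from the edit note above, taking as given the weights of the blocks that differ between the two branches (hypothetical names; weights assumed already computed):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Score a branch by the median weight of the differing blocks multiplied
// by the branch's total weight; with an even number of blocks, average
// the two middle weights. The branch with the higher score is preferred.
double branch_score(std::vector<uint64_t> weights) {
    if (weights.empty()) return 0.0;
    std::sort(weights.begin(), weights.end());
    size_t n = weights.size();
    double median = (n % 2 != 0)
        ? double(weights[n / 2])
        : (double(weights[n / 2 - 1]) + double(weights[n / 2])) / 2.0;
    double total = 0.0;
    for (uint64_t w : weights) total += double(w);
    return median * total;
}
```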
So if Bob has a third of the shares of Carol, and $R$ is a rank that corresponds to a bit position higher than the shares of either of them, then Bob gets to be rank $R$ or higher one third as often as Carol. But even if he has a very small shareholding, he gets to be high rank every now and then.
A small group of the highest ranking peers gets to decide on the next block, and the likelihood of being a high ranking peer depends on shares.
They produce the next block by unanimous agreement and joint signature. The group is small enough that this is likely to succeed, and if they do not, some other group will, the other group possibly being the first group with some members who had poor connectivity thrown out.
And then, if more than one such group produces a putative next block, possibly several small and unrepresentative groups of not very high rank, the consensus process that we will describe kicks in.
Each peer is going to use a consensus created by those few peers that are
high ranking at this block height of the blockchain, and since there are few
such peers, there will usually only be one such consensus.
Each peer gets to be rank $R+1$ half as often as he gets to be rank $R$, and he gets to be some rank higher than $R$ as often as he gets to be rank $R$.
The algorithm for producing the next block ensures that if the Hedera assumptions are true (that everyone of high rank knows what high ranking peers are online, and everyone always gets through), a single next block will be produced by the unanimous agreement of the highest ranked peers, but we don't rely on this always being true. Sometimes, probably often, there will be multiple next blocks, and entire sequences of blocks. In which case each peer has to rank and weight them, and choose the highest ranked and weighted block, or the highest weighted sequence of blocks.
A peer cannot tell the difference between network bisection and a whole lot
of peers being offline.
We have to keep producing a result when a whole lot of peers are offline, rather than just stop working as Practical Byzantine Fault Tolerant Consensus and its Raft equivalent do. Which means that in a long lasting network bisection, there is going to be a fork, which will have to be resolved when the network recovers.
[running average]:./running_average.html
"how to calculate the running average"
{target="_blank"}
The rank of a block is the rank of the lowest rank peer of the group forming the block, or one plus the [running average] of previous blocks rounded up to the nearest integer, whichever is less.
The weight of a block is $m*2^R$, where $m$ is the size of the group
forming the block and $R$ is the rank of the block.
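A minimal sketch of those two rules together (hypothetical names; the running average is assumed to be already rounded up):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// The block's rank is the lowest rank among the forming peers, capped
// at one plus the running average; its weight is m * 2^rank for a
// group of m peers.
uint64_t block_weight(const std::vector<int>& group_ranks,
                      int running_average /* rounded up */) {
    int rank = std::min(
        *std::min_element(group_ranks.begin(), group_ranks.end()),
        running_average + 1);
    return group_ranks.size() * (uint64_t{1} << rank);
}
```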
If the number and weight of online peers is stable, there will be two or more peers of higher rank than the running average as often as there are fewer than two peers of that rank or higher. So the next block will typically be formed by the consensus of two peers of the running average rank, or close to that.
If a peer is of sufficient rank to form a block of higher or equal rank to any existing block, and believes that there are enough peers online to form a block with higher rank, or the same rank and greater weight, than any existing block, it attempts to contact them and form such a block.
If the group forming a block contains two or more members capable of
forming a block with higher rank (because it also contains members of
lower rank than those members and lower than the running average plus
one), the group is invalid, and if it purportedly forms a block, the block is
invalid.
This rule is to prevent peers from conspiring to manufacture a fork with a falsely high weight under the rule for evaluating forks.
The intention is that in the common case, the highest ranked and weighted
block possible will be one that can and must be formed by very few peers,
so that most of the time, they likely succeed in forming the highest
possible weighted and ranked block. To the extent that the Hedera
assumptions hold true, they will succeed.
This does not describe how they form consensus. It describes how we get
the problem of forming consensus down to sufficiently few peers that it
becomes manageable, and how we deal with a group failing to form
consensus in a reasonable time. (Another group forms, possibly excluding
the worst behaved or least well connected members of the unsuccessful
group.)
Unlike Paxos and Raft, we don't need a consensus process for creating the next block that is guaranteed to succeed eventually, which is important because, if one of the peers has connection problems, or is deliberately trying to foul things up, "eventually" can take a very long time. Rather, should one group fail, another group will succeed.
Correctly behaving peers will wait longer the lower their rank before attempting to participate in block formation, will wait longer before participating in an attempt to form a low weighted block, and will not attempt to form a new block if a block of which they already have a copy would be higher rank or weight. (Because they would abandon it immediately in favour of a block already formed.)
To ensure that forks and network bisections are resolved, the timeout before attempting to form a new block increases exponentially with the difference between the proposed block's rank and the running average, and varies inverse linearly with the running average.
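A minimal sketch of such a back-off, with `base` as an assumed tuning constant (the exact shape of the curve is a design parameter, not specified here):

```cpp
#include <algorithm>
#include <chrono>
#include <cmath>

// Wait before attempting block formation: grows exponentially with how
// far the proposed block's rank falls below the running average, and
// shrinks in inverse linear proportion to the running average.
std::chrono::milliseconds formation_timeout(
        int proposed_rank, double running_average,
        std::chrono::milliseconds base) {
    double deficit = std::max(0.0, running_average - proposed_rank);
    double factor  = std::exp2(deficit) / std::max(1.0, running_average);
    return std::chrono::milliseconds(
        static_cast<long long>(base.count() * factor));
}
```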
High ranking peers try to form a new block first, and in the event of
network bisection, the smaller part of the bisection will on average have
fewer peers, therefore the highest ranking peers available will on average
be lower rank, so the smaller portion of the bisection will form new blocks
more slowly on average.
In the course of attempting to form a group, peers will find out what other high ranking peers are online. So, if the Hedera assumption (that everyone always gets through eventually and that everyone knows who is online) is true, or true enough, there will only be one block formed, and it will be formed by the set of peers that can form the highest ranking block. Of course, the Hedera assumption is not always going to be true.
Suppose that two successor blocks of equal weight are produced: Well,
then we have enough high ranking peers online and active to produce a
higher weighted block, and some of them should do so. And if they fail to
do so, then we take the hash of the public keys of the peers forming the
block, their ranks, and the block height of the preceding block, and the
largest hash wins.
When a peer signs a proposed block, he attaches a sequence number to his signature. If a peer encounters two inconsistent blocks signed by the same peer (a peer can propose as many blocks as it likes), he should discount the signature with the lower sequence number, lowering the weight of the block carrying it. If the two signatures have the same sequence number, he discounts both signatures, lowering the weight of both blocks.
That is how we resolve two proposed successor blocks of the same blockchain.
Fork production is limited, because there are not that many high ranking peers, and because low ranking peers hold back to let higher ranking peers take care of block formation.
But we are going to get forks anyway. Not often, but sometimes. I lack
confidence in the Hedera assumptions.
What do we do if we have a fork, and several blocks have been created on
one or both prongs of the fork?
Then we calculate the weight of the prong:
$$\sum_i 2^{R_i}$$
where $i$ ranges over the peers that have formed blocks on that prong, and $R_i$ is the rank of that peer's signature with the highest sequence number on either prong.
If he has signed a block in one prong, and a block in the other prong, only the signature with the highest sequence number counts. If two signatures have equal sequence numbers, or if the difference in sequence numbers is out of range, or if a signature has an out of order sequence number within a chain of blocks following the fork, neither signature counts. A correctly behaving peer should assign sequence numbers sequentially, but nothing enforces this, and the system needs to produce a sensible result even if some peers, maliciously or through failure, do not generate sequential signature sequence numbers.
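A minimal sketch of that weighting, under the simplifying assumption that out of range and out of order sequence numbers have already been filtered out (names are hypothetical):

```cpp
#include <cstdint>
#include <map>
#include <vector>

struct Sig { int rank; int seq; int prong; };  // one signature by one peer

// Weight of each prong: each peer contributes 2^rank once, for its
// signature with the highest sequence number across both prongs; equal
// highest sequence numbers void that peer's contribution entirely.
std::map<int, uint64_t> prong_weights(
        const std::map<int, std::vector<Sig>>& sigs_by_peer) {
    std::map<int, uint64_t> weight;
    for (const auto& [peer, sigs] : sigs_by_peer) {
        const Sig* best = nullptr;
        bool tie = false;
        for (const Sig& s : sigs) {
            if (!best || s.seq > best->seq) { best = &s; tie = false; }
            else if (s.seq == best->seq)    tie = true;  // equal: void both
        }
        if (best && !tie)
            weight[best->prong] += uint64_t{1} << best->rank;
    }
    return weight;
}
```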
This weight will, in the event of a fork, on average reflect the total shares of peers on that fork.
If two prongs have the same weight, take the prong with the most transactions. If they have the same weight and the same number of transactions, hash all the public keys of the signatories that formed the
blocks, their ranks, and the block height of the root of the fork and take
the prong with the largest hash.
This value, the weight of a prong of the fork, will over time, for large deep forks, approximate the shares of the peers online on that prong, without the painful cost of taking a poll of all peers online, and without the considerable risk that such a poll would be jammed by hostile parties.
Each correctly behaving peer will circulate the highest weighted prong of the fork of which it is aware, ignore all lower weighted prongs, and only attempt to create successor blocks for the highest weighted prong.
To add a block to the chain requires a certain minimum time, and the lower the rank of the group forming that block, the longer that time. If a peer sees a new block that appears to it to be unreasonably and improperly early, or a fork prong with more blocks than it could have, it will ignore it.
Every correctly behaving peer will copy, circulate, and attempt to build on the highest weighted chain of blocks of which he is aware and will ignore all others.
This produces the same effect as wading through molasses, without the
heavy wading, because the chain with the most numerous and highest
ranking peers signing its blocks obviously has more support, just as in
Bitcoin, more wading through molasses indicates more support.
# Hedera, Bitcoin Proof of Work, and Paxos
## Paxos
All consensus algorithms that work are equivalent to Paxos.
All consensus algorithms that continue to work despite Byzantine Fault
and Brigading are equivalent to Byzantine Fault Tolerant Paxos.
But Paxos is not in fact an algorithm. It is rather an idea that underlies actual useful algorithms, and in so far as it is described as an algorithm, the description is wrong, for the algorithm as described covers many different things that you are unlikely to be interested in doing, or even comprehending, and is incapable of doing all sorts of things that you are likely to need done. Even worse, it is totally specific to one particular common use case, which it studiously avoids mentioning, and does not mention any of the things that you actually need to couple it into this specific case, making the description utterly mysterious, because the writer has all the specific details of this common case in mind, but is carefully avoiding any mention of what he has in mind. These things are out of scope of the algorithm as given in the interests of maximum generality, but the algorithm as given is not in fact very general, and makes no sense and is no use without them.
The algorithm, and algorithms like it and closely related to it do not scale
to large numbers of peers, but understanding how and why it works is
useful to understanding how to create an algorithm that will work.
Despite the studious effort to be as generic as possible by omitting all of the details required to make it actually do anything useful, the algorithm as given is the simplest and most minimal example of the concept, implementing one specific form of Paxos in one specific way, and as given, will very likely not accomplish what you need to do.
Paxos assumes that each peer knows exactly how many peers there should
be, though some of them may be permanently or temporarily unresponsive
or permanently or temporarily out of contact.
In Paxos, every peer repeatedly sends messages to every other peer, and
every peer keeps track of those messages, which if you have a lot of peers
adds up to a lot of overhead.
Hedera assumes that each peer knows exactly how many peers there
should be, *and that each peer eventually gets through*.
Which is a much stronger assumption than that made by Paxos or Bitcoin.
In Hedera, each peer's state eventually becomes known to every other
peer, even though it does not necessarily communicate directly with every
other peer, which if you have a whole lot of peers still adds up to a whole
lot of overhead, though not as much as Paxos. It can handle more peers than Paxos, but with too many peers, the overhead is still going to bite.
A blockdag algorithm such as Hedera functions by in effect forking all the
time, and resolving those forks very fast, but if you have almost as many
forks as you have peers, resolving all those forks is still going to require
receiving a great deal of data, processing a great deal of data, and sending
a great deal of data.
Hedera and Paxos can handle a whole lot of transactions very fast, but
they cannot reach consensus among a very large number of peers in a
reasonable time.
Bitcoin does not know or care how many peers there are, though it does
know and care roughly how much hashing power there is, but this is
roughly guesstimated over time, over a long time, over a very long time,
over a very very long time. It does not need to know exactly how much
hashing power there is at any one time.
If there are a very large number of peers, this only slows Bitcoin consensus time down logarithmically, not linearly, while the amount of data per round that any one peer has to handle under Hedera is roughly $O\big(N\log(N)\big)$, where $N$ is the number of peers. Bitcoin can handle an
astronomically large number of peers, unlike Hedera and Paxos, because
Bitcoin does not attempt to produce a definitive, known and well defined
consensus. It just provides a plausible guess of the current consensus, and
over time you get exponentially greater certainty about the long past
consensuses. No peer ever knows the current consensus for sure, it just
operates on the recent best guess of its immediate neighbours in the
network of what the recent consensus likely is. If it is wrong, it eventually
finds out.
## Equivalence of Proof of Work and Paxos
Bitcoin is of course equivalent to Byzantine Fault Tolerant Paxos, but I
compare it to Paxos because Paxos is difficult to understand, and Byzantine
Fault Tolerant Paxos is nigh incomprehensible.
In Paxos, before a peer suggests a value to its peers, it must obtain
permission from a majority of peers for that suggestion. And when it seeks
permission from each peer, it learns if a value has already been accepted
by that peer. If so, it has to accept that value, only propose that value in
future, and never propose a different value. Which if everyone always gets
through, means that the first time someone proposes a value, that value,
being the first his peers have seen, will be accepted by someone, if only by
that peer himself.
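To make the permission step concrete, here is a minimal sketch of a single decree, non Byzantine Paxos acceptor; the names and the choice of `int` values are hypothetical, and a real proposer must adopt the highest ballot accepted value reported back by a majority:

```cpp
#include <cstdint>
#include <optional>
#include <utility>

struct PrepareReply {
    bool ok;  // permission granted?
    std::optional<std::pair<uint64_t, int>> accepted;  // (ballot, value), if any
};

struct Acceptor {
    uint64_t promised = 0;  // highest ballot this acceptor has promised
    std::optional<std::pair<uint64_t, int>> accepted;  // value already accepted

    // Phase 1: a proposer asks permission with ballot b. If granted, the
    // reply reports any value already accepted, which the proposer must
    // then adopt in place of its own, and never propose another.
    PrepareReply prepare(uint64_t b) {
        if (b <= promised) return {false, {}};
        promised = b;
        return {true, accepted};
    }

    // Phase 2: accept the value unless a higher ballot was promised since.
    bool accept(uint64_t b, int value) {
        if (b < promised) return false;
        promised = b;
        accepted = std::make_pair(b, value);
        return true;
    }
};
```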
Paxos is in effect a method for figuring out who was "first", in an environment where, due to network delays and lost packets, it is difficult to figure out, or even define, who was first. But if most packets mostly get through quickly enough, the peer that was first by clock time will usually get his way. Similarly in Bitcoin, the first miner to construct a valid block at block height $N$ usually winds up defining the consensus for the block at block height $N$.
This permission functionality of Paxos is equivalent to the gossip process
in Bitcoin, where a peer learns what the current block height is, and seeks
to add another block, rather than attempt to replace an existing block.
In Paxos, once one peer accepts one value, it will eventually become the
consensus value, assuming that everyone eventually gets through and that
the usual network problems do not foul things up. Thus Paxos can provide
a definitive result eventually, while the results of Bitcoin type consensus are never definitive, merely exponentially probable.
In Paxos, a peer learns of the definitive and final consensus when it
discovers that a majority of peers have accepted one value. Which if
several values are in play can take a while, but eventually it is going to
happen. In Bitcoin, when the blockchain forks, eventually more hashing
power piles on one branch of the fork than the other, and eventually
everyone can see that more hashing power has piled on one fork than the
other, but there is no moment when a peer discovers that one branch is
definitive and final. It just finds that one branch is becoming more and
more likely, and all the other branches less and less likely.
Thus Paxos type algorithms (such as Paxos, Raft, and their Byzantine fault tolerant equivalents) have a stronger liveness property than Bitcoin type algorithms, but this difference is in practice not important, for Paxos may take an indefinitely long time before it can report a definite and final consensus, while a Bitcoin like consensus algorithm takes a fairly definite time to report that it is nearly certain about the consensus value, and that that value is unlikely to change.
Further, in actual practice, particularly during a view change, Paxos and Raft are frequently in a state where a peer knows that one view is overwhelmingly likely to become final, and another view highly unlikely, but the state takes a while to finalize. And the client is waiting. So it winds up reporting a provisional result to the client.
Thus in practice, the liveness condition of Paxos type algorithms, which eventually provide a definitive proof that consensus has been achieved, is not usefully stronger than the weaker liveness condition of consensus algorithms that merely provide an ever growing likelihood that consensus has been achieved, and that, instead of telling the client that consensus has been achieved or not, merely tell the client how strong the evidence is that consensus has been achieved.
And in practice, Byzantine Fault Tolerant Paxos like algorithms tend to be
"optimized" by behaving like bitcoin type algorithms, making their
implementation considerably more complicated.
Crypto currency consensus algorithms derived from Paxos equivalent
consensus fail to scale, because they actually have to find out that a
majority of peers are on the same consensus. And there could be a lot of
peers, so this is a lot of data, and it is quite easy to maliciously attack the
network in ways to make this data difficult to obtain, if someone found it
convenient to stall a transaction from going through. "The cheque is in the mail."
# Bitcoin does not scale to competing with fiat currency
Bitcoin is limited to ten transactions per second. Credit card networks
handle about ten thousand transactions per second.
We will need a crypto coin that enables seven billion people to buy a lollipop.
Blockdag consensus can achieve sufficient speed.
There are thirty or more proposed blockdag systems, and the number grows rapidly.
While blockdags can handle very large numbers of transactions, it is not
obvious to me that any of the existing blockdag algorithms can handle
very large numbers of peers. When actually implemented, they always
wind up privileging a small number of special peers, resulting in hidden
centralization, as somehow these special and privileged peers all seem to
be in the same data centre as the organization operating the blockchain.
Cardano has a very clever, too clever by half, algorithm to generate
random numbers known to everyone and unpredictable and uncontrollable
by anyone, with which to distribute specialness fairly and uniformly over
time, but this algorithm runs in one centre, rather than using speed of light
delay based fair randomness algorithms, which makes me wonder if it is
distributing specialness fairly, or operating at all.
I have become inclined to believe that there is no way around making
some peers special, but we need to distribute the specialness fairly and
uniformly, so that every peer gets his turn being special at a certain block
height, with the proportion of block heights at which he is special being
proportional to his shares.
If the number of peers that have a special role in forming the next block is very small, and the selection and organization of those peers is not furtively centralized to make sure that only one such group forms, but rather organized directly by those special peers themselves, we wind up with forks sometimes, I hope infrequently, because the special peers should most of the time successfully self organize into a single group that contains almost all of the most special peers. If, however, we have another, somewhat larger group of peers that have a special role in deciding which branch of the fork is the most popular, a two phase blockdag, I think we can preserve blockdag speed without blockdag de facto concentration of power.
The algorithm will only have bitcoin liveness, rather than paxos liveness,
which is the liveness most blockdag algorithms seek to achieve.
I will have to test this empirically, because it is hard to predict, or even to
comprehend, limits on consensus bandwidth.
## Bitcoin is limited by its consensus bandwidth
Not by its network bandwidth.
Bitcoin makes the miners wade through molasses. Very thick molasses.
That is what proof of work is. If there is a fork, it discovers consensus by
noticing which fork has made the most progress through the molasses.
This takes a while. And if there are more forks, it takes longer. To slow
down the rate of forks, it makes the molasses thicker. If the molasses is
thicker, this slows down fork formation more than it slows down the
resolution of forks. It needs to keep the rate of new blocks down slow
enough that a miner usually discovers the most recent block before it
attempts to add a new block. And if a miner does add a new block at
roughly the same time as another miner adds a new block, quite a few
more blocks have to be added before the fork is resolved. And as the
blocks get bigger, it takes longer for them to circulate. So bigger blocks
need thicker molasses. If forks form faster than they can be resolved, no
consensus.
## The network bandwidth limit
The net bandwidth limit on adding transactions is not a problem.
What bites every blockchain is consensus bandwidth limit, how fast all the
peers can agree on the total order of transactions, when transactions are
coming in fast.
Suppose a typical transaction consists of two input coins, a change output coin, and the actual payment. (I use the term coin to refer to transaction inputs and outputs, although they don't come in any fixed denominations except as part of anti tracking measures.)
Each output coin consists of a payment amount, around sixty four bits, and a public key, two hundred and fifty six bits. It also has a script reference for any special conditions as to what constitutes a valid spend, which might have a lot of long arguments, but generally will not, so the script reference will normally be one byte.
The input coins can be a hash reference to a coin in the consensus blockchain, two hundred and fifty six bits, or they can be a reference by total order within the blockchain, sixty four bits.
We can use a Schnorr group signature, which is five hundred and twelve
bits no matter how many coins are being signed, no matter how many
people are signing, and no matter if it is an n of m signature.
So a typical transaction, assuming we have a good compact representation
of transactions, should be around 1680 bits, maybe less.
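A rough tally under those assumptions (two hash referenced inputs, two outputs each carrying an amount, a public key, and a one byte script reference, plus one Schnorr group signature):

$$2 \times 256 + 2 \times (64 + 256 + 8) + 512 = 1680 \text{ bits}$$

With sixty four bit total order references for the two inputs instead of hashes, the tally drops to $1296$ bits, which is the "maybe less".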
At scale you inevitably have a large number of clients and a small number
of full peers. Say several hundred peers, a few billion clients, most of them
lightning gateways. So we can assume every peer has a good connection.
A typical, moderately good, home connection is thirty Mbps download, but its upload is only ten Mbps or so.
So if our peers are typical decent home connections, and they will be a lot
better than that, bandwidth limits them to adding transactions at 10Mbps,
six thousand transactions per second, Visa card magnitude. Though if such
a large number of transactions are coming in so fast, blockchain storage
requirements will be very large, around 24 TiB, about three or four
standard home desktop system disk drives. But by the time we get to that scale, all peers will be expensive dedicated systems, rather than a background process using its owner's spare storage and spare bandwidth, running on the same desktop that its owner uses to shop at Amazon.
Which if everyone in the world is buying their lollipops on the blockchain
will still need most people using the lightning network layer, rather than
the blockchain layer, but everyone will still routinely access the blockchain
layer directly, thus ensuring that problems with their lightning
gateways are resolved by a peer they can choose, rather than resolved by
their lightning network wallet provider, thus ensuring that we can have a
truly decentralized lightning network.
We will not necessarily *get* a truly decentralized lightning layer, but a base
layer capable of handling a lot of transactions makes it physically possible.
So if bandwidth is not a problem, why is bitcoin so slow?
The bottleneck in bitcoin is that to avoid too many forks, which waste time
with fork resolution, you need a fair bit of consensus on the previous block
before you form the next block.
And bitcoin consensus is slow, because the way a fork is resolved is that miners that received one branch of the fork first continue to work on that branch, while miners that received the other branch first continue to work on that branch, until one branch gets ahead of the other, whereupon the leading branch spreads rapidly through the peers. With proof of share, that is not going to work: one can lengthen a branch as fast as one pleases. Instead, each branch has to be accompanied by evidence of the weight of shares of peers on that branch. Which means the winning branch can start spreading immediately.
# Blockdag to the rescue
On a blockdag, you don't need a fair bit of consensus on the previous
block to avoid too many forks forming. Every peer is continually forming
his own fork, and these forks reach consensus about their left great grand
child, or left great great … great grandchild. The blocks that eventually
become the consensus as leftmost blocks form a blockchain. So we can
roll right ahead, and groups of blocks that deviate from the consensus,
which is all of them but one, eventually get included, but later in the total
order than they initially thought they were.
In a blockdag, each block has several children, instead of just one. Total
order starting from any one block is depth first search. The left blocks
come before the right blocks, and the child blocks come before the parent
block. Each block may be referenced by several different parent blocks, but
only the first reference in the total order matters.
Each leftmost block defines the total order of all previous blocks, the
total order being the dag in depth first order.
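A minimal sketch of that total order, assuming each block knows its child references in left to right order (names are hypothetical):

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

struct Block {
    uint64_t id;
    std::vector<const Block*> children;  // leftmost child first
};

// Depth first total order from a leftmost block: left blocks before
// right blocks, children before the parent, and only the first
// reference to a block in the total order counts.
void total_order(const Block* b,
                 std::unordered_set<uint64_t>& seen,
                 std::vector<const Block*>& out) {
    if (!seen.insert(b->id).second) return;  // later references ignored
    for (const Block* child : b->children)   // left to right
        total_order(child, seen, out);
    out.push_back(b);                        // children before parent
}
```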
Each peer disagrees with all the other peers about the total order of recent
blocks and recent transactions, each is its own fork, but they all agree
about the total order of older blocks and older transactions.
## Previous work
[There are umpteen proposals for blockdags](./SoK_Diving_into_DAG-based_Blockchain_Systems), most of them garbage, but the general principle is sound.
For a bunch of algorithms that plausibly claim to approach the upload
limit, see:
* [Scalable and probabilistic leaderless bft consensus through metastability](https://files.avalabs.org/papers/consensus.pdf)
This explains the underlying concept: a peer looks at the dag, makes its best guess as to which way consensus is going, and joins the seeming consensus, which makes it more likely to become the actual consensus.
Which is a good way of making arbitrary choices where it does not
matter which choice everyone makes, provided that they all make
the same choice, even though it is an utterly disastrous way of
making choices where the choice matters.
This uses an algorithm that rewards fast mixing peers by making
their blocks appear earlier in the total order. This algorithm does
not look incentive compatible to me. It looks to me that if all the
peers are using that algorithm, then any one peer has an incentive
to use a slightly different algorithm.
The authors use the term Byzantine fault incorrectly, referring to
behavior that suggests the unpredictable failures of an unreliable
data network as Byzantine failure. No, a Byzantine fault suggests
Byzantine defection, treachery, and failure to follow process. It is
named after Byzantium because of the stuff that happened during
the decline of the Byzantine empire.
* [Prism: Deconstructing the blockchain to approach physical limits](https://arxiv.org/pdf/1810.08092.pdf)
A messy, unclear, and overly complicated proposed implementation
of the blockdag algorithm, which, however, makes the important
point that it can go mighty fast, that the physical limits on
consensus are bandwidth, storage, and communication delay, and
that we can approach these limits.
* [Blockmania: from block dags to consensus](https://arxiv.org/pdf/1809.01620.pdf)
This brings the important concept, that the tree structure created by
gossiping the blockdag around _is_ the blockdag, and also is the data
you need to create consensus, bringing together things that were
separate in Prism, radically simplifying what is complicated in
Prism by uniting data and functionality that Prism divided.
This study shows that the Blockmania implementation of the
blockdag is equivalent to the Practical Byzantine Fault Tolerant
consensus algorithm, only a great deal faster, more efficient, and
considerably easier to understand.
The Practical Byzantine Fault Tolerant consensus algorithm is an
implementation of the Paxos protocol in the presence of Byzantine
faults, and the Paxos protocol is already hard enough to understand.
So anyone who wants to implement consensus in a system where
Byzantine failure and Byzantine defection is possible should forget
about Paxos, and study blockdags.
* [A highly scalable, decentralized dagbased consensus algorithm](https://eprint.iacr.org/2018/1112.pdf)
Another blockdag algorithm, but one whose performance has been tested. It can handle high bandwidth, lots of transactions, and achieves fast Byzantine fault resistant total order consensus in time $O(6\lambda)$, where $\lambda$ is the upper bound of the network's gossip period.
* [Blockchain-free cryptocurrencies: A framework for truly decentralised fast transactions](https://eprint.iacr.org/2016/871.pdf)
These transactions are indeed truly decentralized, fast, and free from
blocks, assuming all participants download the entire set of
transactions all the time.
The problem with this algorithm is that when the blockchain grows enormous, most participants will become clients, and only a few giant peers will keep the whole transaction set, and this system, because it does not provide a total order of all transactions, will then place all the power in the hands of the peers.
We would like the clients to have control of their private keys, thus they must publish their public keys with the money they spend, in which case the giant peers must exchange blocks of information containing those keys, and it is back to having blocks.
The defect of this proposal is that it converges not to a total order of all past transactions, but merely to a total set of all past transactions. Since the graph is a graph of transactions, not blocks, double spends are simply excluded, so a total order is not needed. While you can get by with a total set, a total order enables you to do many things a total set does not let you do, such as publish two conflicting transactions and resolve them.
Total order can represent consensus decisions that total set cannot
easily represent, perhaps cannot represent at all. We need a
blockdag algorithm that gives us consensus on the total order of
blocks, not just the set of blocks.
In a total order, you do not just converge to the same set, you converge to the same order of the set. Having the same total order of the set makes it, among other things, a great deal easier and faster to check that you have the same set. Plus your set can contain double spends, which you are going to need if the clients themselves can commit transactions through the peers, if the clients themselves hold the secret keys and do not need to trust the peers.
# Calculating the shares represented by a peer
We intend that peers will hold no valuable or lasting secrets, that all the
value and the power will be in client wallets, and the client wallets with
most of the value, who should have most of the power, will seldom be online.
I propose proof of share. The shares of a peer are not the shares it owns, but
the shares that it has injected into the blockchain on behalf of its clients
and that its clients have not spent yet, or shares that some client wallet
somewhere has chosen to be represented by that peer. Likely only the
whales will make a deliberate and conscious decision to have their shares
represented by a peer, and it will be a peer that they likely control, or that
someone they have some relationship with controls, but not necessarily a
peer that they use for transactions.
Each peer pays on behalf of its clients for the amount of space it takes up on the blockchain, though it does not pay in each block. It makes an advance payment that will cover many transactions in many blocks. The money disappears: built in deflation, instead of built in inflation. Each block is a record of what each peer has injected into it.
The system does not pay the peers for generating a total order of
transactions. Clients pay peers for injecting transactions. We want the
power to be in the hands of people who own the money, thus governance will
have a built in bias towards appreciation and deflation, rather than
inflation.
# Cost of storage on the blockchain.
Tardigrade charges $120 per year per terabyte of storage, and $45 per terabyte of download.
We have a pile of theory, though no practical experience, that a blockdag can approach the physical limits, and that its limits are going to be bandwidth and storage.
Storage on the blockdag is going to cost more, because it is massively replicated, so say three hundred times as much; and it is going to be optimized for tiny fragments of data while Tardigrade is optimized for enormous blocks of data, so say three times as much on top of that. A thousand times as expensive to store should be in the right ballpark. Maybe ten thousand.
When you download, you are downloading from only a single peer on the blockdag, but you are downloading tiny fragments dispersed over a large pile of data, so again, a thousand times as expensive to download sounds like it might be in the right ballpark.
Then storing a chain of keys and the accompanying roots of total state,
with one new key per day for ten years will cost about two dollars over ten
years.
Ten megabytes is a pretty big pile of human readable documentation. Let us suppose you want to store ten megabytes of human readable data, and read and write access costs a thousand times what Tardigrade costs: that will cost about twelve dollars.
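A sanity check of that figure, assuming it is dominated by storage over the same ten years as above, at a thousand times Tardigrade's \$120 per terabyte year:

$$10\ \text{MB} \times \frac{\$120 \times 1000}{10^6\ \text{MB} \cdot \text{year}} \times 10\ \text{years} = \$12$$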
So, we should consider the blockdag as an immutable store of arbitrary
typed data, a total order broadcast channel, where some types are executable,
and, when executed, cause a change in mutable total state, typically that
a new unspent coin record is added, and an old unspent coin record is
deleted.
A thousand times as expensive turns out to be quite economical.