So your consensus mechanism must reduce forks, but it cannot eliminate them and therefore should not try. (There are a bunch of impossibility proofs, between which any proposed consensus mechanism must thread a very narrow path.) When a fork happens, as it likely very often will, the mechanism has to resolve it by everyone moving to the branch of the fork that has the most support.
In proof of work consensus, you slow down the production of forks by
making everyone wade through molasses, and you can tell how much
support a fork has by how fast it advances through the molasses, so
everyone moves to the branch of the fork that has made it furthest through
the molasses.
But this turns out to limit the consensus bandwidth to about ten
transactions per second, though if the lightning network gets running as it
should, this may well suffice for quite a bit longer. (Lightning needs fixes
at the time of writing.)
Making everyone wade through molasses is just a bad idea. And it has
produced a network that is alarmingly centralized and vulnerable to state
power. There are very few big miners.
But you don’t want all the peers to report in on which fork they are on, because that is too much data, which is the problem with all the dag consensus systems. Proof of work produces a very efficient and compact form of evidence of support for a fork.
## The solution, in overly simple outline form
For each peer that could be on the network, including those that have been sleeping in a cold wallet for years, every peer keeps a running cumulative total of that peer’s stake. With every new block, the peer’s stake is added to its running total for that block.
On each block of the chain, a peer’s rank is the bit position of the highest
bit of the running total that rolled over, up to a maximum.
So if Bob has a third of the stake of Carol, and $N$ is a rank corresponding to a bit position higher than the stake of either of them, then Bob gets to be rank $N$ or higher one third as often as Carol. But even if his stake is very low, he gets to be high rank every now and then.
Each peer gets to be rank $N+1$ half as often as he gets to be rank $N$, and he gets to be a rank higher than $N$ as often as he gets to be rank $N$.
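Here is a minimal sketch of that rank rule, with the cap on rank and the exact rollover test as assumptions rather than settled choices:

```python
def rank(old_total: int, stake: int, max_rank: int) -> int:
    """Rank of a peer on a block: the position of the highest bit of its
    running stake total that rolled over when this block's stake was
    added, capped at max_rank.  A bit position b rolls over when the
    running total crosses a multiple of 2**b."""
    new_total = old_total + stake
    r = 0
    while r < max_rank and (new_total >> (r + 1)) > (old_total >> (r + 1)):
        r += 1
    return r
```

With this rule, a peer with a third of Carol’s stake crosses a multiple of $2^N$ about a third as often, which is the proportionality claimed above.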
Any group of two or more peers can propose a next block, and if many groups do so, we have a fork. If a group of $m$ peers proposes a block, and the lowest rank of any member of the group is $N$, then the weight of the block is $2^N\ln(8m+9)$.

Every correctly behaving peer will copy, circulate, and attempt to build on the highest weighted chain of blocks of which he is aware, and will ignore all others. Correctly behaving peers will wait longer before attempting to participate in block formation the lower their rank, will wait longer before participating in an attempt to form a low weighted block, and will not attempt to form a new block at all if a block of which they already have a copy would be of higher weight, because they would abandon their own block immediately in favour of the one already formed.
This produces the same effect as wading through molasses, without the heavy wading, because the chain with the highest ranking peers signing its blocks obviously has more support.
Fork production is limited, because there are not that many high ranking peers, because low ranking peers hold back for higher ranking peers to take care of block formation, and because forks are resolved in favour of the block of highest weight.
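A sketch of the weight rule and the resulting chain choice follows; treating the weight of a chain as the sum of its block weights is an assumption, since whether weight accumulates over the whole chain or only the latest block counts is not pinned down above:

```python
import math

def block_weight(signer_ranks: list[int]) -> float:
    """Weight of a block proposed by a group of m peers whose lowest
    rank is N: 2**N * ln(8*m + 9)."""
    m = len(signer_ranks)
    assert m >= 2, "a block needs a group of two or more peers"
    n = min(signer_ranks)
    return 2 ** n * math.log(8 * m + 9)

def preferred_chain(chains: list[list[list[int]]]) -> list[list[int]]:
    """Each chain is given here simply as a list of blocks, each block as
    the list of its signers' ranks.  A correctly behaving peer copies,
    circulates, and builds on the heaviest chain it knows of."""
    return max(chains, key=lambda chain: sum(block_weight(b) for b in chain))
```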
In the course of attempting to form a group, peers will find out what other high ranking peers are online, so, if we make the Hedera assumption that everyone always gets through eventually and that everyone knows who is online, there will only be one block formed, and it will be formed by the set of peers that can form the highest ranking block. Of course, that assumption I seriously doubt.
Suppose that two blocks of equal weight are produced. Well, then, we obviously have enough high ranking peers online and active to produce a higher weighted block, and some of them should do so, and if they don’t, chances are that the next block built on one block will have higher weight than the next block built on the other block.
When a peer signs a proposed block, he will attach a sequence number to his signature. If a peer encounters two blocks at the same chain position, a fork, both signed by the same peer (a peer can propose as many blocks as it likes), he should discount the signature with the lower sequence number, lowering the weight of the block carrying it. If the two signatures have the same sequence number, he discounts both signatures, lowering the weight of both blocks.
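A sketch of that discounting rule; the signature record layout is an assumption, and exactly how the discount feeds back into the weight is left open above, so recomputing the weight from the surviving signatures is just one reasonable reading:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Signature:
    peer: str   # identity of the signing peer
    rank: int   # the peer's rank at this chain position
    seq: int    # sequence number attached to the signature

def discounted(a: list[Signature], b: list[Signature]) -> tuple[set[Signature], set[Signature]]:
    """Given two competing blocks at the same chain position, return the
    signatures of each block that a correct peer should discount:
    the lower sequence number loses, and a tie discounts both."""
    discount_a, discount_b = set(), set()
    by_peer_b = {s.peer: s for s in b}
    for sa in a:
        sb = by_peer_b.get(sa.peer)
        if sb is None:
            continue
        if sa.seq < sb.seq:
            discount_a.add(sa)
        elif sb.seq < sa.seq:
            discount_b.add(sb)
        else:
            discount_a.add(sa)
            discount_b.add(sb)
    return discount_a, discount_b
```

The weight of each block would then be recomputed from its undiscounted signatures.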
Here is another blockdag algorithm, but one whose performance has been tested. It can handle high bandwidth and lots of transactions, and achieves fast Byzantine fault resistant total order consensus in time $O(6\lambda)$, where $\lambda$ is the upper bound of the network’s gossip period.
* [Blockchain-free cryptocurrencies: A framework for truly decentralised fast transactions](https://eprint.iacr.org/2016/871.pdf)
These transactions are indeed truly decentralized, fast, and free from
blocks, assuming all participants download the entire set of
transactions all the time.
The problem with this algorithm is that when the blockchain grows enormous, most participants will become clients, and only a few giant peers will keep the whole transaction set, and this system, because it does not provide a total order of all transactions, will then place all the power in the hands of the peers.
We would like the clients to have control of their private keys, so they must publish their public keys with the money they spend, in which case the giant peers must exchange blocks of information containing those keys, and it is back to having blocks.
The defect of this proposal is that it does not converge to a total order on all past transactions, but merely to a total set of all past transactions. Since the graph is a graph of transactions, not blocks, double spends are simply excluded, so a total order is not needed. While you can get by with a total set, a total order enables you to do many things a total set does not let you do, such as publish two conflicting transactions and resolve them.
Total order can represent consensus decisions that total set cannot
easily represent, perhaps cannot represent at all. We need a
blockdag algorithm that gives us consensus on the total order of
blocks, not just the set of blocks.
In a total order, you do not just converge to the same set, you converge to the same order of the set. Having the same total order of the set makes it, among other things, a great deal easier and faster to check that you have the same set. Plus your set can contain double spends, which you are going to need if the clients themselves can commit transactions through the peers, if the clients themselves hold the secret keys and do not need to trust the peers.
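One illustration of why a shared order makes the check cheap: peers holding the same transactions in the same total order can compare a single running digest instead of exchanging the whole set. The digest construction below is only an example, not part of the proposal:

```python
import hashlib

def order_digest(transactions: list[bytes]) -> bytes:
    """If two peers have the same transactions in the same total order,
    this running hash matches, so comparing one short digest replaces
    comparing the whole set."""
    h = hashlib.sha256()
    for tx in transactions:
        h.update(hashlib.sha256(tx).digest())
    return h.digest()
```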
# Proposed blockdag implementation
The specific details of many of these proposed systems are rather silly and often vague, typical academic exercises unconcerned with real world issues, but the general idea that the academics intend to illustrate is sound and should work, and certainly can be made to work. They need to be understood as academic illustrations of the idea of the general algorithm for fast and massive blockdag consensus, and not necessarily intended as blueprints for a practical implementation.
I propose proof of stake. The stake of a peer is not the stake it owns, but
the stake that it has injected into the blockchain on behalf of its clients
and that its clients have not spent yet. Each peer pays on behalf of its
clients for the amount of space it takes up on the blockchain, though it does
not pay in each block. It makes an advance payment that will cover many
transactions in many blocks. The money disappears, built in deflation instead of built in inflation. Each block is a record of what a peer has injected.
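A rough sketch of that bookkeeping, with the field names and the exact fee mechanics as assumptions, since only the broad rule is laid out above:

```python
from dataclasses import dataclass

@dataclass
class PeerAccount:
    injected: int = 0          # money injected into the chain on behalf of clients
    spent_by_clients: int = 0  # how much of that the clients have since spent
    burned_for_space: int = 0  # advance payments for blockchain space, destroyed

    @property
    def stake(self) -> int:
        # The stake that counts towards rank: injected and not yet spent.
        return self.injected - self.spent_by_clients

    def prepay_space(self, amount: int) -> None:
        # The payment simply disappears from circulation: built in deflation.
        self.burned_for_space += amount
```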
The system does not pay the peers for generating a total order of
transactions. Clients pay peers for injecting transactions. We want the
power to be in the hands of people who own the money, thus governance will
have a built in bias towards appreciation and deflation, rather than
inflation.
The special sauce that makes each proposed blockdag different from each
of the others is how each peer decides what consensus is forming about
the leftmost edge of the dag, the graph analysis that each peer performs.
And this, my special sauce, I will explain when I have something running.
Each peer adopts as the leftmost child of its latest block a previous block that looks like a good candidate for consensus. It looks like a good candidate for consensus because its left child has a left child that looks like consensus actually is forming around that grandchild, in part because the left child has a … left child has a … left child that looks like it might have consensus, until eventually, as new blocks pile on top of old blocks, we actually do get consensus about the leftmost child sufficiently deep in the dag below the latest blocks.
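The recursive shape of that descent looks roughly like the sketch below; the scoring function is the special sauce left unexplained, so it appears only as a placeholder, and the lookahead depth is an arbitrary assumption:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    left_child: Optional["Block"]
    # ... other children, transactions, signatures ...

def looks_like_consensus(block: Block) -> float:
    # Placeholder for the graph analysis; not specified here.
    raise NotImplementedError

def choose_left_child(candidates: list[Block], depth: int = 8) -> Block:
    """Prefer the candidate whose chain of left children scores best,
    looking `depth` generations down."""
    def chain_score(block: Optional[Block], remaining: int) -> float:
        if block is None or remaining == 0:
            return 0.0
        return looks_like_consensus(block) + chain_score(block.left_child, remaining - 1)
    return max(candidates, key=lambda b: chain_score(b, depth))
```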
The blockdag can run fast because all the forks that are continually
forming eventually get stuffed into the consensus total order somewhere.
So we don’t have to impose a speed limit to prevent excessive forking.
# Cost of storage on the blockchain.
Tardigrade charges $120 per terabyte per year for storage, and $45 per terabyte of download.
We have a pile of theory, though no practical experience, suggesting that a blockdag can approach the physical limits, and that its limits are going to be bandwidth and storage.
Storage on the blockdag is going to cost more, because it is massively replicated, so say three hundred times as much, and it is going to be optimized for tiny fragments of data while Tardigrade is optimized for enormous blocks of data, so say three times as much on top of that; a thousand times as expensive to store should be in the right ballpark.
When you download, you are downloading from only a single peer on the blockdag, but you are downloading tiny fragments dispersed over a large pile of data, so again, a thousand times as expensive to download sounds like it might be in the right ballpark.
Then storing a chain of keys and the accompanying roots of total state, with one new key per day for ten years, will cost about two dollars over ten years.
Ten megabytes is a pretty big pile of human readable documentation. Suppose you want to store ten megabytes of human readable data, and read and write access costs a thousand times what Tardigrade charges: that will cost about twelve dollars.
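The arithmetic behind those two figures, with the thousandfold multiplier taken from above and the roughly five hundred bytes per key-plus-state-root record as an assumption:

```python
# Ballpark arithmetic for the two storage figures above.
TARDIGRADE_STORAGE = 120.0 / 1e12             # dollars per byte per year
BLOCKDAG_STORAGE = 1000 * TARDIGRADE_STORAGE  # the thousandfold markup

# Chain of keys and state roots: one ~500 byte record per day for ten years,
# each kept for the full ten years.
key_chain_bytes = 500 * 365 * 10
key_chain_cost = key_chain_bytes * BLOCKDAG_STORAGE * 10
print(f"key chain: ${key_chain_cost:.2f}")    # about two dollars

# Ten megabytes of documentation kept for ten years.
docs_cost = 10_000_000 * BLOCKDAG_STORAGE * 10
print(f"ten megabytes: ${docs_cost:.2f}")     # about twelve dollars
```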
So, we should consider the blockdag as an immutable store of arbitrary
typed data, a reliable broadcast channel, where some types are executable,
and, when executed, cause a change in mutable total state, typically that
a new unspent coin record is added, and an old unspent coin record is
deleted.
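A minimal sketch of one such executable type and the state change it causes; the record fields and the state representation are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpendRecord:
    """An executable record stored immutably on the blockdag."""
    old_coin: str   # id of the unspent coin record consumed
    new_coin: str   # id of the unspent coin record created

def execute(record: SpendRecord, unspent: set[str]) -> None:
    """Executing the immutable record mutates total state:
    one unspent coin record deleted, one added."""
    if record.old_coin not in unspent:
        raise ValueError("double spend or unknown coin")
    unspent.remove(record.old_coin)
    unspent.add(record.new_coin)
```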
In another use, a valid update to a chain of signatures should cause a
change in the signature associated with a name, the association being
mutable state controlled by immutable data. Thus we can implement
corporations on the blockdag by a chain of signatures, each of which
represents [an n of m multisig](./PracticalLargeScaleDistributedKeyGeneration.pdf "Practical Large Scale Distributed Key Generation").
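A sketch of that mutable association driven by immutable updates; signature and multisig verification are elided, and the record layout is an assumption:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KeyUpdate:
    """An immutable record appended to a name's signature chain."""
    name: str
    new_key: bytes
    signed_by_current_key: bytes  # signature by the key currently holding the name

def apply_update(names: dict[str, bytes], update: KeyUpdate) -> None:
    # A real implementation would verify signed_by_current_key against
    # names[update.name], possibly as an n of m multisig; omitted here.
    names[update.name] = update.new_key
```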