---
title: Immutable Append Only Data Structure
# katex
...
The primary chain is a chain of hashes of merkle links, and for each link we have a zeek proving that the entire merkle tree, including data that only one person has, and including long lost or discarded data that no one has any more, is valid. Each block in that chain includes a bunch of hashes of transactions, each of which is the most recent link in a child chain, so the primary chain has a bunch of other chains hanging off it. Each proof proves that the latest block is valid, and that the peer that produced this proof verified the previous proof.
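As a rough sketch of the shapes involved (field names here are illustrative, not an actual wire format):

```python
from dataclasses import dataclass, field

@dataclass
class PrimaryLink:
    """One link of the primary chain (illustrative field names only)."""
    prev_link_hash: bytes   # hash of the previous merkle link in the primary chain
    merkle_root: bytes      # root of the merkle tree this link commits to
    proof: bytes            # zeek: the whole tree is valid, and the producing peer
                            # verified the previous link's proof
    child_tips: list[bytes] = field(default_factory=list)
                            # hashes of the most recent links of the chains hanging off
```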
We want wallet chains and child chains hanging off the primary chain. A wallet chain needs no consensus, for the latest transaction is valid if the machine that knows the wallet secret signs it. Child chains operating by consensus will also hang off the primary chain when we implement corporations and traded shares on the blockchain. Each child chain can have its own rules defining the validity of the latest transaction in that chain. For a child chain operating by consensus, such as the market in a corporation's shares, the rule will be that a peer can incorporate into the parent chain the transaction representing a block ancestral to his current block once that block has enough weight piled on top of it by subsequent blocks, and that every peer in the child chain, when it sees a fork in the child chain whose branches are incorporated by different branches of a fork in the parent chain, has to prefer the branch of the child fork that is incorporated by the branch of the parent fork with the greatest weight.
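A minimal sketch of those two rules, with hypothetical helpers (`weight_on_top`, `incorporating_parent_weight`) standing in for whatever weight measure is actually used:

```python
def may_incorporate(child_block, weight_on_top, threshold) -> bool:
    """A peer may incorporate a child-chain block into the parent chain once
    enough weight has been piled on top of it by subsequent child blocks."""
    return weight_on_top(child_block) >= threshold

def preferred_child_tip(fork_tips, incorporating_parent_weight):
    """Given the tips of a child-chain fork whose branches are incorporated by
    different branches of a parent-chain fork, prefer the tip incorporated by
    the parent branch of greatest weight."""
    return max(fork_tips, key=incorporating_parent_weight)
```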
Each wallet will be a merkle chain that links to the consensus chain and is linked by the consensus chain. It consists of a linear chain of transactions, each with a sequence number, and each committing a non dust amount to a transaction output owned by the wallet, which only owns one such output, because all outputs from other people have to be committed into the destination wallet within a reasonable time.
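Sketching a wallet chain entry under those assumptions (again with illustrative field names):

```python
from dataclasses import dataclass

@dataclass
class WalletLink:
    """One transaction in a wallet chain (illustrative field names only)."""
    sequence: int             # strictly increasing within this wallet's chain
    prev_link_hash: bytes     # previous transaction in the wallet chain
    output_commitment: bytes  # the single non dust output the wallet owns
    signature: bytes          # signed by the machine that knows the wallet secret
```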
A wallet chain has a single authority, but we could also have chains hanging off it that represent the consensus of smaller groups, for example the shareholders of a company, with their shares in a merkle child chain and their own reliable broadcast channel.
The reliable broadcast channel has to keep the complete set of hashes of the most recent transaction output of each wallet that commits more than a dust amount of money to the continuing wallet state, and has to keep the path to those states around forever, but with the merkle chain structure envisaged, the path to those states only grows logarithmically.
I discarded the data on infix and postfix positioning of parents, and the tree depicted does not allow us to reach an arbitrary item from a leaf node, but the final additional merkle vertex produced along with each leaf node does reach every item, at the cost of including a third hash for backtracking in some of the merkle vertexes.
I think I should redraw this, drawing the different kinds of vertexes differently: the binary vertexes with a two fanged arrow head, the trinary vertexes with a three fanged arrow head, and the blocks as blocks.
The blocks one color, the intermediate vertexes another, and the final vertex defining a block a third color and larger.
You really need a physical implementation that is a physically append only file of irregular sized merkle vertexes, plus a rule for finding the physical location of the item hashed, plus a collection of quick lookup files that contain only the upper vertexes. On the other hand, sqlite is already written, so one might well want to think of implementing this as a table with an ever incrementing oid, which we can get around to making into a physically append only file another day. Stashing hashes of variable heights into a single table with an ever incrementing oid is the same problem as the physical file, except that sqlite allows variable length records (and does that housekeeping for us).
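A sketch of the sqlite variant, assuming a single table keyed by the ever incrementing oid (the schema is illustrative, not a committed design):

```python
import sqlite3

con = sqlite3.connect("merkle_chain.sqlite")
con.executescript("""
CREATE TABLE IF NOT EXISTS vertex(
    oid    INTEGER PRIMARY KEY,   -- ever incrementing, append only by convention
    height INTEGER NOT NULL,      -- 0 for leaf vertexes
    body   BLOB    NOT NULL       -- two or three hashes; sqlite handles the variable length
);
""")

def append_vertex(height: int, body: bytes) -> int:
    """Append-only insert; sqlite supplies the next oid."""
    cur = con.execute("INSERT INTO vertex(height, body) VALUES (?, ?)", (height, body))
    con.commit()
    return cur.lastrowid
```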
The data defined by hash is immutable but discardable.
One appends data by endlessly adding a new merkle vertex, which contains the hash of the previous vertex.
Each merkle vertex is associated with a block number, a height, an infix position, and a postfix position. The infix position is equal to the block number with the low height bits zeroed and a one bit appended; the postfix position is twice the block number minus the number of one bits in the infix position.
Some of the vertexes are two hash groups, and some of them are three hash groups. The number of additional hashes, which is the number of three hash groups, is equal to the block number, so the position in a physical flat file is equal to the postfix position times the size of a two hash group, plus the block number times the size of the additional hash, plus a fixed size region for the common essential fixed sized data of a variable length block. The physical file is a collection of physical files where the name of each file represents its offset in the conceptual very large ever growing index file of fixed sized records, and its offset into the conceptual ever growing file of variable length records, so that simple concatenation of the files generates a valid file.
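Taking those formulas literally, a sketch; the byte sizes, the exact placement of the appended one bit, and how the fixed size region enters the sum are assumptions here:

```python
def popcount(n: int) -> int:
    return bin(n).count("1")

def infix_position(block: int, height: int) -> int:
    """Block number with the low `height` bits zeroed and a one bit appended
    just below the surviving high bits (the bit placement is an assumed reading)."""
    return ((block >> height) << (height + 1)) | (1 << height)

def postfix_position(block: int, height: int) -> int:
    """Twice the block number minus the number of one bits in the infix position."""
    return 2 * block - popcount(infix_position(block, height))

HASH_SIZE = 32                    # assumed hash size in bytes
TWO_HASH_GROUP = 2 * HASH_SIZE    # assumed size of a two hash group
FIXED_REGION = 16                 # assumed size of the fixed sized common data

def flat_file_offset(block: int, height: int) -> int:
    """Postfix position times the size of a two hash group, plus the block number
    times the size of the additional hash, plus the fixed size region."""
    return (postfix_position(block, height) * TWO_HASH_GROUP
            + block * HASH_SIZE
            + FIXED_REGION)
```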
From time to time we aggregate them into a smaller number of larger files, with a binary or roughly fibonacci size distribution. We do not want aggregation to happen too often with files that are too big, as this screws up standard backup systems, which treat them as fresh files; but on the other hand we do not want an outrageously large number of very tiny files, as standard backup of very large numbers of very tiny files is also apt to be costly, and mapping an offset into a very large number of very tiny files is also costly. So a background process aggregates the files, then checks them, then after the check fixes the mapping to map into the new bigger files, then deletes the now unreferenced small files. The rule that each file has to be at least half of the remaining data gives us the binary distribution, while a gentler rule, say at least a quarter, gives us a larger number of files but less churn.
Every time a fresh tiny file is created, a background process is started to check the files, starting with the tiniest, to see if it is time for aggregation. It terminates when the largest aggregation is done.
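One way of deciding when aggregation is due, taking the rule literally (a sketch only, and one that can merge more than strictly necessary):

```python
def files_to_merge(sizes: list[int], fraction: float = 0.5) -> list[int]:
    """sizes: current segment sizes, largest first. Returns the tail of small
    files to concatenate into one new file so that every surviving file is at
    least `fraction` of itself plus everything smaller."""
    remaining = sum(sizes)
    for i, size in enumerate(sizes):
        if size < fraction * remaining:
            return sizes[i:]     # merge this file and every smaller one
        remaining -= size
    return []                    # the rule already holds; no aggregation is due
```

With a fraction of one half this tends toward the binary distribution; one quarter gives more files but less churn.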
So that files sort in the correct order, we name them in base 36, with a suffix indicating whether they are the fixed length index records, or the variable sized records that the fixed length records point into.
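Base 36 names only sort correctly as strings if they are zero padded to a fixed width, so the naming might go something like this (the width and the suffixes are assumptions):

```python
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def base36(n: int, width: int = 12) -> str:
    """Base 36, zero padded so that lexicographic order matches numeric order."""
    s = ""
    while n:
        n, r = divmod(n, 36)
        s = DIGITS[r] + s
    return (s or "0").rjust(width, "0")

# e.g. the segment of fixed length index records starting at record offset 1000000
# might be named base36(1000000) + ".idx", and its companion segment of variable
# length records base36(byte_offset) + ".dat" (suffixes and width are illustrative).
```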
Although we should make the files friendly for conventional backup for the sake of cold storage, we cannot rely on conventional backup mechanisms, because we always have to have the very latest state securely backed up before committing it into the blockchain, which is best done by having a set of remote Sqlite files.