forked from cheng/wallet
wallet/docs/manifesto/scalability.md
reaction.la b4e3409fea
2024-07-28 19:12:36 +08:00


---
# katex
title: >-
    Scalable and private blockchain
sidebar: true
notmine: false
abstract: >-
    Bitcoin does not scale to the required size. The Bitcoin total order
    broadcast channel is a massively replicated public ledger of every
    transaction that ever there was, each of which has to be evaluated for
    correctness by every full peer. With recursive snarks, we can now instead
    have a massively replicated public SQL index of private ledgers. Such a
    blockchain with as many transactions as bitcoin, will, after running for
    as long as Bitcoin, only occupy a few dozen megabytes of disk storage,
    rather than near a terabyte, and each peer and client wallet only has to
    evaluate the root recursive snark to prove the validity of every
    transaction that ever there was, including all those lost in the mists
    of time.
misc_links: >-
    How to make Bitcoin scalable and private
    How to make lightning usable by normies
---

# Scaling, privacy, and recursive snarks

Bitcoin does not scale because it is a massively replicated public ledger. Thus any real solution means making the ledger not massively replicated. Which means either centralization, a central bank digital currency, which is the path Ethereum is walking, or privacy.

You cure both blockchain bloat and blockchain analysis by not putting the data on the total order broadcast channel in the first place, rather than doing what Monero does, putting it on the blockchain in cleverly encrypted form, bloating the blockchain with chaff intended to obfuscate against blockchain analysis.

## Pre-requisites

This explanation is going to require you to know what a graph, vertex, edge, root, and leaf is, what a directed acyclic graph (dag) is, what a hash is, what a blockchain is, and how hashes make blockchains possible. And what an SQL index is and what it does, and what a primary SQL index is and what it does. You need to know what a transaction output is in the context of blockchains, and what an unspent transaction output (utxo) is. Other terms will be briefly and cryptically explained as necessary.

## Some brief and cryptic explanations of the technology

I have for some time remarked that recursive snarks make a fully private, fully scalable, currency, possible. But it seems this was not obvious to everyone, and I see recursive snarks being applied in complicated convoluted stupid ways that fail to utilize their enormous potential. This is in part malicious, the enemy pouring mud into the tech waters. So I need to explain.

### recursive snarks, zk-snarks, and zk-starks

A zk-snark or a zk-stark proves that someone knows something, knows a pile of data that has certain properties, without revealing that pile of data.

The prover produces a proof that for a given computation he knows an input such that after a correct execution of the computation he obtains a certain public output - the public output typically being a hash of a transaction, and certain facts about the transaction. The verifier can verify this without knowing the transaction, and the verification takes roughly constant time even if the prover is proving something about an enormous computation, an enormous number of transactions.

To use a transaction output as the input to another transaction we need a proof that this output was committed on the public broadcast channel of the blockchain to this transaction and no other, and a proof that this output was itself an output from a transaction whose inputs were committed to that transaction and no other, and that the inputs and outputs of that transaction balanced.

So the proof has to recursively prove that all the transactions that are ancestors of this transaction output were valid all the way back to the beginning of the blockchain.

You can prove an arbitrarily large amount of data with an approximately constant sized recursive snark. So you can verify in a quite short time that someone proved something enormous (proved something for every transaction in the blockchain) with a quite small amount of data.

A recursive snark is a zk-snark that proves that the person who created it has verified a zk-stark that proves that someone has verified a zk-snark that proves that someone has verified …

So every time you perform a transaction, you don't have to prove all the previous transactions and generate a zk-snark verifying that you proved them. You only have to prove that you verified the recursive snark that proved the validity of the input transaction outputs that you are spending. Which you do by proving that the inputs are part of the merkle tree of unspent transaction outputs, of which the current root of the blockchain is the root hash.
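That last step, proving that an output is part of the merkle tree of unspent transaction outputs given only the root hash, is an ordinary merkle audit path. A minimal sketch in Python; the hashing, tree layout, and leaf contents here are invented for illustration, and in the real design the snark proves this walk without revealing which leaf is involved:

```python
# Toy merkle membership check (invented layout; the real design proves
# this walk inside a recursive snark without revealing which leaf).
from hashlib import sha256

def h(*parts: bytes) -> bytes:
    return sha256(b"".join(parts)).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Root hash of a binary merkle tree over the given leaf hashes."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                         # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves: list[bytes], index: int) -> list[tuple[int, bytes]]:
    """Audit path: (am-I-the-right-child, sibling hash) pairs up to the root."""
    level, path = list(leaves), []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((index % 2, level[index ^ 1]))
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_path(leaf: bytes, path: list[tuple[int, bytes]], root: bytes) -> bool:
    acc = leaf
    for is_right, sibling in path:
        acc = h(sibling, acc) if is_right else h(acc, sibling)
    return acc == root

utxos = [h(b"output %d" % i) for i in range(8)]    # stand-ins for output hashes
root = merkle_root(utxos)
proof = merkle_path(utxos, 5)
assert verify_path(utxos[5], proof, root)          # my output is in the set
assert not verify_path(h(b"forged"), proof, root)  # a forged output is not
```

The proof is logarithmic in the number of unspent outputs, which is what makes it cheap to fold into a recursive snark.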

### structs

A struct is simply some binary data laid out in well known and agreed format. Almost the same thing as an SQL row, except that an SQL row does not have a well known and agreed binary format, so does not have a well defined hash, and a struct is not necessarily part of an SQL table, though obviously you can put a bunch of structs of the same type in an SQL table, and represent an SQL table as a bunch of structs, plus at least one primary index. An SQL table is equivalent to a pile of structs, plus one primary index of those structs.
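A struct's agreed binary layout is exactly what gives it a well defined hash, which an SQL row lacks. A toy sketch; the field layout here (a 64 bit amount plus a 32 byte owner commitment) is invented for the example:

```python
# A struct's agreed binary layout gives it a well defined hash; the field
# layout here (a 64 bit amount plus a 32 byte owner commitment) is invented.
import struct
from hashlib import sha256

LAYOUT = "<Q32s"            # little endian u64, then 32 raw bytes

def pack_output(amount: int, owner: bytes) -> bytes:
    return struct.pack(LAYOUT, amount, owner)

def unpack_output(blob: bytes) -> tuple[int, bytes]:
    amount, owner = struct.unpack(LAYOUT, blob)
    return amount, owner

blob = pack_output(1000, bytes(32))
assert struct.calcsize(LAYOUT) == 40               # layout is fixed and agreed
assert unpack_output(blob) == (1000, bytes(32))    # round trips exactly
# identical structs yield identical bytes, hence identical hashes everywhere
assert sha256(blob).digest() == sha256(pack_output(1000, bytes(32))).digest()
```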

### merkle graphs and merkle trees

A merkle graph is a directed acyclic graph whose vertices are structs containing hashes. It corresponds to a key value store whose keys are random 256 bit numbers.

A merkle vertex is a struct containing hashes. The hashes, merkle edges, are the edges of the graph. So using recursive snarks over a merkle graph, each vertex has a proof that its data was valid, given that the vertices that its edges point to were valid, and that the peer that created the recursive snark of that vertex verified the recursive snarks of the vertices that the outgoing edges (hashes) of this vertex point to.
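The correspondence between a merkle graph and a key value store keyed by hashes can be sketched in a few lines; the vertex contents here are invented:

```python
# Toy content addressed store: a merkle dag lives naturally in a key value
# store whose keys are the 256 bit hashes of its vertices (contents invented).
from hashlib import sha256

store: dict[bytes, bytes] = {}

def put(vertex: bytes) -> bytes:
    """Store a vertex under its own hash and return that hash (the key)."""
    key = sha256(vertex).digest()
    store[key] = vertex
    return key

a = put(b"leaf vertex A")
b = put(b"leaf vertex B")
parent = put(a + b)          # an interior vertex is just its outgoing edge hashes

edges = store[parent]        # following an edge is a key lookup
assert store[edges[:32]] == b"leaf vertex A"
assert store[edges[32:]] == b"leaf vertex B"
```

Tampering with any vertex changes its key, and hence the struct of every vertex with an edge pointing to it, all the way to the root.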

So, you have a merkle chain of blocks, each block containing a merkle patricia tree of merkle dags. You have a recursive snark that proves the chain, and everything in it, is valid (no one created tokens out of thin air, each transaction merely moved the ownership of tokens). And then you prove that the new block is valid, given that the rest of the chain was valid, and produce a recursive snark that the new block, which chains to the previous block, is valid.

### total order broadcast channel

If you publish information on a total order broadcast channel, everyone who looks at the channel is guaranteed to see it and to see the same thing, and if someone did not get the information that you were supposed to send over the channel, it is his fault, not yours. You can prove you performed the protocol correctly.

A blockchain is a merkle chain and a total order broadcast channel. In Bitcoin, the total order broadcast channel contains the entire merkle chain, which obviously does not scale, and suffers from a massive lack of privacy, so we have to introduce the obscure cryptographic terminology "total order broadcast channel" to draw a distinction that does not exist in Bitcoin. In Bitcoin the merkle vertices are very large, each block is a single huge merkle vertex, and each block lives forever on an ever growing public broadcast channel. It is impractical to produce a recursive snark over such huge vertices, and attempting to do so results in centralization, with the recursive snarks being created in a few huge data centers, which is what is happening with Ethereum's use of recursive snarks. So we need to structure the data as large dag of small merkle vertices, with all the paths through the dag for which we need to generate proofs being logarithmic in the size of the contents of the total order broadcast channel and the height of the blockchain.

### scaling the total order broadcast channel to billions of peers, exabytes of data, and terabytes per second of bandwidth

At scale, which is not going to happen for a long time, the total order broadcast channel will work much like bittorrent. There are millions of torrents in bittorrent, each torrent is shared between many bittorrent peers, each bittorrent peer shares many different torrents, but it only shares a handful of all of the millions of torrents.

Each shard of the entire enormous total order broadcast channel will be something like one torrent of many in bittorrent, except that torrents in bittorrent are immutable, while shards will continually agree on a new state, which is a valid state given the previous state of that shard and all the other shards. The peers sharing a shard will generate a recursive proof that the new state is valid, in that for each transaction causing a change in state, the peer that wants that transaction generates a recursive proof of knowledge of valid transactions going all the way back to the genesis block, which justifies that change of state as valid. All these proofs are recursively incorporated into a new proof that the shard is valid, which eventually gets incorporated into a single proof over all shards, which peers sharing other shards will use when they in turn generate new state in those other shards.

If Bob wants to pay Carol on layer one, he constructs a proof that the transaction is valid, the transaction gets incorporated into the state of a shard, or several shards that he is sharing, the proof gets incorporated into the proof that the shard is valid, he sends that proof and transaction to Carol, who can subsequently use that proof, that transaction, and that transaction output (a transaction output that can only be used to generate a transaction in a shard that Carol is sharing, but Bob is not necessarily sharing). Which transaction that Carol later generates will likely make payment to Dave in yet another shard that neither she nor Bob is sharing.

When Bob generates a transaction, the inputs must belong to some of the few shards that he is sharing, but the outputs can belong to any shard.

He generates the proof of knowledge using only data available in the shards he is sharing, but the proof, being recursive, shows that someone in some shard knew each transaction leading to this one was valid, including zettabytes of transactions now long lost in the mists of time, all the way back to the genesis block.

It will be quite a while before we need more than one shard, but with recursive proofs of knowledge, we can easily have an enormous number of mutable shards, as bittorrent has an enormous number of immutable torrents.

Each shard will not have to store a very large amount of data. Sharding is likely to be forced by transaction volume and bandwidth limits, by data processing rather than data storage. Unless someone is using the blockchain as a publishing mechanism, and wants to keep a vast pile of data around and available, which is a perfectly valid and acceptable use, indeed an intended use since we intend to use the blockchain as a free speech mechanism, though when there is a lot of very old speech it will probably only be accepted in some shards and not others.

Some shards will primarily be blogs, or communities of many blogs, except that comments in those blogs can pay money or receive money, as in Nostr. At first the only use of shards will be to publish the equivalent of blogs or social media feeds.

## Outline of a scalable, shardable algorithm

To create a transaction in a blockchain based on recursive snarks, the peer injecting the transaction creates a patricia merkle tree of inputs and outputs, with a proof that inputs equal the outputs, and the outputs were valid as of block N.

This gets merged with the Patricia merkle tree of another transaction, and another, and another, generating one big transaction of a whole pile of transactions, which giant merged transaction becomes block N+1 of the blockchain.

To prevent double spending, each input is associated with a hash of itself and its original transaction, so that different inputs of the same transaction are associated with different hashes, and the same input to two different transactions (a double spend) is associated with different hashes.

When merging transactions, each input can only be associated with one such hash. If there are some inputs around that are associated with more than one such hash, they get blacklisted and no one merges with a tree containing them, so that that proposal for block N+1 is ignored and forgotten.

Before we generate the merkle patricia tree of one giant merged transaction, we generate a total order over all transaction inputs, and a patricia merkle tree of that order, which ensures that the hash of each input is associated with only one hash of input plus transaction, so that we can spot double spends before doing too much work merging. Collisions get added to a double spend blacklist, and do not get merged.
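A toy version of this double spend check; the names and data are invented, and the real thing operates on patricia merkle trees under a snark rather than a Python dictionary:

```python
# Toy version of the double spend check: each input is paired with the hash
# of (input, transaction); one input mapping to two different such hashes is
# a double spend and gets blacklisted before merging. Names are invented.
from hashlib import sha256

def commit_hash(input_id: bytes, tx_id: bytes) -> bytes:
    return sha256(input_id + tx_id).digest()

def merge(commitments: list[tuple[bytes, bytes]]):
    """Return (merged commits keyed by input, blacklist of double spent inputs)."""
    seen: dict[bytes, bytes] = {}
    blacklist: set[bytes] = set()
    for input_id, tx_id in commitments:
        ch = commit_hash(input_id, tx_id)
        if input_id in seen and seen[input_id] != ch:
            blacklist.add(input_id)        # same input committed to two transactions
        else:
            seen[input_id] = ch
    for bad in blacklist:
        del seen[bad]                      # double spent inputs do not get merged
    return seen, blacklist

merged, bad = merge([
    (b"coin1", b"txA"),
    (b"coin2", b"txA"),
    (b"coin1", b"txB"),                    # coin1 spent twice: blacklisted
])
assert bad == {b"coin1"}
assert set(merged) == {b"coin2"}
```

Note that the same input committed twice to the same transaction is harmless, since both commits yield the same hash; only an input paired with two different hashes is a double spend.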

If there are several incompatible proposed versions of block N+1, Nakamoto consensus wins: the true block N+1 of several alternatives is whichever alternative block N+2 gets built on.

For sharding to work, it must be possible to merge in parallel, generate a proof for a shard of the tree that only includes the preimage of the hash for a certain address range of inputs and outputs, while only having the hashes for outside of that address range.

For greater privacy, one can merge the inputs first, and add the outputs later, but then one has to handle the case that the inputs get merged, the outputs fail to get merged, and one has to add them in a later block, thus the simplest possible algorithm leaks some data associating inputs and outputs to some people, while the most private, and most efficient, algorithm has to handle more cases.

The more private algorithm is more efficient, because the consensus process has less to consense about. It is less efficient for the peer injecting his transaction, but more efficient for the peers trying to reach agreement among all peers on the total state of the blockchain, which is apt to be the slow and expensive part.

The most efficient way to get fast consensus is to first generate a total order over transaction inputs, blacklisting double spends in the process, which gives the recipient fast confidence that his money has come through; then generate a proof of validity of the transactions referenced; then introduce the outputs and the proof of the outputs, which the recipient does not need in a hurry. He needs that proof so that he can spend the output, but does not need it to know that he will be able to spend the output, so will not mind if it takes a while. But this faster and more efficient process requires some additional complexity to make sure that transaction outputs do not get lost. It is considerably simpler, though slower, less efficient, and less private, to keep the transactions and outputs together. But keeping them together means that the recipient does not know he has received the money until he can actually spend the money, which is going to take longer.

Such sharding requires the prover to have a proof of the properties of the preimage of a hash, without needing to keep the preimage around outside of the address range he is working on.

Which is to say, the recursive snark algorithm has to be able to efficiently handle hashes inside the proof, which is a hard problem at which most recursive snark algorithms fail. Plonky2 is acceptably efficient, but has other problems, in that though it is theoretically open source, last time I checked there were caveats both practical and legal.

### merkle patricia tree

A merkle patricia tree is a representation of an SQL index as a merkle tree. Each edge of a vertex is associated with a short bitstring, and as you go down the tree from the root (tree graphs have their root at the top and their leaves at the bottom, just to confuse the normies) you append that bitstring, and when you reach the edge (hash) that points to a leaf, you have a bitstring that corresponds to the path you took through the merkle tree, and to the leading bits of the bitstring that make that key unique in the index. Thus the SQL operation of looking up a key in an index corresponds to a walk through the merkle patricia tree guided by the key. So we can generate a recursive snark that proves you found something in an index of the blockchain, or proves that something, such as a previous spend of the output you want to spend, does not exist in that index.

This is equivalent to implementing an SQL style index inside a key value store database whose keys are random 256 bit integers, which people tend to wind up doing when working with no-sql key value store databases: you tend to wind up doing your own custom implementation of SQL inside the no-sql. We will have to ensure that a transaction output can only be committed to a transaction once, equivalent to a UNIQUE SQL index, so we will need an SQL like index of transaction outputs.
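The bit-guided walk can be illustrated with a toy binary merkle trie; real patricia trees compress runs of identical bits, which this sketch omits, and the keys and values here are invented:

```python
# Toy binary merkle trie: a lookup is a walk guided by the key's bits, and
# the hashes along the walk double as a (non)membership proof. Real patricia
# trees compress runs of bits, which this sketch omits. Keys are invented.
from hashlib import sha256

def h(b: bytes) -> bytes:
    return sha256(b).digest()

EMPTY = h(b"empty subtree")
NBITS = 4                                  # toy key length in bits

def build(entries: dict, depth: int = 0):
    """entries: bit-string key (NBITS long) -> value. Returns hashed trie nodes."""
    if not entries:
        return None
    if depth == NBITS:
        (_, value), = entries.items()
        return ("leaf", h(value), value)
    left = build({k: v for k, v in entries.items() if k[depth] == "0"}, depth + 1)
    right = build({k: v for k, v in entries.items() if k[depth] == "1"}, depth + 1)
    lh = left[1] if left else EMPTY
    rh = right[1] if right else EMPTY
    return ("node", h(lh + rh), left, right)

def lookup(node, key: str, depth: int = 0):
    """Walk down the trie guided by the bits of `key`."""
    if node is None:
        return None                        # hit an empty subtree: provable absence
    if node[0] == "leaf":
        return node[2]
    child = node[2] if key[depth] == "0" else node[3]
    return lookup(child, key, depth + 1)

trie = build({"0010": b"utxo A", "0111": b"utxo B", "1100": b"utxo C"})
assert lookup(trie, "0111") == b"utxo B"   # found: key is in the index
assert lookup(trie, "0110") is None        # absent: no such registered output
```

The absence case is the important one: a walk that dead-ends in an empty subtree, with each node hash checked against its parent, is a proof that no previous spend of an output exists in the index.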

A hash value store is a no-SQL key value store, which has no concept of a UNIQUE index.

### Blockchain

Each block in the chain is a set of SQL tables, represented as merkle dags.

So a merkle patricia tree and the structs that its leaf edges point to is an SQL table that you can generate recursive snarks for, which can prove things about transactions in that table. We are unlikely to be programming the blockchain in SQL, but to render what one is doing intelligible, it is useful to think and design in SQL.

So with recursive snarks you can prove that your transaction is valid because certain unspent transaction outputs were in the SQL index of unspent transaction outputs, and were recently spent in the index of commitments to transactions, without revealing which outputs those were, or what was in your transaction.

It is a widely shared public index. But what it is an index of is private information about the transactions and outputs of those transactions, information known only to the parties of those transactions. It is not a public ledger. It is a widely shared public SQL index of private ledgers. And because it is a merkle tree, it is possible to produce a single reasonably short recursive snark for the current root of that tree that proves that every transaction in all those private ledgers was a valid transaction and every unspent transaction output is as yet unspent.

### performing a transaction

Oops, what I just described is a whole sequence of complete immutable SQL indexes, each new block a new complete index. But that would waste a whole lot of bandwidth. What you want is that each new block is only an index of new unspent transaction outputs, and of newly spent transaction outputs, which spending events will give rise to new unspent transaction outputs in later blocks, and that this enormous pile of small immutable indexes gets summarized as a single mutable index, which gets complicated. I will explain later how we purge the hashes of used outputs from the public broadcast channel, winding up with a public broadcast channel that represents a mutable index of an immutable history, with quite a lot of additional housekeeping data that tells us how to derive the mutable index from this pile of immutable indices, and tells us what parts of the immutable history only the parties to the transaction need to keep around any more, and what can be dumped from the public broadcast channel. Anything you no longer need to derive the mutable index, you can dump.

The parties to a transaction, typically two humans and two wallets, each wallet the client of a peer on the blockchain, agree on a transaction.

Those of them that control the inputs to the transaction (typically one human with one wallet which is a client of one peer) commit unspent transaction outputs to that transaction, making them spent transaction outputs, but do not reveal that transaction, or that the outputs are spent to the same transaction, though the peer can probably guess quite accurately that they are. The client creates a proof that this is an output from a transaction with valid inputs, and his peer creates a proof that the peer verified the client's proof and that the output being committed was not already committed to another different transaction, and registers the commitment on the blockchain. The output is now valid for that transaction, and not for any other, without the total order broadcast channel containing any information about the transaction of which it is an output, nor the transaction of which it will become an input.

In the next block that is a descendant of that block the parties to the transaction prove that the new transaction outputs are valid, and being new are unspent transaction outputs, without revealing the transaction outputs, nor the transaction, nor the inputs to that transaction.

You have to register the unspent transaction outputs on the public index, the total order broadcast channel, within some reasonable time, say perhaps below block height $(\lfloor h/32\rfloor + 2)\times 32$, where h is the block height on which the first commit of an output to the transaction was registered. If not all the inputs to the transaction were registered, then obviously no one can produce a proof of validity for any of the outputs. After that block height you cannot register any further outputs, but if you prove that after that block height no output of the transaction was registered, you can create a new unspent transaction output for each transaction input to the failed transaction, which effectively rolls back the failed transaction. This time limit enables us to recover from failed transactions, and, perhaps more importantly, enables us to clean up the mutable SQL index that the immense chain of immutable SQL indexes represents, and that the public broadcast channel contains. We eventually drop outputs that have been committed to a particular transaction, and can then eventually drop the commits of that output without risking orphaning valid outputs that have not yet been registered in the public broadcast channel.
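The deadline arithmetic, assuming the 32 block window suggested above: a transaction whose first commit registered at height h gets at least 33 and at most 64 further blocks to register its outputs.

```python
# Registration deadline from the text: outputs of a transaction whose
# first commit was registered at height h must appear below this height.
def output_deadline(h: int, window: int = 32) -> int:
    return (h // window + 2) * window

assert output_deadline(0) == 64
assert output_deadline(32) == 96
assert output_deadline(100) == 160   # 100 // 32 == 3, (3 + 2) * 32 == 160
# every transaction gets at least one full window of slack, at most two
assert all(33 <= output_deadline(h) - h <= 64 for h in range(1000))
```

Rounding the deadline to a window boundary, rather than using h + 64 directly, means whole windows of commits expire together, which is what makes the later summarizing of old blocks clean.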

### summarizing away useless old data

So that the public broadcast channel can eventually dump old blocks, and thus old spend events, every time we produce a new base level block containing new events (an SQL index of new transaction outputs, and an SQL index, with the same primary key, of spend commitments of past unspent transaction outputs to transactions), we also produce a consolidation block, a summary block that condenses two past blocks into one, thus enabling the two past blocks that it summarizes to be dropped.

Immediately before forming a block of height 2n+1, which is a block height whose binary representation ends in a one, we use the information in base level blocks 2n-3, 2n-2, 2n-1, and 2n to produce a level one summary block that allows base level blocks 2n-3 and 2n-2, the two oldest remaining base level blocks, to be dropped. When we form the block of height 2n+1, it will have an edge to the block of height 2n, forming a chain, and an edge to the summary block summarizing blocks 2n-3 and 2n-2, forming a tree.

At every block height of 4n+2, which is a block height whose binary representation ends in a one followed by a zero, we use the information in the level one summary blocks for heights 4n-5, 4n-3, 4n-1, and 4n+1 to produce a level two summary block that allows the level one summary blocks for 4n-5 and 4n-3, the two oldest remaining level one summary blocks, to be dropped. The base level blocks are level zero.

At every block height of 8n+4, which is a block height whose binary representation ends in a one followed by two zeroes, we use the information in the level two summary blocks for heights 8n-10, 8n-6, 8n-2, and 8n+2 to produce a level three summary block that allows the level two summary blocks for 8n-10 and 8n-6, the two oldest remaining level two summary blocks, to be dropped.

And similarly, at every block height of $2^{m+1}n + 2^m$, which is a block height whose binary representation ends in a one followed by m zeroes, we use the information in the four most recent level $m$ summary blocks, the blocks at heights $2^{m+1}n + 2^{m-1} - 3\cdot 2^m$, $2^{m+1}n + 2^{m-1} - 2\cdot 2^m$, $2^{m+1}n + 2^{m-1} - 2^m$, and $2^{m+1}n + 2^{m-1}$, to produce a level $m+1$ summary block that allows the two oldest remaining level $m$ summary blocks, the blocks at heights $2^{m+1}n + 2^{m-1} - 3\cdot 2^m$ and $2^{m+1}n + 2^{m-1} - 2\cdot 2^m$, to be dropped.
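The schedule above can be simulated to see that the retained blocks grow only logarithmically with height. This sketch glosses over start-of-chain edge cases (it simply drops the two oldest blocks at a level whenever four have accumulated there, at the heights the text specifies):

```python
# Simulate the consolidation schedule: just before the base level block at
# height b, where b ends in binary in a one followed by t zeroes, four
# level t blocks (if present) are summarised into one level t+1 block,
# and the two oldest level t blocks are dropped.
def simulate(height: int) -> dict[int, int]:
    """Return the number of retained blocks per level after `height` blocks."""
    levels: dict[int, list[int]] = {}
    for b in range(1, height + 1):
        t = (b & -b).bit_length() - 1          # trailing zeroes of b
        lower = levels.get(t, [])
        if len(lower) >= 4:
            del lower[:2]                      # drop the two oldest level t blocks
            levels.setdefault(t + 1, []).append(b)
        levels.setdefault(0, []).append(b)     # then append the base level block
    return {lvl: len(blocks) for lvl, blocks in levels.items()}

counts = simulate(100_000)
total = sum(counts.values())
assert total < 120                # dozens of retained blocks, not a hundred thousand
assert max(counts) >= 13          # roughly log2(height) summary levels
```

At a height of a hundred thousand the simulation retains a few blocks per level across roughly seventeen levels, on the order of fifty blocks in all.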

It is not sufficient to merely forget about old data. We need to regenerate new blocks because the patricia merkle tree presented by the public broadcast channel has to prove that outputs that once were registered as unspent, and then registered to a commit, or sequence of commits, are no longer registered at all.

We summarise the data in the earliest two blocks by discarding every transaction output that was, at the time those blocks were created, an unspent transaction output, but is now marked as used in any of the four blocks by committing it to a particular transaction. We discard commits which refer to outputs that have now been discarded by previous summary blocks and have timed out, which is to say, commits in a level m summary block being summarised into a level m+1 summary block that reference outputs in the immediately previous level m+1 summary block. However, if a commit references an output that is now in a summary block of level greater than m+1, that commit has to be kept around to prevent double spending of the previous output, which has not yet been summarised away.

We produce the summary block of past blocks just before we produce the base level block, and the base level block has two edges: a chain edge pointing to the previous base level block, and a tree edge pointing to the just created summary block. And when we summarize two blocks into a higher level summary block, their chain and tree edges are discarded, because they point to data that the total order broadcast channel will no longer carry, and the newly created summary block gets a chain edge pointing to the previous summary block at the same level, and a tree edge pointing to the previous higher level summary block.

We have to keep the tree around, because in order to register a commit for an output in the blockchain, we have to prove that there is no previous commit for that output in any of the previous blocks in the tree, back to the block or summary block in which the output is registered. Only the client wallets of the parties to the transaction can produce a proof that a commit is valid if there is no previous commit, but only a peer can prove that there is no previous commit.

So the peer, who may not necessarily be controlled by the same person as controls the wallet, will need to know the hashes of the inputs to the transaction, and could sell that information to interested parties, who may not necessarily like the owner of the client wallet very much. But the peer will not know the preimage of the hash, will not know the value of the transaction inputs, nor what the transaction is about. It will only know the hashes of the inputs, and does not even need to know the hashes of the outputs, though if the client wallet uses the same peer to register the change output, the peer will probably be able to reliably guess that that output hash comes from that transaction, and therefore from those inputs. If Bob is paying Ann, neither Bob's peer nor Ann's peer knows that Bob is paying Ann. If Bob is paying Ann, and gets a proof his transaction is valid from his peer, and he registers his change coin through his peer, and Ann registers her payment coin through her peer, his peer has no idea what the hash of that payment output was, and Ann's peer therefore has no way of knowing where it came from.

Instead of obfuscating the data on the public broadcast channel with clever cryptography that wastes a great deal of space, as Monero does, we just do not make it public in the first place, resulting in an immense reduction in the storage space required for the blockchain, a very large reduction in the bandwidth, and a very large reduction of the load on peers. They do not have to download and validate every single transaction, which validation is quite costly, and more costly with Monero than Bitcoin.

Once all the necessary commits have been registered on the total order broadcast channel, only the client wallets of the parties to the transaction can produce a proof for each of the outputs from that transaction that the transaction is valid. They do not need to publish on the total order broadcast channel what transaction that was, and what the inputs to that transaction were.

So we end up with the blockchain carrying only $O(\log h)$ blocks, where h is the block height, and all these blocks are likely to be of roughly comparable sizes to a single base level block. So, a blockchain with as many transactions as bitcoin, that has been running as long as bitcoin, will only occupy a few dozen megabytes of disk storage, rather than near a terabyte. At a block height of a hundred thousand, we would be keeping about fifty blocks around, instead of a hundred thousand blocks.

If we are using Nova commitments, which are eight or nine kilobytes, in place of regular hashes, which are thirty two bytes, the blockchain will still only occupy ten or twenty gigabytes, but, if using Nova commitments, bandwidth limits will force us to shard when we reach bitcoin transaction rates. But with recursive snarks, you can shard, because each shard can produce a concise proof that it is not cheating the others, while with bitcoin, everyone has to evaluate every transaction to prove that no one is cheating.

## Bigger than Visa

And when it gets so big that ordinary people cannot handle the bandwidth and storage, recursive snarks allow sharding the blockchain. You cannot shard the bitcoin blockchain, because a shard might lie, so every peer would have to evaluate every transaction of every shard. But with recursive snarks, a shard can prove it is not lying.

### sidechaining

One method of sharding is sidechaining.

Each transaction output contains a hash of the verification rule, one of the requirements whereby one will prove that the output was validly committed as an input to a transaction when the time comes to commit it. One always has to prove that the transaction will not create money out of thin air, but one also has to prove the transaction was done by valid authority, and the output defines what its valid authority is. The normal and usual verification rule is to prove that the party committing the output knows a certain secret. But the verification rule could be anything, thus enabling contracts on the blockchain. It could instead be that a valid current state of the sidechain, a valid descendant of the state used in the previous similar transaction that created this output, committed this output as an input to the new transaction -- in which case the output represents the money on a sidechain, and the transaction moves money between the sidechain and the mainchain.
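A toy sketch of an output that commits to its verification rule only by hash. All names here are invented, and in the real design the spender shows a recursive snark that he verified the rule, rather than revealing and running it as done here:

```python
# Toy sketch: an output commits to its verification rule only by hash. The
# names are invented; in the real design the spender shows a recursive snark
# that he ran the rule, rather than revealing and executing it as done here.
from hashlib import sha256

def knows_secret(witness: bytes) -> bool:        # the rule itself, held by the spender
    return sha256(witness).digest() == sha256(b"my secret").digest()

RULE_BLOB = b"rule: knows preimage of <commitment>"  # serialized rule, a stand-in
output = {"value": 100, "rule_hash": sha256(RULE_BLOB).digest()}

def spend(output: dict, rule_blob: bytes, rule_fn, witness: bytes) -> bool:
    # the chain checks only that the rule presented is the one committed to...
    if sha256(rule_blob).digest() != output["rule_hash"]:
        return False
    # ...and that the rule accepts; outsiders never learn what the rule was
    return rule_fn(witness)

assert spend(output, RULE_BLOB, knows_secret, b"my secret")
assert not spend(output, RULE_BLOB, knows_secret, b"wrong secret")
assert not spend(output, b"a different rule", knows_secret, b"my secret")
```

The point of the indirection is that anyone can define a new rule, a new contract, or a whole sidechain, and the rest of the blockchain never needs to learn the rule, only that its hash matched and it was satisfied.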

This hash allows anyone to innovate some sidechain idea, or some contract idea, without needing everyone else on the blockchain to buy in first. The rest of the blockchain does not have to know how to verify valid authority, does not need to know the preimage of the hash of the method of verification, just verify that the party committing did the correct verification, whatever it was. Rather than showing the total order broadcast channel a snark for the verification of authority, which other people might not be able to check, the party committing a transaction shows it a recursive snark that shows that he verified the verification of authority using the verification method specified by the output, without bloating the public broadcast channel by revealing what method the output specified. What that method was, outsiders do not need to know, reducing the burden of getting everyone playing by the same complex rules. If a contract or a sidechain looks indistinguishable from any other transaction, it not only creates privacy and reduces the amount of data that other people on the blockchain have to handle and know how to handle, it also radically simplifies blockchain governance, bringing us closer to the ideal of transactions over distance being governed by mathematics, rather than men.

## Private ledger

An enterprise derives its collective existence from its ledger. The enterprise as a collective entity is a thirteenth century accounting fiction that fourteenth century businessmen imagined into reality.

For sovereign corporations, a great deal of corporate governance can be done by the laws of mathematics, rather than the laws of men, which was one of the original cypherpunk goals and slogans that Satoshi was attempting to fulfil. We always intended from the very beginning to destroy postmodern capitalism and restore the modern capitalism of Charles the Second.

The commits form a directed acyclic graph. Each particular individual who knows the preimage of some of the hashes of outputs and commits committed to the public broadcast channel knows some paths through the directed acyclic graph.

One of those paths corresponds to his private ledger, for which eventually we should write database and bookkeeping software. And that path can prove the ledger is immutable and append only.

But we would like him to be able to prove to a counterparty that his ledger is immutable and append only, and that the information he is showing the counterparty is consistent with the information he shows every other counterparty.

To accomplish this, an output needs to be able to own a name and the associated public key, thus the name identifies a single path through the merkle dag, and it is possible to prove the ledger consistent along this named path.

And we want him to be able to prove that he is showing facts about his ledger that are consistent with everyone else's ledgers. To do that, we use triple entry accounting, where a journal entry that lists an obligation of a counterparty as an asset, or an obligation to a counterparty as a liability, references a jointly signed row that must exist in both parties' ledgers, signed by the non fungible name tokens of both parties.

Double entry accounting shows the books balance. Triple entry accounting shows that obligations between parties recorded on their books balance. Triple entry accounting was originally developed by cypherpunks when we were attempting to create internet money based on moving ownership of gold around.

Such non fungible name tokens would also be necessary for a reputation system, if we want to eat Amazon's and eBay's lunch.