diff --git a/docs/design/TCP.md b/docs/design/TCP.md
index dd5722a..50d675f 100644
--- a/docs/design/TCP.md
+++ b/docs/design/TCP.md
@@ -102,7 +102,7 @@ upper bound. To find the actual MTU, have to have a don't fragment field
(which is these days generally set by default on UDP) and empirically
track the largest packet that makes it on this connection. Which TCP does.
-MTU (packet size) and MSS (data size, $MTU-40$) is a
+MTU (packet size) and MSS (data size, $MTU-40$) are a
[messy problem](https://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/25885-pmtud-ipfrag.html)
which can be sidestepped by always sending packets
of size 576 containing 536 bytes of data.
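A hedged sketch of that arithmetic, assuming the usual 20 byte IPv4 header plus 20 byte TCP header (which is where the 40 comes from):

```python
# MSS is the data that fits in a packet once the 40 bytes of
# IPv4 + TCP headers (20 + 20) are subtracted from the MTU.
def mss(mtu: int) -> int:
    return mtu - 40

assert mss(576) == 536    # the always-safe packet size above
assert mss(1500) == 1460  # typical ethernet MTU
```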
diff --git a/docs/design/peer_socket.md b/docs/design/peer_socket.md
index 90c2eda..5d712cf 100644
--- a/docs/design/peer_socket.md
+++ b/docs/design/peer_socket.md
@@ -38,7 +38,7 @@ deserializes.
## layer responsibilities
The sockets layer just sends and receives arbitrary size blocks
-of opaque bytes over the wire between two machines.
+of opaque bytes over the wire between two machines.
They can be sent with or without flow control
and with or without reliability,
but if the block is too big to fit in this connection's maximum
@@ -68,8 +68,8 @@ well be handled by an instance of a class containing only a database index.
# Representing concurrent communicating processes
node.js represents them as continuations. Rust tokio represents them
-as something like continuations. Go represents them lightweight
-threads, which is a far more natural and easier to use representation,
+as something like continuations. Go represents them as lightweight
+threads, which is a far more natural and easier to use representation,
but under the hood they are something like continuations, and the abstraction
leaks a little. The abstraction leaks a little in the case you have one
concurrent process on one machine communicating with another concurrent
@@ -128,7 +128,7 @@ to a function pointer, or something that is exactly equivalent to a function poi
Of course, we very frequently do not have any state, and you just
cannot have a member function to a static function. One way around
this problem is just to have one concurrent process whose state just
-does not change, one concurrent process that cheerfully handles
+does not change, one concurrent process that cheerfully handles
messages from an unlimited number of correspondents, all using the same
`in-regards-to`, which may well be a well known named number, the functional
equivalent of a static web page. It is a concurrent process,
@@ -144,7 +144,7 @@ it looks up the in-reply-to field in the database to find
the context. But a database lookup can hang a thread,
which we do not want to stall network facing threads.
-So we have a single database handling thread that sequentially handles a queue
+So we have a single database handling thread that sequentially handles a queue
of messages from network facing threads driving network facing concurrent
processes, drives database facing concurrent processes,
which dispatch the result into a queue that is handled by
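The single database handling thread described above can be sketched as follows (hypothetical names, Python threads and an in-memory dict standing in for the real network facing threads and database):

```python
import queue
import threading

# Hypothetical in-memory stand-in for the database:
# in-reply-to id -> conversation context.
db = {17: "order-state", 42: "handshake-state"}

requests = queue.Queue()  # network facing threads enqueue lookups here
replies = queue.Queue()   # database facing processes dispatch results here

def db_thread():
    # The single database handling thread drains the queue sequentially,
    # so a slow lookup never stalls a network facing thread.
    while True:
        msg_id = requests.get()
        if msg_id is None:  # shutdown sentinel
            break
        replies.put((msg_id, db.get(msg_id)))

t = threading.Thread(target=db_thread)
t.start()
requests.put(17)
requests.put(None)
t.join()
result = replies.get()  # a network facing thread would pick this up
```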
@@ -186,7 +186,7 @@ or is implicit in the in-reply-to field.
If you could be receiving events from different kinds of
objects about different matters, well, you have to have
different kinds of handlers. And usually you are only
-receiving messages from only one such object, but in
+receiving messages from only one such object, but in
irritatingly many special cases, several such objects.
But it does not make sense to write for the fully general case
@@ -195,7 +195,7 @@ case ad-hoc by a special field, which is defined only for this
message type, not defined as a general quality of all messages.
It typically makes sense to assume we are handling only one kind
-of message, possibly of variant type, from one object, and in
+of message, possibly of variant type, from one object, and in
the other, special, cases, we address that case ad hoc by additional
message fields.
@@ -256,7 +256,7 @@ maybe Asio is broken at the core
And for flow control, I am going to have to integrate Quic,
though I will have to fork it to change its security model
from certificate authorities to Zooko names. You can in theory
-easily plug any kind of socket into Asio,
+easily plug any kind of socket into Asio,
but I see a suspicious lack of people plugging Quic into it,
because Quic contains a huge amount of functionality that Asio
knows nothing of. But if I am forking it, can probably ignore
@@ -317,7 +317,7 @@ message handling infrastructure that keeps track of state.
## receive a message with no in‑regards‑to field, no in‑reply‑to field
This is directed to a re-entrant function, not a functor,
-because re‑entrant and stateless.
+because re‑entrant and stateless.
It is directed according to message type.
### A message initiating a conversation
@@ -364,14 +364,14 @@ the in‑reply‑to field, but this would result in state management and state
transition of huge complexity. The expected usage is it has a small
number of static fields in its state that reference a limited number
of recently sent messages, and if the incoming message is not one
-of them, it treats it as an error. Typically the state machine is
+of them, it treats it as an error. Typically the state machine is
only capable of handling the
response to its most recent message, and merely wants to be sure
that this *is* a response to its most recent message. But it could
have shot off half a dozen messages with the in‑regards‑to field set,
and want to handle the response to each one differently.
Though likely such a scenario would be easier to handle by creating
-half a dozen state machines, each handling its own conversation
+half a dozen state machines, each handling its own conversation
separately. On the other hand, if it is only going to be a fixed
and finite set of conversations, it can put all ongoing state in
a fixed and finite set of fields, each of which tracks the most
@@ -392,7 +392,7 @@ He creates an instance of a state machine (instance of a message
handling class) and sends a message with no in‑regards‑to field
and no in‑reply‑to field but when he sends that initial message,
his state machine gets put in, and owned by,
-the dispatch table according to the message id.
+the dispatch table according to the message id.
Carol, on receiving the message, also creates a state machine,
associated with that same message id, albeit the counterparty is
@@ -403,7 +403,7 @@ Bob's dispatch layer dispatches to the appropriate state machine
And then it is back and forth between the two stateful message handlers
both associated with the same message id until they shut themselves down.
-
+
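A minimal sketch of that ownership scheme (hypothetical names, a Python dict standing in for the real dispatch layer):

```python
# The dispatch table owns one state machine per message id.
class Conversation:
    def __init__(self):
        self.received = []

    def handle(self, msg):
        # Stand-in for real state transitions: record and count.
        self.received.append(msg)
        return len(self.received)

dispatch = {}  # message id -> owning state machine

def send_initial(msg_id):
    # Sending the opening message puts the sender's state machine
    # into the dispatch table under that message id.
    dispatch[msg_id] = Conversation()

def on_receive(msg_id, msg):
    # Every later message with the same id goes to the same machine,
    # until the handlers shut themselves down.
    return dispatch[msg_id].handle(msg)

send_initial(99)
on_receive(99, "reply-1")
count = on_receive(99, "reply-2")
```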
## factoring layers.
A layer is code containing state machines that receive messages
diff --git a/docs/design/proof_of_share.md b/docs/design/proof_of_share.md
index 644210b..4dbde8a 100644
--- a/docs/design/proof_of_share.md
+++ b/docs/design/proof_of_share.md
@@ -131,7 +131,7 @@ One way out of this is proof of share, plus evidence of good
connectivity, bandwidth, and disk speed. You have a crypto currency
that works like shares in a startup. Peers have a weight in
the consensus, a likelihood of their view of the past becoming the
-consensus view, that is proportional to the amount of
+consensus view, that is proportional to the amount of
crypto currency their client wallets possessed at a certain block height,
$\lfloor(h−1000)/4096\rfloor∗4096$, where $h$ is the current block height,
provided they maintain adequate data, disk access,
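The sampling height formula can be sketched directly, floor division doing the $\lfloor\ \rfloor$:

```python
def sampling_height(h: int) -> int:
    # Block height at which share balances are sampled:
    # floor((h - 1000) / 4096) * 4096
    return (h - 1000) // 4096 * 4096

assert sampling_height(10000) == 8192
```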
@@ -167,7 +167,7 @@ to the amount of share it represents.
Each peer sees the weight of a proposed block as
the median weight of the three highest weighted peers
that it knows know or knew of the block and its contents according to
-their current weight at this block height and perceived it has highest
+their current weight at this block height and perceived it as highest
weighted at the time they synchronized on it, plus the weight of
the median weighted peer among up to three peers
that were recorded by the proposer
@@ -193,9 +193,9 @@ favors peers with good bandwidth and data access, and peers that
are responsive to other peers, since they have more and better connections
thus their proposed block is likely to become widely known faster.
-If only one peer, the proposer, knows of a block, then its weight is
+If only one peer, the proposer, knows of a block, then its weight is
+the weight of the proposer, plus that of previous blocks, but is lower than
-the weight if any alternative block whose previous blocks have the
+the weight of any alternative block whose previous blocks have the
same weight but is known to two proposers.
This rule biases the consensus to peers with good connection and
@@ -203,14 +203,14 @@ good bandwidth to other good peers.
If comparing two proposed blocks, each of them known to two proposers, that
have chains of previous blocks that are the same weight, then the weight
-is the lowest weighted of the two, but lower than any block known to
+is the lowest weighted of the two, but lower than any block known to
three. If known to three, the median weight of the three. If known to
a hundred, only the top three matter and only the top three are shared
around.
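A sketch of the weighting rule, under the assumption that the rules above reduce to: with three or more peers known to know the block, the median of the top three; with fewer, the lowest of those known:

```python
def block_weight(peer_weights):
    # Only the top three weighted peers that know the block matter.
    top = sorted(peer_weights, reverse=True)[:3]
    if len(top) == 3:
        return top[1]   # median of the top three
    return min(top)     # one or two known: the lowest weighted

assert block_weight([5, 9, 2, 7, 4]) == 7
```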
It is thus inherently sybil proof, since if one machine is running
a thousand sybils, each sybil has one thousandth the share
-representation, one thousandth the connectivity, one thousandth
+representation, one thousandth the connectivity, one thousandth
the random disk access, and one thousandth the cpu.
# Collapse of the corporate form
diff --git a/docs/estimating_frequencies_from_small_samples.md b/docs/estimating_frequencies_from_small_samples.md
index b8746cb..af6dae6 100644
--- a/docs/estimating_frequencies_from_small_samples.md
+++ b/docs/estimating_frequencies_from_small_samples.md
@@ -151,10 +151,26 @@ $$P_{new}(ρ) = \frac{ρ × Ρ_{prior}(ρ)}{P_X}$$
# Beta Distribution
+Here I discuss the Beta distribution for a zero dimensional observable.
+The observable is either true or false, green or not green, and $α$ and $β$
+are continuous quantities, real numbers.
+
+If the observable is a real number, then $α$ and $β$ are size two vectors, points
+in a two dimensional space, and this is known as the gamma distribution.
+If the observable is a two dimensional vector,
+a point in a two dimensional space, then $α$ and $β$ are size three vectors,
+points in a three dimensional space, and this is known as the delta distribution.
+
The Beta distribution is
$$P_{αβ}(ρ) = \frac{ρ^{α-1} × (1-ρ)^{β-1}}{B(α,β)}$$
where
$$B(α,β) = \frac{Γ(α) × Γ(β)}{Γ(α + β)}$$
+$ρ$ is the *real* probability that the ball will be green, and $α$ and $β$
+represent our prior for the likely probability of this probability.
+In the delta distribution, we have the probability not of true or false, but
+of a poisson distribution, which is itself the probability of a value,
+so we have another level of recursion.
+
$Γ(α) = (α − 1)!$ for positive integer α\
$Γ(1) = 1 = 0!$\
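A numeric sketch of the density and its conjugate update, `math.gamma` supplying $Γ$:

```python
from math import gamma

def beta_pdf(rho, a, b):
    # P(ρ) = ρ^(α-1) (1-ρ)^(β-1) / B(α,β),  B(α,β) = Γ(α)Γ(β)/Γ(α+β)
    B = gamma(a) * gamma(b) / gamma(a + b)
    return rho ** (a - 1) * (1 - rho) ** (b - 1) / B

# Conjugate update: drawing a green ball takes (α, β) to (α+1, β),
# a not-green ball takes it to (α, β+1).
density = beta_pdf(0.5, 2, 2)
```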
@@ -211,7 +227,7 @@ counts as evidence. To apply, we need to take into account *all*
evidence, and everything in the universe has some relevance.
Thus to answer the question “what proportion of men are mortal” the
-principle of maximum entropy, naiely applied, leads to the conclusion
+principle of maximum entropy, naively applied, leads to the conclusion
that we cannot be sure that all men are mortal until we have first checked
all men. If, however, we include amongst our priors the fact that
all men are kin, then that all men are X, or no men are X has to have a
@@ -229,7 +245,7 @@ factor, so that what was once known with extremly high probability, is now
only known with reasonably high probability. There is always some
unknown, but finite, substantial, and growing, probability of a large
change in the state of the network, rendering past evidence
-irrelevant.
+irrelevant.
Thus any adequately flexible representation of the state of the network
has to be complex, a fairly large body of data, more akin to a spam filter
@@ -237,7 +253,9 @@ than a boolean.
# A more realistic prior
-## The beta distribution
+## The beta distribution corrected
+
+Corrected for the expectation that our universe is orderly and predictable.
The Beta distribution has the interesting property that for each new test,
the Bayesian update of the Beta distribution is also a Beta distribution.
@@ -245,7 +263,10 @@ the Baysian update of the Beta distribution is also a Beta distribution.
Suppose our prior, before we take any samples from the urn, is that the probability that the proportion of samples in the urn that are X is ρ is
$$\frac{1}{3}P_{11} (ρ) + \frac{1}{3}δ(ρ) + \frac{1}{3}δ(1-ρ)$$
-We are allowing for a substantial likelihood of all X, or all not X.
+We are allowing for a substantial likelihood of all X, or all not X,
+whereas under the ordinary beta prior, in the absence of information,
+the likelihood of all X or no X is infinitesimal. Realistically, it
+is substantial.
If we draw out $m + n$ samples, and find that $m$ of them are X, and $n$ of
them are not X, then the $δ$ terms drop out, and our prior is, as usual the
@@ -291,6 +312,11 @@ Which corresponds to our intuition on the question “all men are mortal” If w
In contrast, if we assume the beta distribution, this implies that the likelihood of the run continuing forever is zero.
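A numeric sketch of the corrected posterior: with the mixture prior above ($\frac{1}{3}P_{11} + \frac{1}{3}δ(ρ) + \frac{1}{3}δ(1-ρ)$), after seeing $k$ out of $k$ samples X, the posterior weight of "all X" works out to $(k+1)/(k+2)$, which approaches certainty as the run continues:

```python
from fractions import Fraction

def p_all_x(k: int) -> Fraction:
    # Likelihood of k-for-k successes under each surviving component:
    like_all = Fraction(1)          # δ(1-ρ): ρ = 1 predicts every draw X
    like_flat = Fraction(1, k + 1)  # Beta(1,1): ∫ ρ^k dρ = 1/(k+1)
    # δ(ρ) predicts zero successes, so it drops out; the equal
    # prior weights of 1/3 cancel, leaving:
    return like_all / (like_all + like_flat)

assert p_all_x(10) == Fraction(11, 12)
```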
+Similarly, to correct the delta distribution, we have to add the assumption
+that certain trivial poisson distributions that have infinitesimal likelihood
+in the delta distribution have finite likelihood. Though in many realistic
+cases this is not going to be the case, for a real value is usually going to have some
+finite distribution, and thus it is more likely to be OK to use the uncorrected
+delta distribution.
+
## the metalog (metalogistic) distribution
The metalogistic distribution is like the Beta distribution in that
diff --git a/docs/libraries.md b/docs/libraries.md
index b0d06b4..965511b 100644
--- a/docs/libraries.md
+++ b/docs/libraries.md
@@ -11,6 +11,25 @@ It should be treated as a bunch of hints likely to point the reader
in the correct direction, so that the reader can do his homework
on the appropriate library. It should not be taken as gospel.
+# Rust blockchain related libraries
+
+The most important work in blockchains these days appears to be in Rust.
+
+I am behind the times.
+
+[Awesome Blockchain Rust](https://rustinblockchain.org/awesome-blockchain-rust/#layer2){target="_blank"}
+
+Rust is in most ways the best system for large projects that can be worked on by many people and
+installed easily by many people, but it has the huge defect of an almost endless learning curve,
+and that its compiles on very large programs take a very long time. With C++, if you change one
+file, it recompiles very quickly. Since Rust linking basically does not work, it has to recompile
+everything, which makes coding on large programs very slow. This is a killer if you are generating
+an executable that does a lot of things. And anything with a gui *does* do a lot of things.
+
+So if you try to create a gui program in Rust you wind up using a rust wrapper
+around a gui written in some other language, which results in a gui that sucks.
+Rust, however, is best for a daemon running in the background that does networking.
+
# Recursive snarks
A horde of libraries are rapidly appearing on GitHub,
@@ -18,15 +37,96 @@ most of which have stupendously slow performance,
can only generate proofs for absolutely trivial things,
and take a very long time to do so.
+[Blog full of the latest hot stuff](https://giapppp.github.io/posts/){target="_blank"}
+
+
+[An Analysis of Polynomial Commitment Schemes: KZG10, IPA, FRI, and DARKS](https://medium.com/@ola_zkzkvm/an-analysis-of-polynomial-commitment-schemes-kzg10-ipa-fri-and-darks-a8f806bd3e12){target="_blank"}
+
+[Inner Product Arguments](https://dankradfeist.de/ethereum/2021/07/27/inner-product-arguments.html){target="_blank"}
+A basic explanation of polynomial commitments on ordinary curves
+
+Verification is linear in the length of the polynomial, and logarithmic in the number of polynomials,
+so you want a commitment to quite a lot of short fixed length polynomials.
+All the polynomials are of the same fixed length
+defined by the protocol, but the number of polynomials can
+be variable. Halo 2 can reference relative fields -- you can have a proof that a value
+committed in polynomial $N$ bears some relationship to the value in polynomial $N+d$.
+
+[most efficient pairing curves still standing]:https://arxiv.org/pdf/2212.01855
+
+
+A whole lot of pairing curves have fallen to recent attacks.
+
+The [most efficient pairing curves still standing] at the time that this paper was written are the BLS 12-381 curves. (126 bits security)
+
+ZCash, Ethereum, Chia Network, and Algorand have all gone with BLS 12-381, so these probably have the best developed libraries.
+
+[IETF pairing curve paper]:https://www.ietf.org/archive/id/draft-irtf-cfrg-pairing-friendly-curves-10.ht
+
+The [IETF pairing curve paper] has a list of libraries
+
+## Dory
+
+[192 bit polynomial commits]:https://eprint.iacr.org/2020/1274.pdf
+
+[192 bit polynomial commits].
+
+> Dory is also concretely efficient: Using one core and
+> setting $n = 2^{20}$, commitments are 192 bytes.
+> Evaluation proofs are 18 kB, requiring
+> 3 s to generate and 25 ms to verify.
+> For batches at $n = 2^{20}$, the marginal
+> cost per evaluation is <1 kB communication,
+> 300 ms for the Prover and 1 ms for the Verifier.
+
+Seems to generate a verkle tree of polynomial commits (a verkle trie being
+the polynomial commitment equivalent of a merkle trie)
+and prove something about the preimage of the root
+-- which sounds like exactly what the doctor ordered.
+You prove the preimage of each vertex on the path,
+and then prove the things about the leaf part of the pre-image.
+
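The merkle analogue of that path-preimage idea can be sketched as follows (a real verkle proof would use polynomial commitments rather than hashes; the names here are hypothetical):

```python
import hashlib

def h(*parts: bytes) -> bytes:
    # Commit to children by hashing their concatenation.
    return hashlib.sha256(b"".join(parts)).digest()

# A tiny two-level tree: the root commits to everything below it.
leaves = [b"a", b"b", b"c", b"d"]
left, right = h(leaves[0], leaves[1]), h(leaves[2], leaves[3])
root = h(left, right)

def verify(root: bytes, leaf: bytes, sibling: bytes, co_path: bytes) -> bool:
    # Reveal the preimage of each vertex on the path from leaf to root,
    # then check that hashing back up reproduces the root.
    return root == h(co_path, h(leaf, sibling))

ok = verify(root, b"c", b"d", left)
```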
+## Nova
+
[Nova]:https://github.com/microsoft/Nova
{target="_blank"}
-[Nova] claims to be fast, is being frequently updated, needs no trusted setup, and other people are writing toy programs using [Nova].
+[Nova white paper](https://eprint.iacr.org/2021/370.pdf){target="_blank"}
+
+The folded proof has to contain an additional proof that the folding was done correctly.
+
+Plonk or Groth16 can be used as a proof system inside Nova. Recursion is very low cost,
+but its native language is inherently relaxed R1CS, and plonkish and Groth16 get
+translated back and forth to relaxed R1CS.
+
+[Nova] claims to be fast, is being frequently updated, needs no trusted setup,
+and other people are writing toy programs using [Nova]. YouTube videos report some
+real programs going into Nova -- to reduce the horrific cost of snarks and recursive snarks.
[Nova] claims you can plug in other elliptic curves, though it sounds like you
might need alarmingly considerable knowledge of elliptic curves in order to
do so.
+Nova can be used, is intended to be used, and is being used as a preprocessing step to
+give you the best possible snark, but could also be used standalone,
+as mimblewimble used polynomial commits alone.
+
+The standard usage is incrementally verifiable computation, a linear chain,
+but to get full trie computation, you have many instances doing the heavy lifting,
+which communicate by "proof carrying data".
+
+[YouTube video](https://www.youtube.com/watch?v=SwonTtOQzAk) says Nova,
+bulletproofs *modified from R1CS to relaxed R1CS*,
+and we can have a trie of provers. Well, if we have a trie of provers,
+why should not anyone who wants to inject a transaction be a prover?
+And if everyone is a prover, we need no snarks.
+
+Everyone shares the proving that he alone has done, and the state that only
+he needs, encrypted in a form that only he can read, among thirty two or so neighbors,
+and similarly stuff that only two of them can read, or
+only four of them can read, in case he crashes, goes offline,
+and loses his state.
+
+Nova requires the vm to be standardized and repetitious.
+
Plonky had a special purpose hash, such that it was
easy to produce recursive proofs about Merkle trees.
I don't know if Nova can do hashes with useful speed, or hashes at all,
@@ -83,6 +183,14 @@ That is procedural, but expressing the relationships is not.
Since your fold is the size of the largest hamiltonian circuit so far,
you want the steps to be all of similar size.
+## Halo 2
+
+Halo 2 is a general purpose snark library created for ZCash,
+replacing their earlier library and using a different curve.
+It directly supports performing an SHA2 hash inside the proof and verification.
+I don't know how fast that is, and I did not immediately find
+any examples recursing over an SHA merkle tree and merkle chain.
+
This suggests a functional language (sql). There are, in reality,
no purely functional languages for Turing machines.
Haskell has its monads, sql has update, insert, and delete.
@@ -106,8 +214,8 @@ The proof language, as is typical of purely functional languages,
such as Rust or C++, but the proof language should have no concept of that.
if the proof language asserts that $1 \leq 0 \lor n<20 \implies f(n-1)= g(f(n))$,
- the ordinary procedural language will likely need to
- generate the values of $f(n) for n=1 to 19,
+ the ordinary procedural language will likely need to
+ generate the values of $f(n)$ for $n=1$ to $19$,
and will need to cause the proof language to generate proofs for each value
of $n$ for 1 to 19, but the resulting proof will be independent
of the order in which these proofs were generated.
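A sketch of that order independence (hypothetical `f` and `g` chosen so the implication holds; the procedural side emits proof obligations as a set, so the order of generation cannot matter):

```python
# Hypothetical instances: f(n) = n and g(x) = x - 1 satisfy f(n-1) == g(f(n)).
def f(n): return n
def g(x): return x - 1

def obligations(order):
    # The procedural language computes concrete values in some order and
    # hands each equality f(n-1) == g(f(n)) to the proof language.
    # Collecting them in a set makes the result order independent.
    return {(n, f(n - 1), g(f(n))) for n in order}

forward = obligations(range(1, 20))
backward = obligations(reversed(range(1, 20)))
same = forward == backward  # the proofs are the same either way
```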
@@ -128,8 +236,8 @@ does not represent the good stuff.
## Freenet
-Freenet has long intended to be, and perhaps is, the social
-application that you have long intended to write,
+Freenet has long intended to be, and perhaps is, the social
+application that you have long intended to write,
and has an enormous coldstart advantage over anything you could write,
no matter how great.
@@ -142,12 +250,12 @@ One big difference is that I think that we want to go after the visible net,
where network addresses are associated with public keys - that the backbone should be
ips that have a well known and stable relationship to public keys.
-Which backbone transports encrypted information authored by people whose
+Which backbone transports encrypted information authored by people whose
public key is well known, but the network address associated with that
public key cannot easily be found.
Freenet, by design, chronically loses data. We need reliable backup,
-paid for in services or crypto currency.
+paid for in services or crypto currency.
filecoin provides this, but is useless for frequent small incremental
backups.
@@ -248,7 +356,7 @@ likely uptime, downtime, or IP stability of an entity. Which cannot
in fact be known, but you can form probability of a probability estimates.
What is needed is that everyone forms their own probability of a probability.
-And they compare what they know
+And they compare what they know
(they have a high probability of a probability)
with other party's estimates, and rate the other party's reliability accordingly
If we add to that probability of probability estimate of IP and port stability
@@ -442,7 +550,7 @@ git symbolic-ref HEAD refs/heads/«our_new_branch»
If you fail to set your remote to your team's default branch, then your
local repositories will keep getting reset back to their team's branch, and
-chaos ensues.
+chaos ensues.
When moving the remote of a submodule in your local repository (usually from their team's remote to your team's remote) you update `.gitmodules` in your superproject, and in each submodule that has submodules of its own, then
@@ -579,7 +687,7 @@ you rely someone else's compiled code, things break and you get
accidental and deliberate backdoors, which is a big concern when you are
doing money and cryptography.
-When your submodules are simply your copy of someone else code, it gets
+When your submodules are simply your copy of someone else's code, it gets a
little bit messy. When you change them, it gets messier.
And visual studio's handling of submodules is just broken and buggy. A
@@ -593,14 +701,14 @@ end of mysterious grief ensues, because strange and curiously difficult to
identify differences appear between builds that Git would normally ensure
are the same build. Submodules are a halfway house between completely
absorbing the other party's code into your code, and using it as a prebuilt
-library. Instead, we have walls dividing the project into pieces, which is a
+library. Instead, we have walls dividing the project into pieces, which is a
lot less grief than one big pile of code, but managing those walls winds up
taking a lot of time, and mistakes get made because a git commit in a
project with submodules that have changed does not mean quite the same
thing, nor have quite the same behaviour, as git commit in a project with
unchanging submodules. But then truly integrating a project that is the
product of a great deal of time by a great many of people, and managing it
- thereafter, is likely to take up a great deal more time.
+ thereafter, is likely to take up a great deal more time.
Git Submodules is hierarchical, but source code has strange loops. The
Bob module uses the Alice module and the Carol module, but Alice uses
@@ -761,7 +869,7 @@ Cmake does not really work all that well with the MSVC environment.\
## vscode
-Vscode has taken the correct path, for one always winds up with a full
+Vscode has taken the correct path, for one always winds up with a full
language and full program running the build from source, and they went
with javascript. Javascript is an unworkable language that falls apart on
any large complex program, but one can use typescript which compiles to javascript.
@@ -841,7 +949,7 @@ libraries, but I hear it cursed as a complex mess, and no one wants to
get into it. They find the far from easy `cmake` easier. And `cmake`
runs on all systems, while autotools only runs on linux.
-MSYS2, which runs on Windows, supports autotools. So, maybe it does run
+MSYS2, which runs on Windows, supports autotools. So, maybe it does run
on windows.
[autotools documentation]:https://thoughtbot.com/blog/the-magic-behind-configure-make-make-install
@@ -1011,7 +1119,7 @@ under bitrot, and no one is maintaining them any more.
Every ide has its own replacement for makefiles, most of them also broken
to a greater or lesser extent, and now ides are moving to CMake. If a folder has
-a CMakeLists.txt file in its root, or the CMake build file, it is a project, and
+a CMakeLists.txt file in its root, or the CMake build file, it is a project, and
the existing project files in the existing format are now obsolete, even though
they will continue to be used for a very long time.
@@ -1031,7 +1139,7 @@ the documentation for these commands, which documentation
`Cmake` runs on both Windows and Linux, and is a replacement for autotools, that runs only on Linux.
-Going with `cmake` means you have a defined standard cross platform development environment,
+Going with `cmake` means you have a defined standard cross platform development environment,
`vscode`, which is wholly open source, and a defined standard cross platform packaging system,
or rather four somewhat equivalent standard packaging systems, two for each platform.
@@ -1979,7 +2087,7 @@ The ICU library also provides a real regex function on unicode
(`sqlite3_strlike` being the C equivalent of the SQL `LIKE`,
providing a rather truncated fragment of regex capability)
Pretty sure the wxWidgets regex does something unwanted on unicode
-
+
`wString::ToUTF8()` and `wString::FromUTF8()` do what you would expect.
`wxString::c_str()` does something too clever by half.
@@ -1990,7 +2098,7 @@ Visual Studio to UTF‑8 with `/Zc:__cplusplus /utf-8 %(AdditionalOptions)`
And you need to set the run time environment of the program to UTF‑8
with a manifest. Not at all sure how codelite will handle manifests,
-but there is a codelite build that does handle utf-8, presumably with
+but there is a codelite build that does handle utf-8, presumably with
a manifest. Does not do it in the standard build on windows.
You will need to place all UTF‑8 string literals and string constants in a
diff --git a/docs/libraries/scripting.md b/docs/libraries/scripting.md
index f1774f5..f9324de 100644
--- a/docs/libraries/scripting.md
+++ b/docs/libraries/scripting.md
@@ -26,7 +26,7 @@ and virtual file systems, and C++ to call javascript through
Large projects tend to be written in a framework that supports typescript
-* Create React App
+* Create React App
* Next.js
* Gatsby
diff --git a/docs/manifesto/May_scale_of_monetary_hardness.md b/docs/manifesto/May_scale_of_monetary_hardness.md
index 0cee488..22062ba 100644
--- a/docs/manifesto/May_scale_of_monetary_hardness.md
+++ b/docs/manifesto/May_scale_of_monetary_hardness.md
@@ -78,7 +78,7 @@ text-align:center;">May Scale of monetary hardness
```
- [Three essays from different periods follow:]{.bigbold}
+ [Three essays from different periods follow:]{.bigbold}
diff --git a/docs/manifesto/bitcoin.md b/docs/manifesto/bitcoin.md
index 035fac1..f7cc600 100644
--- a/docs/manifesto/bitcoin.md
+++ b/docs/manifesto/bitcoin.md
@@ -5,7 +5,16 @@ title: How to move Bitcoin to recursive snarks
::: myabstract
[abstract:]{.bigbold}
-Crypto currencies based on recursive snarks are the future. Polygon's aggregated blockchains are a proposal to move the Ethereum ecosystem to recursive snarks, and if the Ethereum ecosystem moves to recursive snarks, and Bitcoin does not, Bitcoin will die.
+Crypto currencies based on recursive snarks are the future. Polygon's aggregated blockchains
+are a proposal to move the Ethereum ecosystem to recursive snarks,
+and if the Ethereum ecosystem moves to recursive snarks, and Bitcoin does not, Bitcoin will die,
+for Bitcoin is struggling with big scaling problems,
+while Polygon's aggregated blockchain will completely and permanently
+solve Ethereum's scaling problems.
+
+However, BitcoinOS's Grail Bridge brings snarks, sharding,
+and potentially recursive snarks to Bitcoin.
+
:::
# Step one
diff --git a/docs/manifesto/consensus.md b/docs/manifesto/consensus.md
index bfd45a6..e52534a 100644
--- a/docs/manifesto/consensus.md
+++ b/docs/manifesto/consensus.md
@@ -6,4 +6,3 @@ sidebar: true
notmine: false
abstract: >-
Consensus is a hard problem, and gets harder when you have shards
----
diff --git a/docs/manifesto/lightning.md b/docs/manifesto/lightning.md
index f982c01..75b90fc 100644
--- a/docs/manifesto/lightning.md
+++ b/docs/manifesto/lightning.md
@@ -14,7 +14,7 @@ abstract: >-
Bitcoin's initial rise was meteoric. As the scaling limit bites, that rise is topping out.
-To get continued substantial rises in the value of bitcoin, we have to replace SWIFT and the petrodollar
+To get continued substantial rises in the value of bitcoin, we have to replace SWIFT and the petrodollar.
For future growth, it is medium of exchange time. Without medium of exchange, store of value is running out of puff.
China is hoarding gold and the fiat of its major international trading partners.
If we can get SWIFT or the petrobitcoin or both, they will hoard bitcoin instead. If we don't, they won't.
@@ -51,7 +51,7 @@ Mutiny wallet and electrum is non custodial,
but their profit model that made it possible to pay people to produce nice software
relies on rather more quiet centralization than is admitted.
-Someone who owns a lot more bitcoin than I do should fund development
+Someone who owns a lot of bitcoin should fund development
of a lightning wallet done right in order to increase the value of Bitcoin.
The easiest and fastest way to bring up software that only runs on your computer
@@ -63,7 +63,7 @@ for other people to bring up a development system that emulates your system exac
Python software is just not suitable for a program that you expect
thousands and eventually hundreds of millions of people to use.
An open source program intended for deployment on an enormous scale has to be
-C, C++, or rust. Typescript and Go also works.
+C, C++, or Rust. TypeScript and Go also work.
The trouble with the existing non custodial liquid lightning wallet is that it is substantially
written in python, and therefore the install is appallingly difficult, an arcane emulation of
@@ -81,7 +81,7 @@ which should be as easy as sending a text from a phone -- a text that can contai
## in person
The way an in person lightning payment should work is that you touch phones,
-and it sends a message by NFC, or you scan a QR code displayed on the other guys phone,
+and it sends a message by NFC, or you scan a QR code displayed on the other guy's phone,
and up comes a human readable bill on your phone that will typically look something like
a supermarket receipt, you click OK, and after a second or two both phones show the bill paid.
And both phones record the human readable receipt forever.
@@ -92,9 +92,10 @@ You click on a link, which can be a link in the browser, which brings up not a b
but your wallet gui, and which causes your lightning wallet to initiate end to end encrypted communications
(using its own private key and the other wallet's public key, not certificate authority https keys,
using lightning channels, not web channels) with the wallet that created that link,
-nd human readable text should appear in your wallet from that other wallet,
+and human readable text should appear in your wallet from that other wallet,
human readable text containing buttons, buttons that cause events in the wallet, not in the browser.
-And one the things that can come up in one of the wallets is a human readable bill with an OK button.
+And one of the things that can come up in one of the wallets is a human readable bill, similar
+to a supermarket receipt or an eBay shopping cart, with an OK button.
And anyone you have a channel with, or have made a payment to, or received a payment from,
should be automatically buddy listed in your wallet,
@@ -102,7 +103,7 @@ so that he can send you wallet to wallet end-to-end encrypted messages.
You will probably get wallet spam from people you have purchased stuff from,
so you will need a spam button to unbuddy them.
-The wallet should be a browser and chat app that can send and receive money --
+The wallet should be a chat app and a browser that can send and receive money --
a Nostr specialized for medium of exchange and two party private conversations.
Mutiny wallet and Alby are Nostr bitcoin lightning wallets,
@@ -120,7 +121,9 @@ We need to be able to restore from the master secret, the seed phrase, alone.
The big point and big value proposition of cryptocurrency is that
you don’t have to suffer client status, with all its grave costs, dangers, and inconveniences.
-It is client status that is the problem that bitcoin was originally created to fix.
+
+It is client status, that someone else has power over your money and transactions,
+that is the problem that bitcoin was originally created to fix.
To recover your lightning wallet you need both the master secret
*and* the current state of your lightning wallet.
@@ -140,7 +143,7 @@ is watchtowers plus some new functionality.
A watchtower receives (almost) all the data you need to restore a channel. Why then, not all?
The stuff that they do not need to know, send encrypted so that only the
possessor of the master secret, the seed phrase, can read it.
- and have at least one watchtower for every channel,
+and have at least one watchtower for every channel,
plus each watchtower gets, encrypted, a list of some of the other watchowers,
so that if you can find one watchtower from seed phrase, you can find them all.
@@ -152,9 +155,9 @@ from its seed phrase, and that watchtower keeps a collection of lists of watchto
so that it cannot read it. We implement DHT reciprocal storing incentives. If you want to be sure
other people's watchtowers keep your list, better make sure your watchtowers keep other people's lists.
-So the way it should work is that is that whenever Bob sets up a channel with Dave,
+So the way it should work is that whenever Bob sets up a channel with Dave,
they agree to set up a couple of watchtowers for a couple of other channels —
-Dave sets up a couple of watch towers to watch a couple of Bob’s other channels,
+Dave sets up a couple of watchtowers to watch a couple of Bob’s other channels,
and Bob sets up a couple of watchtowers to watch a couple of Dave’s other channels.
And from time to time Bob sends to the watchtowers watching his channels
a list of watchtowers watching his channels, in encrypted form
@@ -175,20 +178,20 @@ lightning wallet is very hard.
At present the only way to create channels is to create a unilateral single channel with zero incoming liquidity.
Creating incoming liquidity is very hard, almost impossible. You have to hope that someone else
-will uniltarally initiate a channel to you, in which case you have all incoming liquidity in that
+will unilaterally initiate a channel to you, in which case you have all incoming liquidity in that
channel and no outgoing liquidity, and he has all outcoing liquidity in the channel and zero
incoming liquidity. And unless you are already a major routing node with lots of incoming and outgoing
-liquidity no one is noging to unilaterally initiate a channel to you.
+liquidity, no one is going to unilaterally initiate a channel to you.
## Polygon channels.
-A channeel is a shared utxo on the basechain. It needs to be possible to create a polygon channel.
+A channel is a shared utxo on the basechain. It needs to be possible to create a polygon channel.
For example six self custodial wallets create a transaction on the base chain
that creates six channel outputs and six change outputs --
the wallets are the vertices (corners) of a six sided polygon, a hexagon,
and the channels are the edges (sides) of a six sided polygon, the sides of the hexagon.
-Each channel has half incoming liquidity, and half outgoing liquidity
+Each channel has half incoming liquidity, and half outgoing liquidity.
The transaction either succeeds completely for everyone,
or if one of the participants fails or backs out for some reason,
@@ -214,7 +217,7 @@ than the older P2SH channels that lightning is still using.)
If we implement PTLC lightning transactions, these are fully onion wrapped, so no one
can tell who you are transacting with, nor what you are communicating with them about.
-We will have the privacy Satoship hoped for, and failed to provide.
+We will have the privacy Satoshi hoped for, and failed to provide.
# end users want their local fiat
@@ -222,14 +225,16 @@ We want one person to be able to send dollars and the recipient receive pesos, a
## atomic transactions between blockchains
-[point locked]:https://voltage.cloud/blog/lightning-network-faq/point-time-locked-contracts/{target="_blank"}
+[point locked]:https://voltage.cloud/blog/lightning-network-faq/point-time-locked-contracts/
+{target="_blank"}
-such as between bitcoin, and stablecoins representing wrapped local fiat, are possible if the lightning network uses [point locked] contracts, as some lightning developers have been advocating for many years.
+such as between bitcoin, and stablecoins representing wrapped local fiat, are possible if the lightning network uses [point locked] contracts, as some lightning developers have been advocating for many years. Point locked contracts were impossible when lightning was
+written, but taproot has made them possible. Lightning -- and indeed everything -- should
+use taproot. Taproot has obsoleted existing lightning contracts and it is well past time for
+an update.
-Any lightning network on one blockchain can in principle do atomic lightning exchanges with a lightning nework on another blockchain if both blockchains support Schnorr signatures and both lightning networks employ [point locked] contracts, but this extremely difficult if they use different signature algorithms, because differences in signature algorithms obstruct scriptless scripts. and an advanced research topic in elliptic curve cryptography if they use non prime elliptic curves.
+Any lightning network on one blockchain can in principle do atomic lightning exchanges with a lightning network on another blockchain if both blockchains support Schnorr signatures and both lightning networks employ [point locked] contracts, but this is difficult if they use different signature algorithms, because differences in signature algorithms obstruct scriptless scripts, and an advanced research topic in elliptic curve cryptography if they use non prime elliptic curves.
At present, lightning transactions are hash locked, which makes contracts over lightning difficult, and thus conversions difficult without a trusted third party. For four years some developers have been advocating point locks, which would make enable many lightning applications -- dollar lightning to bitcoin lightning to peso lightning among them.
-The liquid lightning network already supports atomic swaps between tether, a US dollar stablecoin, liquid lightning, and bitcoin lightning, but the sotware is such that this is not likely many people are going to use it. One can, with some effort and linux administration skills, atomically swap bitcoin lightning for liquid lightning, then liquid lightning for tether, today. These capabilities need to have go on the bitcoin lightning blockchain, and be given an interface more suitable for normies.
-
-s
\ No newline at end of file
+The liquid lightning network already supports atomic swaps between tether, a US dollar stablecoin, liquid lightning, and bitcoin lightning, but the software is such that not many people are likely to use it. One can, with some effort and linux administration skills, atomically swap bitcoin lightning for liquid lightning, then liquid lightning for tether, today. These capabilities need to go on the bitcoin lightning network, and be given an interface more suitable for normies.
diff --git a/docs/manifesto/scalability.md b/docs/manifesto/scalability.md
index 041e322..3eb18a7 100644
--- a/docs/manifesto/scalability.md
+++ b/docs/manifesto/scalability.md
@@ -273,6 +273,23 @@ transaction, but more efficent for the peer trying to get agreement between
all peers, on the total state of the blockchain,
which is apt to be the slow and expensive part.
+The most efficient way to get fast consensus is to first generate a total order over transaction inputs,
+blacklisting double spends in the process,
+which gives the recipient fast confidence that his money has come through,
+then generate a proof of validity of the transactions referenced,
+then introduce the outputs and proof of the outputs,
+which the recipient does not need in a hurry.
+He needs them so that he can spend the output,
+but does not need them to know that he will be able to spend the output,
+so he will not mind if that takes a while.
+But this faster and more efficient process requires
+some additional complexity to make sure that transaction outputs do not get lost.
+It is considerably simpler, though slower, less efficient, and less private,
+to keep the transactions and outputs together.
+But keeping them together means that the recipient does not know
+he has received the money until he can actually spend the money.
+Which is going to take longer.
+
Which sharding requires the prover to have a proof of the properties of the preimage of a hash,
without needing the keep the preimage around outside of the address range he is working on.
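The two stage flow added above (a total order over inputs with double spends blacklisted as the order is built, outputs and proofs following later) can be sketched as follows; the data shapes and function name here are illustrative assumptions, not the actual protocol:

```python
# Sketch: generate a total order over transaction inputs,
# blacklisting double spends in the process, as described above.
# Each transaction is (txid, [input utxo ids]); outputs and their
# proofs are introduced later, so they are not modeled here.
def order_inputs(transactions):
    spent = {}         # utxo id -> txid that first spends it
    blacklist = set()  # txids rejected as double spends
    order = []         # resulting total order over inputs
    for txid, inputs in transactions:
        if any(utxo in spent for utxo in inputs):
            blacklist.add(txid)  # respends an already ordered input
            continue
        for utxo in inputs:
            spent[utxo] = txid
            order.append((utxo, txid))
    return order, blacklist
```

Once a transaction's inputs appear in the order, the recipient knows his money has come through, even though he cannot yet spend the outputs.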
diff --git a/docs/manifesto/social_networking.md b/docs/manifesto/social_networking.md
index 451b024..01991d6 100644
--- a/docs/manifesto/social_networking.md
+++ b/docs/manifesto/social_networking.md
@@ -163,7 +163,7 @@ group to the next, with an immense multitude of different shilling
organizations running an immense multitude of unresponsive scripts
promoting an immense multitude of different scams, initially
most of them about money rather than politics, but what killed usenet
-was political shills.
+was political shills.
In the 3rd Reich there were no chess clubs or sewing circles only
national socialist chess clubs and sewing circles. Clownworld has
adopted the policy that there be only progressive and gay chess
@@ -946,7 +946,7 @@ Then take over Bitcoin and Nostr's niche by being a better Bitcoin and a better
because recursive snarks can do lots of things, such as smart contracts, better than they can.
This is the tactic that keet, holepunch, pear, and Particl are attempting to deploy, but
-their software is so far unusable. The niche is still effectively vacant and open
+their software is so far unusable. The niche is still effectively vacant and open
for the taking.
## failure of previous altcoins to solve Cold Start.
@@ -1009,16 +1009,16 @@ being, unlike Monero, a better Bitcoin.
### Bitmessage
The lowest hanging fruit of all, (because, unfortunately, there is
-no money or prospect for money in it) is to replace Bitmessage,
+no money or prospect for money in it) is to replace Bitmessage,
Which is currently abandonware, which has ceased working on most platforms,
has never supported humanly intelligible names, and lacks the
-moderation capability to grey list, blacklist, and whitelist names
+moderation capability to grey list, blacklist, and whitelist names
on discussion groups, but is widely used
because it does not reveal the sender or recipient's IP address.
In particular, it is widely used for crypto currency payments.
So next step, after capturing its abandoned market niche,
-is to integrate payment mechanisms, in particular to integrate
+is to integrate payment mechanisms, in particular to integrate
core lightning, which brings us a little closer to the faint smell of money.
Bitmessage is long overdue to be replaced, and Keef and Particl attempted
@@ -1386,7 +1386,7 @@ owned through private secrets derived from each party's master secret.
The financial businesses of the stock exchange are creaming off a vast amount of wealth.
-Information wants to be free, but programmers want to paid. If we liberate corporations
+Information wants to be free, but programmers want to be paid. If we liberate corporations
and shareholders from this tyranny, we can arrange for a very small part of this mountain
of wealth to stick to us. And a very small part of this mountain is still a mighty big
mountain.
diff --git a/docs/manifesto/sox_accounting.md b/docs/manifesto/sox_accounting.md
index 6f4d7e0..d0b5646 100644
--- a/docs/manifesto/sox_accounting.md
+++ b/docs/manifesto/sox_accounting.md
@@ -139,7 +139,7 @@ The state has been attacking the cohesion of the corporation just as it has been
the cohesion of the family. Modern corporate capitalism is incompatible with SoX,
because if your books are detached from reality.
lies that hostile outsiders demand that you believe,
-the corporation has lost that which makes it one person.
+the corporation has lost that which makes it one person.
When the books are a lie imposed on you by hostile outsiders you lose cohesion around profit,
making things, buying, selling, and satisfying the customer,
and instead substitute cohesion around gay sex, minority representation, and abortion rights.
@@ -189,12 +189,12 @@ The mortgagees did not demand id, because id racist. Much as demanding id is ra
you want to ask voters for id, but not when you want entry to a Democratic party gathering.
Enron's books implied that non-existent entities were responsible for
-paying the debts that they had incurred
+paying the debts that they had incurred
through buying stuff on credit, thus moving the debt off the books. In the Great Minority Mortgage Meltdown,
the banks claimed that people who, if they existed at all, could not possibly pay,
owed them enormous amounts of money.
-Sox has prevented businesses from moving real debts off the books. But it failed spectacularly
+Sox has prevented businesses from moving real debts off the books. But it failed spectacularly
at preventing businesses from moving unreal debts onto the books. In the Great Minority Mortgage
Meltdown, the books were misleading because of malice and dishonesty, but even if you are doing your
best to make SoX books track the real condition of your business, they don't. They track a paper
@@ -218,7 +218,7 @@ startups trying to go public, since the potential investors know that the
books do not accurately tell the investors how the business is doing.
What established businesses do instead is prepare one set of books for
-Sox compliance, and another illegal and forbidden
+Sox compliance, and another illegal and forbidden
set of books for management that do not comply
with Sox but which actually do reflect the movement and creation of
value, but a startup is not allowed to tell potential investors about the
diff --git a/docs/manifesto/triple_entry_accounting.md b/docs/manifesto/triple_entry_accounting.md
index dadb0d2..f3e079a 100644
--- a/docs/manifesto/triple_entry_accounting.md
+++ b/docs/manifesto/triple_entry_accounting.md
@@ -58,7 +58,7 @@ as a book keeping fiction, fictions imagined into reality by real people acting
It is not the buildings and the tools that are the corporation, but
the unity of action. This is what makes it possible to move
-corporations onto the blockchain, to substitute cryptographic
+corporations onto the blockchain, to substitute cryptographic
algorithms for the laws of men.
Double entry book keeping is social technology. It fundamentally shapes
diff --git a/docs/manifesto/white_paper_YarvinAppendix.md b/docs/manifesto/white_paper_YarvinAppendix.md
index 464a010..bfc936a 100644
--- a/docs/manifesto/white_paper_YarvinAppendix.md
+++ b/docs/manifesto/white_paper_YarvinAppendix.md
@@ -4,7 +4,7 @@ title: >-
...
-This a publication by Moldbug. Full of good ideas, but it is a digression from
+This a publication by Moldbug. Full of good ideas, but it is a digression from
my focus, and like all Moldbug, unduly verbose. Core idea. Hodlers, not miners,
should have the power, and hodlers need a human board that represents them.
diff --git a/docs/names/multisignature.md b/docs/names/multisignature.md
index 361bde1..29b2ec3 100644
--- a/docs/names/multisignature.md
+++ b/docs/names/multisignature.md
@@ -220,7 +220,7 @@ do not get all of it, they can calculate the signature.
# Threshold Signatures
FROST is an algorithm that produces a regular schnorr signature, and allows
-a quite large number of unequal shares.
+a quite large number of unequal shares.
A threshold signature has the interesting feature that it is a randomness
@@ -543,7 +543,7 @@ To test the signature, check that
$A*(h(Message)B_{base})=B_{base}*M$
-Which it should because
+Which it should because
$(a*B_{base})*(h($
Message
diff --git a/docs/names/petnames.md b/docs/names/petnames.md
index 3616cf7..df82c05 100644
--- a/docs/names/petnames.md
+++ b/docs/names/petnames.md
@@ -85,7 +85,7 @@ that is securely unique and memorable (but private, not global):
handles the keys only indirectly via petnames. For a particular person,
for a particular application, there is a
one-to-one mapping between a key and a petname.
-
+
- **Nicknames**
can be used to assist in discovery of
keys, and for help in selecting a petname. Nicknames are chosen by the
@@ -137,7 +137,7 @@ that is securely unique and memorable (but private, not global):
representation of the key, and under what circumstances would it bring
up petname2?
-## More Detail, and Interactions
+## More Detail, and Interactions
A good example of a nickname management system is Google. Type
in a name, and Google will return a list that includes all the entities
@@ -299,7 +299,7 @@ sources of controversy. One is, how do I get the keys transferred
around the system? The other is, "how easily can Darth Vader mimic a
petname?".
-## Transferring Keys and Purposeful Trust
+## Transferring Keys and Purposeful Trust
Transferring keys around the universe is easy; for example, plaster the
keys on all the web sites in the world that'll let you do so. The hard
@@ -684,7 +684,7 @@ security system whose user interface is written by cryptographers is no
more likely to succeed than a security system whose encryption
machinery is written by user interface designers.
-## Waterken Petname Toolbar
+## Waterken Petname Toolbar
This toolbar\[Waterken\] is a proposal for web browsers explicitly based
on petname
diff --git a/docs/names/zookos_triangle.md b/docs/names/zookos_triangle.md
index 86b5c65..8d4b289 100644
--- a/docs/names/zookos_triangle.md
+++ b/docs/names/zookos_triangle.md
@@ -318,7 +318,8 @@ payment using cryptography, but cannot create secure delivery of goods
and services using cryptography. The other side of the transaction needs
reputation.
-[sovereign corporations]:../social_networking.html#many-sovereign-corporations-on-the-blockchain
+[sovereign corporations]:../manifesto/social_networking.html#many-sovereign-corporations-on-the-blockchain
+{target="_blank"}
So the other side of the transaction needs to be authenticated by a secret
that *he* controls, not a secret controlled by registrars and certificate
diff --git a/docs/notes/big_cirle_notation.md b/docs/notes/big_cirle_notation.md
index 6d27753..eb48903 100644
--- a/docs/notes/big_cirle_notation.md
+++ b/docs/notes/big_cirle_notation.md
@@ -18,6 +18,6 @@ Which is also stupid for the same reason.
So what all engineers do in practice is use $\bigcirc$ to mean that the mathematical definition of $\bigcirc$ is true, *and* Knuths definition of $\large\Omega$ is also largely true, so when we say that an operation take that much time, we mean that it takes no more than that much time, *and frequently takes something like that much time*.
-So, by the engineer's definition of $\bigcirc$, if an algorithm takes $\bigcirc(n)$ time it does *not* take $\bigcirc(n^2)$ time.
+So, by the engineer's definition of $\bigcirc$, if an algorithm takes $\bigcirc(n)$ time it does *not* take $\bigcirc(n^2)$ time.
Which is why we never need to use Knuth's $\large\Omega$
diff --git a/docs/number_encoding.md b/docs/number_encoding.md
index f1e401d..9e74660 100644
--- a/docs/number_encoding.md
+++ b/docs/number_encoding.md
@@ -7,12 +7,12 @@ sidebar: true
I have spent far too much time implementing and thinking about
variable length quantities.
-And became deeply confused about
-suitable representation of such quantities in patricia trees
+And became deeply confused about
+suitable representation of such quantities in patricia trees
because I lost track of the levels of encapsulation.
-A patricia vertex represents a prefix with a bitcount.
-A patricia tree represents a prefix free index. A patricia vertex
+A patricia vertex represents a prefix with a bitcount.
+A patricia tree represents a prefix free index. A patricia vertex
encapsulates a bitcount of the prefix, and is encapsulated
by a bytecount of the vertex.
@@ -65,20 +65,20 @@ which is not itself part of the vertex, but part of the data structure within wh
the vertex is stored or represented.
We could *represent* the vertices by a self terminating bytestring or bitstring, but only
-if we wanted to put the vertices of a patricia tree inside *another* patricia tree. Which
-seems like a stupid thing to do under most circumstances.
+if we wanted to put the vertices of a patricia tree inside *another* patricia tree. Which
+seems like a stupid thing to do under most circumstances.
And what is being represented
is itself inherently not self terminating.
-The leaves of the patricia tree represent a data structure
+The leaves of the patricia tree represent a data structure
whose *index* is in the patricia tree, the index being a fixed length set of fields,
-each field itself a self terminating string of bits. Thus the leaf is not a
+each field itself a self terminating string of bits. Thus the leaf is not a
patricia vertex, but an object of a different kind. (Unless of course the
index fully represents all the information of the object)
The links inside a vertex also represent a short string of bits,
the bits that the vertex pointed to has in addition to the bits
-that vertex point already has.
+that vertex point already has.
### Merkle dag
We intend to have a vast Merkle dag, and a vast collection of immutable
@@ -116,7 +116,7 @@ $10$ being unary for a one bit wide field, and $0$ being that one bit, indicatin
two bytes. Thus is represented by the integer itself $+0\text{x}\,8000$
in big endian format.
-There are intrinsics and efficient library code to do endian conversions. We want
+There are intrinsics and efficient library code to do endian conversions. We want
big endian format for bytestring sort order to sort correctly in the tree.
bits.h declares the gcc intrinsics:\
@@ -180,7 +180,7 @@ zero bit, representing the sign, and a leading one bit, being inverted unary
for a zero width count field
Thus the representation for a signed integer $n$ in the range $-2^6\le n \lt 2^6$
-is the one byte signed integer itself $\oplus 0\text{x}\,80$
+is the one byte signed integer itself $\oplus 0\text{x}\,80$
A two byte value, representing signed integers in the range $2^6\le n \lt 2^{12}$ starts with the bits $1\,10\, 0$, 10 being unary for field one bit long,
as for unsigned integers
@@ -217,7 +217,7 @@ The patricia edges of this table live in the same table, just
their values are distinguished from actual leaf values.
Variable length bitstrings are represented as variable
-length bytestrings by appending a one bit followed by
+length bytestrings by appending a one bit followed by
zero to seven zero bits.
In the table we may compress the end values by discarding
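The one byte signed encoding in the number_encoding changes above (the signed byte xored with $0\text{x}80$ for $-2^6\le n \lt 2^6$, and two byte values offset by $0\text{x}8000$ in big endian) can be sketched in Python; the function names are illustrative, not from the codebase:

```python
# Sketch of the order preserving encodings described above.
# One byte signed encoding for -2**6 <= n < 2**6: the signed byte
# xored with 0x80, so lexicographic byte order matches numeric order.
def encode_i8(n: int) -> bytes:
    assert -(2**6) <= n < 2**6
    return bytes([(n & 0xFF) ^ 0x80])

def decode_i8(b: bytes) -> int:
    v = b[0] ^ 0x80
    return v - 0x100 if v >= 0x80 else v

# Two byte encoding: the integer plus 0x8000 in big endian,
# whose leading bits 10 mark a two byte value (the exact value
# range is an assumption here, not stated in the lines above).
def encode_u16(n: int) -> bytes:
    return (n + 0x8000).to_bytes(2, "big")
```

Because byte order matches numeric order, these encodings sort correctly as patricia tree keys: `encode_i8(-64) < encode_i8(0) < encode_i8(63)`.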
diff --git a/docs/scale_clients_trust.md b/docs/scale_clients_trust.md
index db68c51..a5232e4 100644
--- a/docs/scale_clients_trust.md
+++ b/docs/scale_clients_trust.md
@@ -199,7 +199,7 @@ unintentionally scams. All the zk-snark coins are doing the step from a set
$N$ of valid coins, valid unspent transaction outputs, to set $N+1$, in the
old fashioned Satoshi way, and sprinkling a little bit of zk-snark magic
privacy pixie dust on top (because the task of producing a genuine zeek
-proof of coin state for step $N$ to step $N+1$ is just too big for them).
+proof of coin state for step $N$ to step $N+1$ is just too big for them).
Which is, intentionally or unintentionally, a scam.
Not yet an effective solution for scaling the blockchain, for to scale the
diff --git a/docs/setup/contributor_code_of_conduct.md b/docs/setup/contributor_code_of_conduct.md
index a053097..95abca2 100644
--- a/docs/setup/contributor_code_of_conduct.md
+++ b/docs/setup/contributor_code_of_conduct.md
@@ -121,10 +121,10 @@ Thus all changes should be made, explained, and approved by persons
identified cryptographically, rather than through the domain name system.
## setting up automatic git signing of commits
-
+
Suppose you choose the nym "`gandalf`".
(First make sure that no one else is using your nym by glancing at the
- `.gitsigners` file, which should be in sorted order, and if it is not,
+ `.gitsigners` file, which should be in sorted order, and if it is not,
run the linux sort command on it)
then at the root of your repository
@@ -140,7 +140,7 @@ git config include.path ../.gitconfig #sets various defaults, ssh signing among
Then add\
`gandalf ssh-ed25519 «your-public-key-as-in-gandalf.pub»`\
to the .gitsigners file to publish your public key to anyone
- who wants to make sure that commits are from the nym that they
+ who wants to make sure that commits are from the nym that they
claim to be -- at least claim to be when their commits are
displayed by the git aliases of `.gitconfig`
@@ -256,7 +256,7 @@ if you add the recommended repository configuration defaults to your local repos
git config --local include.path ../.gitconfig
```
-This will implement signed commits and will insist that you have `gpg` on your path,
+This will implement signed commits and will insist that you have `gpg` on your path,
and that you have configured a signing key in your local config.
This may be inconvenient if you do not have `gpg` installed and set up.
@@ -288,14 +288,14 @@ No one ever used it in the intended manner.
The web of trust model was written around email, to protect against physhing and
spearphysh attacks. And who uses email for discussions and coordination these days?
-That was useful in back in the days when when everything important was happening
+That was useful back in the days when everything important was happening
on mailing lists like the cypherpunks mailing list. But even back in the day
the web of trust model had too many moving parts to be very useful. In
practice people only used Zooko identity, and Web of Trust was a cloud
of confusing complexity and user hostile interface on top of Zooko identity.
What gpg identity is primarily used for in practice is to make sure you
-are getting the latest release from the same repository managed by the same person as
-you got the previous release - which is Zooko identity, not Web of Trust
+are getting the latest release from the same repository managed by the same person as
+you got the previous release - which is Zooko identity, not Web of Trust
identity, and has no real relationship to email. Zooko identity is about
constancy of identity, Web of Trust is about rightful use of email
addresses. Web of trust was a true names mechanism, and today no one
@@ -303,5 +303,5 @@ speaks the truth under their true name.
Web of trust was designed for a high trust society - but in a high trust
society you don't need it, and in a low trust society, the name servers were
-too vulnerable to enemy action, and died, leaving the Web of Trust user
+too vulnerable to enemy action, and died, leaving the Web of Trust user
interface in every installed copy of gpg a useless obstacle.
diff --git a/docs/setup/core_lightning_in_debian.md b/docs/setup/core_lightning_in_debian.md
index b468d90..a204082 100644
--- a/docs/setup/core_lightning_in_debian.md
+++ b/docs/setup/core_lightning_in_debian.md
@@ -85,7 +85,7 @@ docker, and because of python install hell, I will want
to use plugins that live inside docker, with which I will
interact using a lightning-cli that lives outside docker.
-But trouble is docker comes with a pile of scripts and plugins, and
+But trouble is docker comes with a pile of scripts and plugins, and
the coupling between these and the docker image is going to need
a PhD in dockerology.
@@ -158,7 +158,7 @@ You now have your own python environment, which you activate with the command
source my_env/bin/activate
```
-Within this environment, you no longer have to use python3 and pip3, nor should you use them.
+Within this environment, you no longer have to use python3 and pip3, nor should you use them.
You just use python and pip, which means that all those tutorials
on python projects out there on the web now work
diff --git a/docs/setup/set_up_build_environments.md b/docs/setup/set_up_build_environments.md
index e4b959d..3f7bff0 100644
--- a/docs/setup/set_up_build_environments.md
+++ b/docs/setup/set_up_build_environments.md
@@ -120,7 +120,7 @@ To install guest additions on Debian:
```bash
sudo -i
-apt-get -qy update && apt-get -qy install build-essential module-assistant
+apt-get -qy update && apt-get -qy install build-essential module-assistant
apt-get -qy install git dnsutils curl sudo dialog rsync zstd avahi-daemon nfs-common
apt-get -qy full-upgrade
m-a -qi prepare
@@ -306,7 +306,7 @@ The same as for Debian, except that the desktop addition lacks openssh-server, i
sudo apt install openssh-server.
```
-Then ssh in
+Then ssh in
### Guest Additions
@@ -392,7 +392,7 @@ If you have a separate boot partition in an `efi `system then the `grub.cfg` in
should look like
```terminal_image
-search.fs_uuid «8943ba15-8939-4bca-ae3d-92534cc937c3» boot hd0,gpt«4»
+search.fs_uuid «8943ba15-8939-4bca-ae3d-92534cc937c3» boot hd0,gpt«4»
set prefix=($boot)'/grub'
configfile $prefix/grub.cfg
```
@@ -448,7 +448,7 @@ different (larger size)
Confusingly, the documentation and the UI does not distinguish between
dynamic and fixed sized virtual disks - so the UI to change a fixed sized
-disks size, or to copy it to a disk of different size is there, but has
+disk's size, or to copy it to a disk of a different size is there, but has
absolutely no effect.
Having changed the virtual disk size in the host system, you then want to
@@ -466,7 +466,7 @@ similarly for any other linux file system partitions)
You run `zerofree`, like gparted, in another guest OS that is mounted
on `/dev/sda`, while the disk whose partitions you are zeroing is attached,
-but not mounted, as `/dev/sdb1`.
+but not mounted, as `/dev/sdb1`.
You can then shrink it in the host OS with
@@ -517,7 +517,7 @@ sdb disk 20G
├─sdb1 part 33M E470-C4BA vfat
├─sdb2 part 3G 764b1b37-c66f-4552-b2b6-0d48196198d7 swap
└─sdb3 part 7G efd3621c-63a4-4728-b7dd-747527f107c0 ext4
-sr0 rom 1024M
+sr0 rom 1024M
root@example.com:~# mkdir -p /mnt/sdb2
root@example.com:~# mount /dev/sdb2 /mnt/sdb2
root@example.com:~# ls -hal /mnt/sdb2
@@ -755,7 +755,7 @@ To force your local client to employ passwords:
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no -o StrictHostKeyChecking=no root@«server»
```
-And then the first thing you do on the freshly initialized server is
+And then the first thing you do on the freshly initialized server is
```bash
apt update -qy
@@ -775,7 +775,7 @@ Putty is the windows ssh client, but you can use the Linux ssh client in
windows in the git bash shell, which is way better than putty, and the
Linux remote file copy utility `scp` is way better than the putty utility
`PSFTP`, and the Linux remote file copy utility `rsync` is way better than
-either of them, though unfortunately `rsync` does not work in the windows bash shell.
+either of them, though unfortunately `rsync` does not work in the windows bash shell.
The filezilla client works natively on both windows and linux, and it is a very good gui file copy utility that, like scp and rsync, works over ssh (once you set up the necessary public and private keys). Unfortunately on windows, it insists on putty format private keys, while the git bash shell for windows wants linux format keys.
@@ -990,9 +990,9 @@ differently on a multi user command line system to what it does
in desktop system, which is configured to provide various things
convenient and desirable in a system like a laptop,
but undesirable and inconvenient in a server.
-You should create it as a server,
+You should create it as a server,
and install the desktop system later through the command line,
-over ssh, not through the install system's gui, because the
+over ssh, not through the install system's gui, because the
gui install is going to do mystery stuff behind your back.
Set up the desktop after you have remote access over ssh working
@@ -1020,14 +1020,14 @@ ufw allow 3389
ufw reload
```
-This does not result in, or even allow, booting into
+This does not result in, or even allow, booting into
mate desktop, because it does not supply the lightdm, X-windows
and all that. It enables xrdp to run the mate desktop remotely.
xrdp has its graphical login manager in place of lightdm, and does
not have anything to display x-windows locally.
-If you want the option of locally booting int mate desktop you
+If you want the option of locally booting into the mate desktop you
also want lightDM and local X11, which is provided by:
```bash
@@ -1600,7 +1600,7 @@ you will need to renew with the `webroot` challenge rather than the `manual`
once DNS points to this server.
This, ` --preferred-challenges dns`, also allows you to set up wildcard
-certificates, but it is a pain, and does not support automatic renewal.
+certificates, but it is a pain, and does not support automatic renewal.
Automatic renewal of wildcards requires the cooperation of
certbot and your dns server, and is different for every organization, so only
the big boys can play.
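For reference, a manual wildcard request looks something like this sketch (`«example.com»` is a placeholder, and you will have to paste the TXT record challenge into your DNS zone by hand):

```terminal_image
certbot certonly --manual --preferred-challenges dns -d «example.com» -d «*.example.com»
```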
@@ -3471,7 +3471,7 @@ they seem to write it for the Gnome based desktops, Cinnamon and Mate – more
for Mate because it is older and has changed less.
An install program should use `desktop-file-install` which is presumably
-customized for each desktop,
+customized for each desktop.
The autostart directory `$XDG_CONFIG_HOME/autostart` is a part of the
freedesktop.org/XDG [Desktop Application Autostart Specification].
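A minimal autostart entry dropped into that directory looks something like this sketch (`myapp` and its `Exec` path are hypothetical):

```terminal_image
# ~/.config/autostart/myapp.desktop
[Desktop Entry]
Type=Application
Name=myapp
Exec=/usr/local/bin/myapp
X-GNOME-Autostart-enabled=true
```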
@@ -3631,7 +3631,7 @@ CookieAuthFileGroupReadable 1
DirPort 0
ORPort 0
-```
+```
ControlPort should be closed, so that only applications running on your computer can get to it.
diff --git a/docs/setup/wireguard.md b/docs/setup/wireguard.md
index a8a44a0..6bbc5f5 100644
--- a/docs/setup/wireguard.md
+++ b/docs/setup/wireguard.md
@@ -22,7 +22,7 @@ is impossible to audit with an architecture that seems designed for hiding
obfuscated vulnerabilities, and NSA has contributed much to its codebase
through innumerable proxies, who are evasive when I talk to them.
-Wireguard uses cryptographic libraries developed by our allies, rather than our enemies.
+Wireguard uses cryptographic libraries developed by our allies, rather than our enemies.
Wireguard is lightweight and fast, blowing OpenVPN out of the water.
@@ -36,7 +36,7 @@ Linux, BSD, macOS, Windows, Android, iOS, and OpenWRT.
User authentication is done by exchanging public keys, similar to SSH keys.
-Assigns static tunnel IP addresses to VPN clients.
+Assigns static tunnel IP addresses to VPN clients.
Mobile devices can switch between Wi-Fi and mobile network seamlessly
without dropping any connectivity.
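The static tunnel addresses and public keys live in the WireGuard config file; a minimal client config sketch (all keys, addresses, and the endpoint here are placeholders) looks like:

```terminal_image
[Interface]
PrivateKey = «client private key»
Address = 10.0.0.2/24

[Peer]
PublicKey = «server public key»
Endpoint = «12.34.56.78»:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```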
@@ -108,7 +108,7 @@ privacy policies, but apply appropriate cynicism. Political alignments and
vulnerability to power matter more than professed good intentions.
We are going to change this when we set up our own nameserver for the
-vpn clients, but if you don't have control, things are likely to get strange.
+vpn clients, but if you don't have control, things are likely to get strange.
You cannot necessarily change your nameservers by editing
`/etc/resolv.conf`, since no end of processes are apt to rewrite that file
@@ -548,10 +548,10 @@ options {
//==========================
// If BIND logs error messages about the
// root key being expired,
- // you will need to update your keys.
+ // you will need to update your keys.
// See https://www.isc.org/bind-keys
//==========================
-
+
dnssec-validation auto;
listen-on-v6 { any; };
@@ -759,7 +759,7 @@ Capturing on 'eth0'
If the WireGuard client cannot connect to UDP port 51820 of the server, then you will only see handshake initiation packets. There's no handshake response.
```terminal_image
-Capturing on 'eth0'
+Capturing on 'eth0'
1 105.092578905 11.22.33.44 → 12.34.56.78 WireGuard 190 Handshake Initiation, sender=0x3F1A04AB
2 149.670118573 11.22.33.44 → 12.34.56.78 WireGuard 190 Handshake Initiation, sender=0x7D584974
3 152.575188680 11.22.33.44 → 12.34.56.78 WireGuard 190 Handshake Initiation, sender=0x8D2407B9
@@ -870,7 +870,7 @@ This is apt to be easier, because it is likely to be hard to transfer informatio
QRencode is very useful for transferring data to android systems, which tend to be locked down against ordinary users transferring computer data.
-## Configure VPN Client on Windows
+## Configure VPN Client on Windows
Download the [WireGuard installer for Windows](https://www.wireguard.com/install/).
diff --git a/docs/writing_and_editing_documentation.md b/docs/writing_and_editing_documentation.md
index 8a3c95e..b1ce7b7 100644
--- a/docs/writing_and_editing_documentation.md
+++ b/docs/writing_and_editing_documentation.md
@@ -20,7 +20,7 @@ To convert Pandoc markdown to its final html form, invoke `Pandoc` by the bash
shell file `./mkdoc.sh`, which generates html.
Directories with markdown files contain a shell script mkdocs.sh which compiles
-them into html, and also executes the mkdocs.sh of each of its
+them into html, and also executes the mkdocs.sh of each of its
subdirectories, if they have a mkdocs.sh.
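Such a script might look something like this sketch (assuming pandoc is on the path; the actual mkdocs.sh in the repository may differ):

```bash
#!/bin/bash
# Sketch of a recursive mkdocs.sh: compile local markdown, then recurse.
set -e
for f in *.md; do
    [ -e "$f" ] || continue            # no markdown files here
    pandoc "$f" -o "${f%.md}.html"     # compile this file to html
done
for d in */; do
    if [ -x "$d/mkdocs.sh" ]; then     # recurse into subdirectories
        (cd "$d" && ./mkdocs.sh)
    fi
done
```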
The directories also contain a file called navbar, which
@@ -290,7 +290,7 @@ This is a useless feature present because people are moving legacy documents to
but an inline footnote appears at the bottom of the document, which is confusing in html
unless the document is short.
-Footnotes were designed to be used with paginated^[footnotes are an ancient hangover from paginated documents in order to do hypertext on paper] documents.
+Footnotes were designed to be used with paginated^[footnotes are an ancient hangover from paginated documents in order to do hypertext on paper] documents.
Html is designed for hypertext.
[^long]: This is an out of line footnote, that can be inserted anywhere, which is confusing in an html document, and hard to edit, because they always wind up sequentially numbered at the bottom of the document.