forked from cheng/wallet

Updated to current pandoc format

Which affected all documentation files.
This commit is contained in:
reaction.la 2022-05-07 12:49:33 +10:00
parent 84847ffdcd
commit 362b7e653c
No known key found for this signature in database
GPG Key ID: 99914792148C8388
49 changed files with 417 additions and 154 deletions

View File

@ -1,7 +1,7 @@
---
# notmine
title: How could regulators successfully introduce Bitcoin censorship and other dystopias
---
...
[Original document](https://juraj.bednar.io/en/blog-en/2020/11/12/how-could-regulators-successfully-introduce-bitcoin-censorship-and-other-dystopias/) by [Juraj Bednar](https://juraj.bednar.io/en/juraj-bednar-2/)
Publishing this is a violation of copyright. It needs to be summarized and paraphrased.

View File

@ -1,6 +1,6 @@
---
title: Blockchain Scaling
---
...
A blockchain is an immutable append only ledger, which ensures that
everyone sees the same unchanging account of the past. A principal
purpose of blockchain technology is to track ownership of assets on a

View File

@ -1,6 +1,6 @@
---
title: Block chain structure on disk.
---
...
The question is: One enormous SQLite file, or actually store the chain as a collection of files?

View File

@ -1,7 +1,7 @@
---
# katex
title: Blockdag Consensus
---
...
# The problem

View File

@ -3,7 +3,7 @@
# notmine
title: >-
Practical Byzantine Fault Tolerance
---
...
::: centre
Appears in the Proceedings of the Third Symposium on Operating Systems Design and Implementation, New Orleans, USA, February 1999

View File

@ -1,6 +1,11 @@
---
title: Client Server Data Representation
---
...
# related
[Replacing TCP, SSL, DNS, CAs, and TLS](replacing_TCP.html){target="_blank"}
# clients and hosts, masters and slaves
A slave does the same things for a master as a host does for a client.
@ -1059,14 +1064,12 @@ How big a hash code do we need to identify the shared secret? Suppose we generat
# Message UDP protocol for messages that fit in a single packet
When I look at [the existing TCP state machine](https://www.ietf.org/rfc/rfc0793.txt), it is hideously
complicated. Why am I thinking of reinventing that? [Syn cookies](http://cr.yp.to/syncookies.html) turn out
to be less tricky than I thought: the server just sends a secret short hash of
the client data and the server response, which the client cannot predict, and
the client response to the server response has to be consistent with that
secret short hash.
Well, maybe it needs to be that complicated, but I feel it does not. If I find that it really does need to be that complicated, well, then I should not consider re-inventing the wheel.

View File

@ -1,7 +1,7 @@
---
title:
Contracts on the blockchain
---
...
# Terminology
A rhocoin is an unspent transaction output, and it is the public key

View File

@ -1,6 +1,6 @@
---
title: Contributor Code of Conduct
---
...
# Peace on Earth to all men of good will

View File

@ -1,6 +1,6 @@
---
title: Crypto currency
---
...
The objective is to implement the blockchain in a way that scales to one hundred thousand transactions per second, so that it can replace the dollar, while being less centralized than bitcoin currently is, though not as decentralized as purists would like, and preserving privacy better than bitcoin now does, though not as well as Monero does. It is a bitcoin with minor fixes to privacy and centralization, major fixes to client host trust, and major fixes to scaling.

View File

@ -5,7 +5,7 @@ description: >-
robots: 'index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1'
title: >-
Eric Hughes: A Cypherpunks Manifesto
---
...
**Privacy is necessary for an open society in the electronic age. Privacy is not secrecy. A private matter is something one doesn't want the whole world to know, but a secret matter is something one doesn't want anybody to know. Privacy is the power to selectively reveal oneself to the world.**
![The following essay was written by Eric Hughes and published on March 9, 1993. A Cypherpunks Manifesto was originally published on activism.net](./eric.jpg "Eric Hughes: A Cypherpunks Manifesto"){width="100%"}

View File

@ -1,6 +1,6 @@
---
title: Install Dovecot on Debian 10
---
...
# Purpose
We want postfix working with Dovecot so that we can send and access our emails from an email client, such as Thunderbird, on another computer.

View File

@ -1,7 +1,6 @@
---
title: Download and build on windows
---
...
You will need an up to date edition of Visual Studio, Git-Bash for
windows, and Pandoc

View File

@ -1,6 +1,6 @@
---
title: Duck Typing
---
...
Assume a naming system based on Zooko's triangle. At what point should
human readable names with mutable and context dependent meanings be nailed
down as globally unique identifiers?

View File

@ -2,7 +2,7 @@
lang: en
title: Estimating frequencies from small samples
# katex
---
...
# The problem to be solved
Because protocols need to be changed, improved, and fixed from time to

View File

@ -1,6 +1,6 @@
---
title: Generating numbers unpredictable to an attacker
---
...
```default
From: Kent Borg <kentborg@borg.org> 2021-03-30
To: Cryptography Mailing List

View File

@ -1,7 +1,7 @@
---
title:
Identity
---
...
# Syntax and semantics of identity
The problem is, we need a general syntax and semantics to express

View File

@ -1,6 +1,6 @@
---
title: How to Save the World
---
...
I have almost completed an enormous design document for an uncensorable social network intended to contain a non evil scalable proof of stake currency, and I have a wallet that can generate secrets, but the wallet is missing no end of critical features; it is pre-pre alpha. When it is early pre alpha, I am going to publish it on Gitea, and call for assistance.
Here is a link to one version of the [white paper](social_networking.html), focusing primarily on social media. (But though information wants to be free, programmers need to get paid.)

View File

@ -1,6 +1,6 @@
---
title: Libraries
---
...
# Wireguard, Tailwind, and identity

View File

@ -1,59 +1,6 @@
---
title: Building the project and its libraries in Visual Studio
---
# General instructions
At present the project requires the environment to be set up by hand, with a
lot of needed libraries separately configured and separately built.
We need to eventually make it one git project with submodules,
which can be built with one autotools command, and with one visual
studio command with subprojects, so that
```bash
git clone --recursive git://example.com/foo/rhocoin.git
cd rhocoin
devenv rhocoin.sln /build
```
will build all the required libraries.
And similarly we want autotools to build all the submodules
```bash
git clone --recursive git://example.com/foo/rhocoin.git
cd rhocoin
./configure; make && make install
```
so that the top level configure and make does the `./configure; make && make install` in each submodule,
which might also create a deb file that could be used in
```bash
apt-get -qy install ./rhocoin.deb
```
But we are a long way from being there yet. At present the build environment
is painfully hand made, and has to be painfully remade every time someone
updates a library on which it relies.
To build in Visual Studio under Microsoft windows
- Set the environment variable `SODIUM` to point to the Libsodium directory containing the directory `src/libsodium/include` and build the static linking, not the DLL, library following the libsodium instructions.
- Set the environment variable `WXWIN` to point to the wxWidgets directory containing the directory `include/wx` and build the static linking library from the ide using the provided project files
If you are building this project using the Visual Studio ide, you should use the ide to build the libraries, and if you are building this project using the makefiles, you should use the provided make files to build the libraries. In theory this should not matter, but all too often it does matter.
When building libsodium and wxWidgets in Visual Studio, one has to retarget the
solution to use the current Microsoft libraries, retarget to use x64, and
change the code generation setting in every project from
Multithreaded DLL to Multithreaded.
Sqlite is not incorporated as an already built library, but as source code,
as the sqlite3 amalgamation file, one very big C file.
...
# Instructions for wxWidgets
## Setting wxWidgets project in Visual Studio

View File

@ -1,7 +1,7 @@
---
title:
C++ Automatic Memory Management
---
...
# Memory Safety
Modern, mostly memory safe C++, is enforced by:\

View File

@ -1,6 +1,6 @@
---
title: C++ Multithreading
---
...
Computers have to handle many different things at once, for example
screen, keyboard, drives, database, internet.

View File

@ -1,6 +1,6 @@
---
title: Git Bash undocumented command line
---
...
git-bash is a `mintty.exe` wrapper and a bash wrapper; it winds up invoking
other processes that do the actual work. While git-bash.exe is undocumented, `mintty.exe` and [`bash.exe`](https://www.gnu.org/software/bash/manual/bash.html) [are documented](http://www.gnu.org/gethelp/).

View File

@ -1,6 +1,6 @@
---
title: Review of Cryptographic libraries
---
...
# Noise Protocol Framework

View File

@ -1,6 +1,6 @@
---
title: Scripting
---
...
Initially we intend to implement human to human secret messaging, with
money that can be transferred in the message, and the capability to make

View File

@ -1,7 +1,7 @@
---
title:
Lightning Layer
---
...
# This discussion of the lightning layer may well be obsoleted
by the elegant cryptography of [Scriptless Scripts] using adaptive Schnorr

View File

@ -2,7 +2,7 @@
title:
Merkle-patricia Dac
# katex
---
...
# Definition
## Merkle-patricia Trees
@ -113,7 +113,7 @@ integer, and it contains four records, with oids 2, 4, 5, and 6. The
big endian representation of those primary keys is 0b010, 0b100,
0b101, and 0b110.
The resulting patricia tree with infix keys is:
<svg
xmlns="http://www.w3.org/2000/svg"
@ -405,19 +405,22 @@ particular part of the blockchain is valid. This has three
advantages over the chain structure.
1. A huge problem with proof of stake is "nothing at stake".
There is nothing stopping the peers from pulling a whole
new history out of their pocket.\
With this data structure, there is something stopping them. They
cannot pull a brand new history out of their pocket, because the
clients have a collection of very old roots of very large balanced
binary merkle trees of blocks. They keep the hash paths to all their
old transactions around, and if the peers invent a brand new history,
the clients find that the context of all their old transactions has
changed.
1. It protects clients against malicious peers, since any claim the peer
makes about the total state of the blockchain can be proven with
$\bigcirc(\log_2n)$ hashes.
1. If a block gets lost or corrupted, the peer can identify the one
specific block that is a problem. At present peers have to download,
or at least re-index, the entire blockchain far too often, and a full
re-index takes days or weeks.
This is not a Merkle-patricia tree. This is a generalization of a Merkle
patricia dag to support immutability.
@ -429,15 +432,17 @@ one of which corresponds to appending a $0$ bit to the bitstring that
identifies the vertex and the path to the vertex, and one of which
corresponds to adding a $1$ bit to the bitstring.
In an immutable append only Merkle patricia dag, vertices identified by bit
strings ending in a $0$ bit have a third hash link, that links to a vertex whose
bit string is truncated back by zeroing the prior $1$ bit and removing any
$0$ bits following it. Thus, whereas in a blockchain (Merkle chain) you need $n$
hashes to reach and prove data $n$ blocks back, in an immutable append
only Merkle patricia dag, you only need $\bigcirc(\log_2n)$ hashes to reach a
vertex of the blockdag $n$ blocks back.
The vertex $0010$ has an extra link back to the vertex $000$, the
vertices $0100$ and $010$ have extra links back to the vertex $00$, the
vertices $1000$, $100$, and $10$ have extra links back to the vertex $0$,
and so on and so forth.
This enables clients to reach any previous vertex through a chain of

View File

@ -2,7 +2,7 @@
title:
Multisignature
# katex
---
...
To do a Schnorr multisignature, you just list all the signatures that
went into it, and the test key is just adding all the public keys

View File

@ -1,6 +1,6 @@
---
title: Name System
---
...
We intend to establish a system of globally unique wallet names, to resolve
the security hole that is the domain name system, though not all wallets will
have globally unique names, and many wallets will have many names.

View File

@ -1,8 +1,7 @@
---
# katex
title: Number encoding
---
...
# The problem to be solved
As computers and networks grow, any fixed length fields

View File

@ -1,6 +1,6 @@
---
title: Parsers
---
...
This rambles a lot. Thoughts in progress: Summarizing my thoughts here at the top.
Linux scripts started off using lexing for parsing, resulting in complex and

View File

@ -1,6 +1,6 @@
---
title: Paxos
---
...
Paxos addresses the Arrow theorem, and the difficulty of having a reliable
broadcast channel.

View File

@ -1,14 +1,29 @@
---
lang: en
title: Peering through NAT
---
...
A library to peer through NAT is a library to replace TCP, the domain
name system, SSL, and email. This is covered at greater length in
[Replacing TCP](replacing_TCP.html).
# Implementation issues
There is a great [pile of RFCs](./replacing_TCP.html) on issues that arise with using udp and icmp
to communicate.
## timeout
The NAT mapping timeout is officially 20 seconds, but I have no idea
what this means in practice. I suspect each NAT discards port mappings
according to its own idiosyncratic rules, but 20 seconds may be a widely respected minimum.
The official maximum time that should be assumed is two minutes, but
this is far from widely implemented, so keep alives often run faster.
Minimum socially acceptable keep alive time is 15 seconds. To avoid
synch loops, random jitter in keep alives is needed. This is discussed at
length in [RFC5405](https://datatracker.ietf.org/doc/html/rfc5405).
An experiment on [hole punching] showed that most NATs had a way
longer timeout, and concluded that the way to go was to just repunch as
needed. They never bothered with keep alive. They also found that a lot of
@ -66,7 +81,16 @@ But if our messages are reasonably short and not terribly frequent, as client me
STUN and ISO/IEC 29341 are incomplete, and most libraries that supply implementations are far too complete: you just want a banana, and you get the entire jungle.
Ideally we would like a fake or alternative TCP session setup, using raw
sockets, and then you get a regular standard TCP connection on a random
port, assuming that the target machine has that service running, and the
default path for exporting that service results in a window with a list of
accessible services, and how busy they are. Real polish would be hooking
the domain name resolution so that looking up names in the peer top
level domain creates a hole, using fake TCP packets sent through a raw
socket, then returns the IP of that hole. One might have the hole go through a
wireguard like network interface, so that you can catch them coming and
going.
Note that the internet does not in fact use the OSI model though everyone talks as if it did. Internet layers correspond only vaguely to OSI layers, being instead:
@ -76,7 +100,9 @@ Note that the internet does not in fact use the OSI model though everyone talks
4. Transport
5. Application
And I have no idea how one would write or install one's own network or
transport layer, but something is installable, because I see no end of
software that installs something, as every vpn does, wireguard being the simplest.
------------------------------------------------------------------------

View File

@ -1,7 +1,7 @@
---
title:
Petname Language
---
...
Many different cryptographic identifiers get a petname, but the primary one of thought and concern is petnames for Zooko identifiers.
A Zooko identifier arrives as display name, nickname, public key, and signature binding the display name and nickname to the public key.

View File

@ -1,7 +1,7 @@
---
title:
Proof of Stake
---
...
::: {style="background-color : #ffdddd; font-size:120%"}
![run!](tealdeer.gif)[TL;DR Map a blockdag algorithm equivalent to the
Generalized MultiPaxos Byzantine

View File

@ -1,7 +1,7 @@
---
title: Recognizing categories, and recognizing particulars as forming a category
# katex
---
...
This is, of course, a deep unsolved problem in philosophy.
However, it seems to be soluble as a computer algorithm. Programs that do

View File

@ -1,9 +1,117 @@
---
title: Replacing TCP, SSL, DNS, CAs, and TLS
---
...
# related
[Client Server Data Representation](client_server.html){target="_blank"}
# Existing work
[µTP]:https://github.com/bittorrent/libutp
"libutp - The uTorrent Transport Protocol library"
{target="_blank"}
[µTP], Micro Transport Protocol has already been written, and it is just a
matter of copying it and embedding it where possible, and forking it if
unavoidable. DDOS resistance looks like it is going to need forking.
Implementing consensus over [µTP] is going to need [QUIC] style streams,
that can slow down or fail without the whole connection slowing down or
failing, though it might be easier to implement consensus that just calls
µTP for some tasks.
[BEP0055]:https://www.bittorrent.org/beps/bep_0055.html
"BEP0055"
{target="_blank"}
[`ut_holepunch` extension message]:http://bittorrent.org/beps/bep_0010.html
"BEP0010"
{target="_blank"}
[libtorrent source code]:https://github.com/arvidn/libtorrent/blob/c1ade2b75f8f7771509a19d427954c8c851c4931/src/bt_peer_connection.cpp#L1421
"bt_peer_connection.cpp"
{target="_blank"}
µTP does not itself implement hole punching, but interoperates smoothly
with libtorrent's [BEP0055]'s [`ut_holepunch` extension message], which is
only documented in [libtorrent source code].
There are several projects afoot to rewrite µTP in rust, all of them
stalled in a grossly broken and incomplete state last time I looked.
Rewriting µTP in rust seems pointless: just call the existing
implementation from a single tokio thread that gives effect to a hundred
thousand concurrent processes.
[QUIC has grander design objectives]:https://docs.google.com/document/d/1RNHkx_VvKWyWg6Lr8SZ-saqsQx7rFV-ev2jRFUoVD34/edit
{target="_blank"}
[QUIC has grander design objectives], and is a well thought out, well
designed, and well tested implementation of no end of very good and
much needed ideas and technologies, but relies heavily on enemy
controlled cryptography.
Albeit there are some things I want to do that it really cannot do:
consensus between a small number of peers, by invitation, each peer
directly connected to each of the others, the small set of peers being part
of the consensus known to all peers, and all peers always online and
responding appropriately, or else they get kicked out (Practical Byzantine
Fault *In*tolerant consensus). Though it might be efficient to use a
different algorithm to construct consensus, and then use µTP to download
the bulk data.
# Existing documentation
There is a great pile of RFCs on issues that arise with using udp and icmp
to communicate, which contain much useful information.
[RFC5405](https://datatracker.ietf.org/doc/html/rfc5405#section-3), [RFC6773](https://datatracker.ietf.org/doc/html/rfc6773), [datagram congestion control](https://datatracker.ietf.org/doc/html/rfc5596), [RFC5595](https://datatracker.ietf.org/doc/html/rfc5595), [UDP Usage Guideline](https://datatracker.ietf.org/doc/html/rfc8085)
There is a formalized congestion control system, `ECN`, explicit congestion
notification. Most servers ignore ECN. On a small proportion of routes, about
1%, ECN tagged packets are dropped.
Raw sockets provide greater control than UDP sockets, and allow you to
do ICMP like things through ICMP.
I also have a discussion on NAT hole punching, [peering through nat](peering_through_nat.html), that
summarizes various people's experience.
To get an initial estimate of the path MTU, connect a datagram socket to
the destination address using connect(2) and retrieve the MTU by calling
getsockopt(2) with the IP_MTU option. But this can only give you an
upper bound. To find the actual MTU, you have to set the don't-fragment flag
(which is these days generally set by default on UDP) and empirically
track the largest packet that makes it on this connection, which is what TCP does.
## first baby steps
To try and puzzle this out, I need to build a client server that can listen on
an arbitrary port, and tell me about the messages it receives, and can send
messages to an arbitrary hostname:port or network address:port, and
which, when it receives a packet that is formatted for it, will display the
information in that packet, and obey the command in that packet, which
will typically be a command to send a reply that depicts what is in the
packet it received, which probably got transformed by passing through
multiple nats, and/or a command to display what is in the packet, which is
typically a depiction of how the packet to which this packet is a reply got
transformed.
This test program sounds an awful lot like ICMP, which is best accessed
through raw sockets. Might be a good idea to give it the capability to send
ICMP, UDP, and fake TCP.
Raw sockets provide the lowest level access to the network available from
userspace. An immense pile of obscure and complicated stuff is in kernel.
# What the API should look like
It should be a message api, not a connection api.
It should be a consensus API for consensus among a small number of
peers, rather than message API, message response being the special case
of consensus between two peers, and broad consensus being constructed\
out of a large number of small invitation based consensi.
A peer explicitly joins the small group when its request is acked by a
majority, and rejected by no one.
On the other hand this involves re-inventing networking from scratch, as
compared to simply copying http/2, or some other reliable UDP system.
@ -210,6 +318,129 @@ security market, up to no good with enormous amounts of money.
A cryptocurrency with a name system could eat their lunch, greatly enriching
its founders in the process.
# Networking itself is broken
But that is too hard a problem to fix.
I had to sweat hard setting up Wireguard, because it pretends to be just
another `network adaptor` so that it can sweep away a pile of issues as out
of scope, and reading up posts and comments referencing these issues, I
suspect that almost no one understands these issues, or at least no one who
understands these issues is posting about them. They have a magic
incomprehensible incantation which works for them in their configuration,
and do not understand why it does not work for someone else in a subtly
different configuration.
## Internet protocol: too many layers of abstraction
I have to talk internet protocol to reach other systems over the internet, but
internet protocol is a messy pile of ad hoc bits of software built on top of
ad hoc bits of software, and the reason it is hard to understand the nuts and
bolts when you actually try to do anything useful is that you do not
understand, and indeed almost no one understands, what is actually going
on at the level of network adaptors and internet switches. When you send a
udp packet, you are already at a high level of abstraction, and the
complexity that these abstractions are intended to hide leaks.
And because you do not understand the intentionally hidden complexity
that is leaking, it bites you.
### Adaptors and switches
A private network consists of a bunch of `network adaptors` all connected to
one `ethernet switch` and its configuration consists of configuring
the software on each particular computer with each particular `network adaptor`
to be consistent with the configuration of each of the others connected to
the same `ethernet switch`, unless you have a `DHCP server` attached to the
network, in which case each of the machines gets a random, and all too
often changing, configuration from that `DHCP server`, but at least it is
guaranteed to be consistent with the configuration of each of the other
`network adaptors` attached to that one `ethernet switch`. Why do DHCP
configurations not live forever, why do they not acknowledge the machine's
human readable name, why does the ethernet switch not have a human
readable name, and why does the DHCP server have a network address
related to that of the ethernet switch, but not a human readable name
related to that of the ethernet switch?
What happens when you have several different network adaptors in one computer?
Obviously an IP address range has to be associated with each network
adaptor, so that the computer can dispatch packets to the correct adaptor.
And when the network adaptor receives a packet, the computer has to
figure out what to do with it. And what it does with it is the result of a pile
of undocumented software executing a pile of undocumented scripts.
If you manually configure each particular machine connected to an
ethernet switch, the configuration consists of arcane magic formulae
interpreted by undocumented software that differs between one system and the next.
As rapidly becomes apparent when you have to deal with more than one
adaptor, connected to more than one switch.
Each physical or virtual network adaptor is driven by a device driver,
which is different for each physical device and operating system. From the
point of view of the software, the device driver api *is* the network adaptor
programmer interface, and it does not care about which device driver it is,
so all network adaptors must have the same programmer interface. And
what is that interface?
Networking is a wart built on top of warts built on top of warts. IP6 was
intended to clean up this mess, but kind of collapsed under rule by
committee, developing a multitude of arcane, overly complicated, and overly
clever cancers of its own, different from, and in part incompatible
with, the vast pile of cruft that has grown on top of IP4.
The committee wanted to throw away the low order sixty four bits of
address space to use to post information for the NSA to mop up, and then
other people said to themselves, "this seems like a useless way to abuse
the low order sixty four bits, so let us abuse it for something else. After all,
no one is using it, nor can they use it because it is being abused". But
everyone whose internet facing host has been assigned a single address,
which means has actually been assigned $2^{64}$ addresses because he has
sixty four bits of useless address space, needs to use it, since he probably
wants to connect a private in house network through his single internet
facing host, and would like to be free to give some of his in house hosts
globally routable addresses.
In which case he has a private network address space, which is a random
subnet of fd::/8, and a 64 bit subnet of the global address space, and what
he wants is that he can assign an in house computer a globally routable
address, whereupon anything it sends that has a destination that is not on
his private network address space, nor his subnet of the globally routable
address space, gets sent to the internet facing network interface.
Further, he would like every computer on his network to be automatically
assigned a globally routable address if it uses a name in the global system,
or a private fd:: address if it is using a name not in the global system, so
that the first time his computer tries to access the network with the domain
name he just assigned, it gets a unique network address which will never
change, and a reverse dns that can only be accessed through an address on
his private network. And if he assigns it a globally accessible name, he
would like the global dns servers and reverse dns servers to automatically
learn that address.
This is, at present, doable by DDI, which updates both your DHCP
server and your DNS server. Except that hardly anyone has an in house
DNS server that serves up his globally routable addresses. The I in DDI
stands for IP Address Manager or IPAM. In practice, everyone relies on
named entities having extremely durable network addresses which are a
pain and a disaster to dynamically update, or they use dynamic DNS, not IPAM.
What would be vastly more useful and usable is that your internet facing
peer routed globally routable packets to and from your private network,
and machines booting up on your private network automatically received
static addresses corresponding to their names.
Globally routable subnets can change, because of physical changes in the
global network, but this happens so rarely that a painful changeover is
acceptable. The IP6 fix for automatically accommodating this issue is a
cumbersome disaster, and everyone winds up embedding their globally
routable IP6 subnet address in a multitude of mystery magic incantations,
which, in the event of a change, have to be painstakingly hunted down and
changed one by one, so the IP6 automatic configuration system is just a
great big wart in a dinosaur's asshole. It throws away half the address
space, and seldom accomplishes anything useful.
# Distributed Denial of Service attack
At present, resistance to Distributed Denial of Service attacks rests on


@ -1,7 +1,7 @@
---
# katex
title: running average
---
...
The running average $a_n$ of a series $R_n$


@ -1,6 +1,6 @@
---
title: Scaling, trust and clients
---
...
# Scaling
@ -117,6 +117,8 @@ with both privacy and scaling.
Zk-snarks are not yet a solution. They have enormous potential
benefits for privacy and scaling, but as yet, no one has quite found a way.
[performance survey of zksnarks](https://github.com/matter-labs/awesome-zero-knowledge-proofs#comparison-of-the-most-popular-zkp-systems)
A zk-snark is a succinct proof that code *was* executed on an immense pile
of data, and produced the expected, succinct, result. It is a witness that
someone carried out the calculation he claims he did, and that calculation
@ -203,19 +205,59 @@ Last time I checked, [Cairo] was not ready for prime time.
Maybe it is ready now.
The two basic problems with zk-snarks are that, even though a zk-snark
proving something about an enormous data set is quite small and can be
quickly verified by anyone, it requires enormous computational resources to
generate the proof, and that the end user has no easy way of knowing that
the verification verifies what it is supposed to verify.
Starkware's [Cairo] now has a zk-stark friendly elliptic curve. But they
suggest it is better to prove identity another way, I would assume by
proving that the preimage contains a secret that is the same as a secret
another pre-image contains. For example, that the transaction was prepared
from unspent transaction outputs whose full preimage is a secret known only
to the rightful owners of the outputs.
To solve the first problem, we need distributed generation of the proof:
constructing a zk-snark that is a proof about a dag of zk-snarks,
effectively a zk-snark implementation of the map-reduce algorithm for
massive parallelism. In general map-reduce requires trusted shards that
will not engage in Byzantine defection, but with zk-snarks they can be
untrusted, allowing the problem to be massively distributed over the
internet.
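The map-reduce shape of that distributed proof generation can be sketched with hashes standing in for proofs. A real implementation would use a recursive proving system, which this toy deliberately does not; it only shows how leaf "proofs" produced by untrusted workers combine pairwise into one root "proof" that commits to all of them.

```python
# Toy sketch of map-reduce proof aggregation: hashes stand in for proofs.
import hashlib

def leaf_proof(chunk: bytes) -> bytes:
    # a worker "proves" its chunk
    return hashlib.sha256(b"leaf:" + chunk).digest()

def combine(left: bytes, right: bytes) -> bytes:
    # a worker "proves" that it verified two child proofs
    return hashlib.sha256(b"node:" + left + right).digest()

def aggregate(proofs: list[bytes]) -> bytes:
    # reduce pairwise until a single root remains; an odd leftover carries up
    while len(proofs) > 1:
        nxt = [combine(proofs[i], proofs[i + 1])
               for i in range(0, len(proofs) - 1, 2)]
        if len(proofs) % 2:
            nxt.append(proofs[-1])
        proofs = nxt
    return proofs[0]
```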
Their main use of this zk-stark friendly elliptic curve is to enable recursive
proofs of verification: hash based proofs of elliptic curve based proofs.
[pre-image of a hash]:https://berkeley-desys.github.io/assets/material/lec5_eli_ben_sasson_zk_stark.pdf
An absolutely typical, and tolerably efficient, proof is to prove that one
knows the [pre-image of a hash]. And then, of course, one wants to also prove
various things about what is in that pre-image.
I want to be able to prove that the [pre-image of a hash has certain
properties, among them that it contains proofs that I verified that the
pre-image of hashes contained within it have certain properties](https://cs251.stanford.edu/lectures/lecture18.pdf).
[Polygon]:https://www.coindesk.com/tech/2022/01/10/polygon-stakes-claim-to-fastest-zero-knowledge-layer-2-with-plonky2-launch/
[Polygon], with four hundred million dollars in funding, claims to have
accomplished this.
[Polygon] is funding a variety of zk-snark intiatives, but the one that claims
to have recursive proofs running over a Merkle root is [Polygon zero](https://blog.polygon.technology/zkverse-polygons-zero-knowledge-strategy-explained/),
which claims:
Plonky2 combines the best of STARKs, fast proofs and no trusted
setup, with the best of SNARKs, support for recursion and low
verification cost ...
... transpiled to ZK bytecode, which can be executed efficiently in our VM running inside a STARK.
So, if you have their VM that can run inside a stark, and their ZK
bytecode, you can write your own ZK language to support a friendly
system, instead of an enemy system - a language that can do what we want done,
rather than what our enemies in Ethereum want done.
The key is writing a language that operates on what looks to it like sql
tables, producing a proof that the current state, expressed as a collection
of tables represented as a Merkle Patricia tree, is the result of valid
operations on a collection of transactions, also represented as a Merkle
Patricia tree, that acted on the previous state; a language that allows
generic transactions on generic tables, rather than Ethereum transactions on
Ethereum data structures.
But it is a four hundred million dollar project that is in the pocket of our
enemies. On the other hand, if they put their stuff in Ethereum, then I
should be able to link an allied proof into an enemy proof, producing a
side chain with unlimited side chains, that can be verified from its own
root, or from Ethereum's root.
To solve the second problem, we need an [intelligible scripting language for
generating zk-snarks], a scripting language that generates serial verifiers
@ -226,6 +268,10 @@ generating zk-snarks]:https://www.cairo-lang.org
"Welcome to Cairo
A Language For Scaling DApps Using STARKs"
It constructs a byte code that gets executed in a STARK. It is designed to
compile Ethereum contracts to that byte code, and likely our enemies will
fuck the compiler, but hard for them to fuck the byte code.
Both problems are being actively worked on. Both problems need a good deal
more work, last time I checked. For end user trust in client wallets
relying on zk-snark verification to be valid, at least some of the end


@ -1,7 +1,7 @@
---
title:
Set up build environments
---
...
# Virtual Box
To build a cross platform application, you need to build in a cross
@ -1095,7 +1095,7 @@ once DNS points to this server.
But if you are doing this, not on your test server, but on your live server, the easy way, which will also setup automatic renewal and configure your webserver to be https only, is:
```bash
`certbot --nginx -d mail.reaction.la,blog.reaction.la,reaction.la`
certbot --nginx -d mail.reaction.la,blog.reaction.la,reaction.la
```
If instead you already have a certificate, because you copied over your `/etc/letsencrypt` directory


@ -1,7 +1,7 @@
---
title: Sharing the pool
#katex
---
...
Every distributed system needs a shared data pool, if only so that peers can
find out who is around.


@ -1,7 +1,7 @@
---
title:
Social networking
---
...
# the crisis of censorship
We have a crisis of censorship.


@ -1,7 +1,7 @@
---
title: >-
Sox Accounting
---
...
Accounting and bookkeeping are failing in an increasingly low trust world
of ever less trusting and ever less trustworthy elites, and Sarbanes-Oxley
accounting (Sox) is an evil product of this evil failure.


@ -1,7 +1,7 @@
---
title: Sybil Attack
# katex
---
...
We are able to pretend that bank transactions are instant, because they are
reversible. And then they get reversed for the wrong reasons, often very bad
reasons.


@ -1,7 +1,7 @@
---
title: >-
Triple Entry Accounting
---
...
See [Sox accounting], for why we need to replace Sox accounting with triple entry accounting.
[Sox accounting]:sox_accounting.html


@ -1,7 +1,7 @@
---
title:
The Usury problem
---
...
The Christian concept of usury presupposes that capitalism was divinely
ordained in the fall, and that we are commanded to use capital productively
and wisely.


@ -1,7 +1,7 @@
---
title: >-
Rhocoin White Paper
---
...
This is a preliminary draft, not the final version.
The centre of mass of the world financial system is starting to shift from


@ -1,7 +1,7 @@
---
title: >-
Bitzion: how Bitcoin becomes a state
---
...
It probably won't happen. It probably should.
Statelike nonstates fascinate all political engineers. Can a nonstate


@ -2,8 +2,8 @@
title: >-
Writing and Editing Documentation
# katex
---
# Pandoc Markdown
...
Much documentation is in Pandoc markdown, because it is easier to write. But
html is easier to read, and allows superior control of appearance.
@ -84,7 +84,7 @@ Thus the markdown version of this document starts with:
title: >-
Writing and Editing Documentation
# katex
---
...
```
# Converting html source to markdown source
@ -109,7 +109,7 @@ $$\int \sin(x) dx = \cos(x)$$
$$\sum a_i$$
$$\lfloor{(x+5)÷6}\rfloor = \lceil{(x÷6}\rceil$$
$$\lfloor{(x+5)/6}\rfloor = \lceil{(x/6}\rceil$$
Using Omicron, \bigcirc, not capital O for big \bigcirc. `\Omicron` will not always
Use `\bigcirc`, not capital O for Omicron $\bigcirc$. `\Omicron` will not always
compile correctly, but `\ln` and `\log` is more likely to compile correctly than
`ln` and `log`, which it tends to render as symbols multiplied, rather than one
symbol.
@ -153,7 +153,7 @@ A file that does not need katex has the header:
---
title: >-
Document title
---
...
```
But if it does need katex, it has the header
@ -163,7 +163,7 @@ But if it does need katex, it has the header
title: >-
Document title
# katex
---
...
```
So that the bash script file `./mkdoc.sh` will tell `Pandoc` to find the katex scripts.
@ -295,6 +295,13 @@ takes through all subsequent t points, sometimes pushing the curve
into pathological territory where bezier curves give unexpected and
nasty results.
Sometimes you just have to calculate the average of the previous and
following control points, and make it the start point of a new c curve
followed by s curves, or a new q curve followed by t curves.
Or if the terminal curve is a given, calculate the prior control point as
twice the start point minus the following control point.
Scalable vector graphics are dimensionless, and the `<svg>` tag's
height, width, and `viewBox` attributes translate the dimensionless
quantities into pixels. The graphics default to fixed aspect ratio, and


@ -1,7 +1,7 @@
---
title:
Zookos Triangle
---
...
# Zooko Identity
@ -170,7 +170,7 @@ When sending a message, you can reference a guid by the petname
For example, if Bob's petname for John_Smith is John_Smith71 (because he already had John_Smiths one to seventy as petnames on his computer), but Carol's petname for John_Smith is JohnSmith, and Bob sends a message to Carol containing the text
> remember when @John_Smith71 said ...
> remember when @`JohnSmith71` said ...
then this gets translated on sending to