Merge remote-tracking branch 'origin/docs'

This commit is contained in:
Cheng 2024-02-14 21:19:53 +10:00
commit 76e406424c
No known key found for this signature in database
GPG Key ID: 571C3A9C3B9E6FCA
34 changed files with 2085 additions and 731 deletions

View File

@ -77,11 +77,11 @@ from the same representative sample.
For each peer that could be on the network, including those that have been
sleeping in a cold wallet for years, each peer keeps a running cumulative
total of that peers stake. With every new block, the peers stake is added to
total of that peer's shares. With every new block, the peer's shares are added to
its total.
On each block of the chain, a peers rank is the bit position of the highest
bit of the running total that rolled over when its stake was added for that
bit of the running total that rolled over when its shares were added for that
block.
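A toy illustration in C++ of that rollover rule, assuming a 64 bit running total (the names are mine, not anything in the sources):

```c++
#include <bit>
#include <cstdint>

// Each block, add the peer's shares to its running total. The rank is
// the position of the highest bit that rolled over, that is, was
// carried from 1 to 0 by the addition, so a peer with more shares
// rolls the high bits proportionately more often.
unsigned rank_for_block(uint64_t& running_total, uint64_t shares) {
    uint64_t before = running_total;
    running_total += shares;
    uint64_t rolled = before & ~running_total; // bits that went 1 -> 0
    return std::bit_width(rolled); // 0 if nothing rolled over,
                                   // else one plus the highest rolled bit
}
```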
*edit note*
@ -94,13 +94,13 @@ Which gives the same outcome, that on average and over time, the total weight wi
*end edit note*
So if Bob has a third of the stake of Carol, and $N$ is a rank that
corresponds to bit position higher than the stake of either of them, then
So if Bob has a third of the shares of Carol, and $N$ is a rank that
corresponds to a bit position higher than the shares of either of them, then
Bob gets to be rank $R$ or higher one third as often as Carol. But even if
his stake is very low, he gets to be high rank every now and then.
he has a very small shareholding, he gets to be high rank every now and then.
A small group of the highest ranking peers get to decide on the next block,
and the likelihood of being a high ranking peer depends on stake.
and the likelihood of being a high ranking peer depends on shares.
They produce the next block by unanimous agreement and joint signature.
The group is small enough that this is likely to succeed, and if they do not,
@ -239,7 +239,7 @@ this, and the system needs to be able to produce a sensible result even if
some peers maliciously or through failure do not generate sequential
signature sequence numbers.
Which, in the event of a fork, will on average reflect the total stake of
Which, in the event of a fork, will on average reflect the total shares of
peers on that fork.
If two prongs have the same weight, take the prong with the most transactions. If they have the same weight and the same number of transactions, hash all the public keys of the signatories that formed the
@ -247,7 +247,7 @@ blocks, their ranks, and the block height of the root of the fork and take
the prong with the largest hash.
This value, the weight of a prong of the fork will, over time for large deep
forks, approximate the stake of the peers online on that prong, without the
forks, approximate the shares of the peers online on that prong, without the
painful cost of taking a poll of all peers online, and without the considerable
risk that that poll will be jammed by hostile parties.
@ -444,7 +444,7 @@ I have become inclined to believe that there is no way around making
some peers special, but we need to distribute the specialness fairly and
uniformly, so that every peer get his turn being special at a certain block
height, with the proportion of block heights at which he is special being
proportional to his stake.
proportional to his shares.
If the number of peers that have a special role in forming the next block is
very small, and the selection and organization of those peers is not
@ -554,7 +554,7 @@ while blocks that received the other branch first continue to work on that
branch, until one branch gets ahead of the other branch, whereupon the
leading branch spreads rapidly through the peers. With proof of share, that
is not going work, one can lengthen a branch as fast as you please. Instead,
each branch has to be accompanied by evidence of the weight of stake of
each branch has to be accompanied by evidence of the weight of shares of
peers on that branch. Which means the winning branch can start spreading
immediately.
@ -681,17 +681,17 @@ limit, see:
themselves can commit transactions through the peers, if the clients
themselves hold the secret keys and do not need to trust the peers.
# Calculating the stake of a peer
# Calculating the shares represented by a peer
We intend that peers will hold no valuable or lasting secrets, that all the
value and the power will be in client wallets, and the client wallets with
most of the value, who should have most of the power, will seldom be online.
I propose proof of share. The stake of a peer is not the stake it owns, but
the stake that it has injected into the blockchain on behalf of its clients
and that its clients have not spent yet, or stake that some client wallet
I propose proof of share. The shares of a peer are not the shares it owns, but
the shares that it has injected into the blockchain on behalf of its clients
and that its clients have not spent yet, or shares that some client wallet
somewhere has chosen to be represented by that peer. Likely only the
whales will make a deliberate and conscious decision to have their stake
whales will make a deliberate and conscious decision to have their shares
represented by a peer, and it will be a peer that they likely control, or that
someone they have some relationship with controls, but not necessarily a
peer that they use for transactions.

View File

@ -95,7 +95,7 @@ cryptographic mathematics, but by the fact that our blockchain, unlike
the others, is organized around client wallets chatting privately with
other client wallets. Every other blockchain has necessary cryptographic
mathematics to do the equivalent, usually more powerful and general
than anything on the rhocoin blockchain, and Monaro has immensely
than anything on the rhocoin blockchain, and Monero has immensely
superior cryptographic capabilities, but somehow, they don't, the
difference being that rhocoin is designed to avoid uses of the internet
that render a blockchain vulnerable to regulation, rather than to use

View File

@ -102,6 +102,11 @@ upper bound. To find the actual MTU, have to have a don't fragment field
(which is these days generally set by default on UDP) and empirically
track the largest packet that makes it on this connection. Which TCP does.
MTU (packet size) and MSS (data size, $MTU-40$) are a
[messy problem](https://www.cisco.com/c/en/us/support/docs/ip/generic-routing-encapsulation-gre/25885-pmtud-ipfrag.html),
which can be sidestepped by always sending packets
of size 576 containing 536 bytes of data.
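In code, the conservative rule above is just a pair of constants (the names are mine):

```c++
// Every IPv4 host must accept 576 byte datagrams, and the convention
// above reserves 40 bytes for headers, so MSS = MTU - 40 = 536.
constexpr int max_packet_size = 576;                  // MTU we always assume
constexpr int max_data_size = max_packet_size - 40;   // 536 bytes of data
```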
## first baby steps
To try and puzzle this out, I need to build a client server that can listen on

7
docs/design/mkdocs.sh Normal file
View File

@ -0,0 +1,7 @@
#!/bin/bash
set -e
cd `dirname $0`
docroot="../"
banner_height=banner_height:15ex
templates=$docroot"pandoc_templates"
. $templates"/mkdocs.cfg"

View File

@ -8,8 +8,10 @@ name system, SSL, and email. This is covered at greater length in
# Implementation issues
There is a great [pile of RFCs](TCP.html) on issues that arise with using udp and icmp
There is a great [pile of RFCs] on issues that arise with using udp and icmp
to communicate.
[Peer-to-Peer Communication Across Network Address Translators]
(https://bford.info/pub/net/p2pnat/){target="_blank"}
## timeout
@ -30,7 +32,7 @@ needed. They never bothered with keep alive. They also found that a lot of
the time, both parties were behind the same NAT, sometimes because of
NATs on top of NATs
[hole punching]:http://www.mindcontrol.org/~hplus/nat-punch.html
[hole punching]:https://tailscale.com/blog/how-nat-traversal-works
"How to communicate peer-to-peer through NAT firewalls"
{target="_blank"}

7
docs/design/navbar Normal file
View File

@ -0,0 +1,7 @@
<div class="button-bar">
<a href="../manifesto/vision.html">vision</a>
<a href="../manifesto/scalability.html">scalability</a>
<a href="../manifesto/social_networking.html">social networking</a>
<a href="../manifesto/Revelation.html">revelation</a>
</div>

615
docs/design/peer_socket.md Normal file
View File

@ -0,0 +1,615 @@
---
# katex
title: >-
Peer Socket
sidebar: false
notmine: false
...
::: myabstract
[abstract:]{.bigbold}
Most things follow the client server model,
so it makes sense to have a distinction between server sockets
and client sockets. But ultimately what we are doing is
passing messages between entities, and the revolutionary
and subversive technologies, bittorrent, bitcoin, and
bitmessage, are peer to peer, so it makes sense that all sockets,
however created, wind up with the same properties.
:::
# factoring
In order to pass messages, the socket has to know a whole lot of state. And
in order to handle messages, the entity handling the messages has to know a
whole lot of state. So a socket api is an answer to the question of how we
factor this big pile of state into two smaller piles of state.
Each big bundle of state represents a concurrent communicating process.
Some of the state of this concurrent communicating process is on one side
of our socket division, and some of it is on the other. The
application knows the internals of its own state, but the internals
of the socket state are opaque to it, while the socket knows the internals
of the socket state, but the internals of the application state are opaque to it.
The socket state machines think that they are passing messages of one class,
or a very small number of classes, to one big state machine, which messages
contain an opaque block of bytes that the application class serializes and
deserializes.
## layer responsibilities
The sockets layer just sends and receives arbitrary size blocks
of opaque bytes over the wire between two machines.
They can be sent with or without flow control
and with or without reliability,
but if the block is too big to fit in this connection's maximum
packet size, the without flow control and without
reliability option is ignored. Flow control and reliability are
always applied to messages too big to fit in a packet.
The despatch layer parses out the in-reply-to and the
in-regards-to values from the opaque block of bytes and despatches them
to the appropriate application layer state machine, which parses out
the message type field, deserializes the message,
and despatches it to the appropriate fully typed event handler
of that state machine.
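So the only structure the despatch layer needs to see in the otherwise opaque block might be something like the following sketch; the field names and widths are my assumptions, not a settled wire format:

```c++
#include <cstdint>

// The despatch layer parses only these fields out of the block the
// sockets layer hands up; everything after them stays opaque until
// the application layer state machine deserializes it.
struct message_header {
    uint64_t in_regards_to; // long lived state machine, 0 if none
    uint64_t in_reply_to;   // message being replied to, 0 if none
    uint32_t message_type;  // selects deserializer and typed handler
};
```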
It is remarkable how much stuff can be done without
concurrent communicating processes. Nostr is entirely
implemented over request reply, except that a whole lot
of requests and replies have an integer representing state,
where the state likely winds up being a database rowid.
The following discussion also applies if the reply-to field
or in-regards-to field is associated with a database index
rather than an instance of a class living in memory, and might
well be handled by an instance of a class containing only a database index.
# Representing concurrent communicating processes
node.js represents them as continuations. Rust tokio represents them
as something like continuations. Go represents them as lightweight
threads, which is a far more natural and easier to use representation,
but under the hood they are something like continuations, and the
abstraction leaks a little. It leaks in the case where you have one
concurrent process on one machine communicating with another concurrent
process on another machine.
Well, in C++, we are going to make instances of a class that register
callbacks, and the callback is the event. The callback has an instance
of a class registered with it, which in C++ means a pointer
to a method of an object, which has no end of syntax that no one
ever manages to get their head around.
So if `dog` is a method pointer with the argument `bark`, just say
`std::invoke(dog, bark)` and let the compiler figure out how
to do it. `bark` is, of course, the data supplied by the message
and `dog` is the concurrent communicating process plus its registered
callback. And since the process is sequential, it knows the data
for the message that this is a reply to.
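One concrete reading of the `dog` and `bark` example, as a sketch; with `std::invoke` a raw member pointer also needs the object, so here `dog` is the object carrying the process state:

```c++
#include <cstdio>
#include <functional>

struct Dog {                        // a concurrent communicating process
    int barks_heard = 0;            // its sequential state
    void on_bark(int volume) {      // its registered callback
        ++barks_heard;
        std::printf("bark %d, total %d\n", volume, barks_heard);
    }
};

int main() {
    Dog dog;
    auto callback = &Dog::on_bark;  // pointer to member function
    std::invoke(callback, dog, 42); // compiler figures out the call syntax
}
```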
A message may contain a reply-to field and/or an in-regards-to field.
In general, the in-regards-to field identifies the state machine
on the server and the client, and remains unchanged for the life
of the state machines. Therefore its handler function remains unchanged,
though it may do different things depending
on the state of the state machine and depending on the type of the message.
If the message only has an in-regards-to field, then the callback function for it
will normally be registered for the life of the concurrent process (instance).
If it is an in-reply-to, the dispatch mechanism will unregister the handler when it
dispatches the message. If you are going to receive multiple messages in response
to a single message, then you create a new instance.
In C, one represents actions of concurrent processes by a
C function that takes a callback function, so in C++,
a member function that takes a member function callback
(warning, scary and counter intuitive syntax).
Pointers to member functions are a huge mess containing
a hundred workarounds, and the best workaround is to not use them.
People have a whole lot of ingenious ways to not use them, for example
a base class that passes its primary function call to one of many
derived classes. Which solution does not seem applicable to our
problem.
`std::invoke` is syntax sugar for calling weird and wonderful
callable things - it figures out the syntax for you at compile
time according to the type, and is strongly recommended, because
with the wide variety of C++ callable things, no one can stretch
their brain around the differing syntaxes.
The many, many, clever ways of not using member pointers
just do not cut it, for the return address on a message ultimately maps
to a function pointer, or something that is exactly equivalent to a function pointer.
Of course, we very frequently do not have any state, and you just
cannot have a member function pointer to a static function. One way around
this problem is just to have one concurrent process whose state just
does not change, one concurrent process that cheerfully handles
messages from an unlimited number of correspondents, all using the same
`in-regards-to`, which may well be a well known named number, the functional
equivalent of a static web page. It is a concurrent process,
like all the others, and has its own data like all the others, but its
data does not change when it responds to a message, so never expects an
in-reply-to response, or if it does, creates a dynamic instance of another
type to handle that. Because it does not remember what messages it sent
out, the in-reply-to field is no use to it.
Or, possibly our concurrent process, which is static and stateless
in memory, nonetheless keeps state in the database, in which case
it looks up the in-reply-to field in the database to find
the context. But a database lookup can hang a thread,
and we do not want network facing threads to stall.
So we have a single database handling thread that sequentially handles a queue
of messages from the network facing threads that drive the network facing
concurrent processes. It drives the database facing concurrent processes,
which dispatch their results into a queue that is handled by the
network facing threads that drive the network facing concurrent
processes.
So: a single thread that handles the network card, despatching
messages out from a queue in memory and in from the wire to a queue in
memory, which does not usually or routinely do memory allocation or
release, though it handles messages itself if they are standard, common,
and known to be capable of being quickly handled;
a single thread that handles concurrent systems that are purely
memory to memory, but could involve dynamic allocation of memory;
and a single thread that handles concurrent state machines that do database
lookups and writes and possibly dynamic memory allocation, but do not
directly interact with the network, handing that task over to concurrent
state machines in the networking thread.
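A minimal sketch of that three thread factoring, with queues decoupling the threads; every name here is illustrative:

```c++
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>

struct msg { /* despatch fields plus opaque payload */ };

class msg_queue {                    // decouples the three threads
    std::mutex m;
    std::condition_variable cv;
    std::deque<msg> q;
public:
    void push(msg x) {
        { std::lock_guard lk(m); q.push_back(std::move(x)); }
        cv.notify_one();
    }
    msg pop() {                      // blocks only the calling thread
        std::unique_lock lk(m);
        cv.wait(lk, [&] { return !q.empty(); });
        msg x = std::move(q.front());
        q.pop_front();
        return x;
    }
};

msg_queue to_database, to_network;

void database_thread() {             // does the lookups that can hang
    for (;;) {
        msg x = to_database.pop();
        // ... read or write the database ...
        to_network.push(std::move(x)); // result back to the network thread
    }
}

int main() {
    std::thread db(database_thread);
    to_database.push(msg{});         // a network facing thread would do this
    db.detach();                     // sketch only; real code joins on shutdown
}
```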
So a message comes in through the wire, where it is handled
by a concurrent process, probably a state machine with per connection
state, though it might have substates, child concurrent processes,
for reassembling one multipart message without hanging the next.
It then passes that message to a state machine in the application
layer, which is queued up in the queue for the thread or threads appropriate
to its destination concurrent process, and receives messages from those threads,
which it then despatches to the wire.
A concurrent process is of course created by another
concurrent process, so when it completes,
it does a callback on the concurrent process that created it,
and any concurrent processes it has created
are abruptly discarded. So our external messages and events
involve a whole lot of purely internal messages and events.
And the event handler has to know what internal object this
message came from,
which for external messages is the in-regards-to field,
or is implicit in the in-reply-to field.
If you could be receiving events from different kinds of
objects about different matters, well, you have to have
different kinds of handlers. And usually you are only
receiving messages from only one such object, but in
irritatingly many special cases, several such objects.
But it does not make sense to write for the fully general case
when the fully general case is so uncommon, so we handle this
case ad-hoc by a special field, which is defined only for this
message type, not defined as a general quality of all messages.
It typically makes sense to assume we are handling only one kind
of message, possibly of variant type, from one object, and in
the other, special, cases, we address that case ad hoc by additional
message fields.
But if we support `std::variant`, there is a whole lot of overlap
between handling things by a new variant, and handling things
by a new callback member.
The recipient must have associated a handler, consisting of a
call back and an opaque pointer to the state of the concurrent process
on the recipient with the messages referenced by at least one of
these fields. In the event of conflicting values, the reply-to takes
precedence, but the callback of the reply-to has access to both its
data structure, and the in-regards-to data structure, a pointer to which
is normally in its state. The in-regards-to being the state machine,
and the in-reply-to the event that modifies the
state of the state machine.
When we initialize a connection, we establish a state machine
at both ends, both the application factor of the state machine,
and the socket factor of the state machine.
When I say we are using state machines, this is just the
message handling event oriented architecture familiar in
programming graphical user interfaces.
Such a program consists of a pile of derived classes whose
base classes have all the machinery for handling messages.
Each instance of one of these classes is a state machine,
which contains member functions that are event handlers.
So when I say "state machine", I mean a class for handling
events like the many window classes in wxWidgets.
One big difference will be that we will be posting a lot of events
that we expect to trigger events back to us. And we will want
to know which posting the returning event came from. So we will
want to create some data that is associated with the fired event,
and when a resulting event is fired back to us, we can get the
correct associated data, because we might fire a lot of events,
and they might come back out of order. Gui code has this capability,
but it is rarely used.
## Implementing concurrent state machines in C++
Most of this is me re-inventing Asio, which is part of the
immense collection of packages of Msys2. Obviously I would be
better off integrating Asio than rebuilding it from the ground up,
but I need to figure out what needs to be done, so that I can
find the equivalent Asio functionality.
Or maybe Asio is a bad idea. Boost Asio was horribly broken.
I am seeing lots of cool hot projects using Tokio, and not seeing any cool
hot projects using Asio.
If the Bittorrent DHT library did its own
implementation of concurrent communicating processes,
maybe Asio is broken at the core.
And for flow control, I am going to have to integrate Quic,
though I will have to fork it to change its security model
from certificate authorities to Zooko names. You can in theory
easily plug any kind of socket into Asio,
but I see a suspicious lack of people plugging Quic into it,
because Quic contains a huge amount of functionality that Asio
knows nothing of. But if I am forking it, I can probably ignore
or discard most of that functionality.
Gui code is normally single threaded, because it is too much of
a bitch to lock an instance of a message handling class when one of
its member functions is handling a message (the re-entrancy problem).
However the message plumbing has to know which class is going
to handle the message (unless the message is being handled by a
stateless state machine, which it often is) so there is no reason
the message handling machinery could not atomically lock the class
before calling its member function, and free it on return from
its member function.
State machines (message handling classes, as for example
in a gui) are allocated in the heap and owned by the message
handling machinery. The base constructor of the object plugs it
into the message handling machinery. (Well, wxWidgets uses the
base constructor with virtual classes to plug it in, but likely the
curiously recurring template pattern would be saner
as in ATL and WTL.)
This means they have to be destroyed by sending a message to the message
handling machinery, which eventually results in
the destructor being called. The destructor does not have to worry
about cleanup in all the base classes, because the message handling
machinery is responsible for all that stuff.
Our event despatcher does not call a function pointer,
because our event handlers are member functions.
We call an object of type `std::function`. We could also use pointer to member,
which is more efficient.
All this complicated machinery is needed because we assume
our interaction is stateful. But suppose it is not. The request-reply
pattern, where the request contains all information that
determines the reply, is very common, probably the great majority
of cases. This corresponds to an incoming message where the
in-regards-to field and in-reply-to field are empty,
because the incoming message initiates the conversation,
and its type and content suffice to determine the reply. Or the incoming message
causes the recipient to reply and also set up a state machine,
or a great big pile of state machines (instances of a message handling class),
which will handle the lengthy subsequent conversation,
which when it eventually ends results in those objects being destroyed,
while the connection continues to exist.
In the case of an incoming message of that type, it is despatched to
a fully re-entrant static function on the basis of its type.
The message handling machinery calls a function pointer,
not a class member.
We don't use, should not use, and cannot use, all the
message handling infrastructure that keeps track of state.
## receive a message with no in-regards-to field, no in-reply-to field
This is directed to a re-entrant function, not a functor,
because it is re-entrant and stateless.
It is directed according to message type.
### A message initiating a conversation
It creates a state machine (instance of a message handling class),
sends the start event to the state machine, and the state machine
does whatever it does. The state machine records what message
caused it to be created, and for its first message
uses it in the in-reply-to field, and for subsequent messages,
for its in-regards-to field.
### A request-reply message.
Which sends a message with the in-reply-to field set.
The recipient is expected to have a hash-map associating this field
with information as to what to do with the message.
#### A request-reply message where counterparty matters.
Suppose we want to read information about this entity from
the database, and then write that information. Counterparty
information likely needs to be durable. Then we do
the read-modify-write as a single sql statement,
and let the database serialize it.
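For example, with sqlite, hypothetical table and column names, the read, the check, and the write are one statement, so the database serializes it against every other connection:

```c++
#include <sqlite3.h>

// Debit a counterparty if and only if the balance covers it, in one
// statement, so nothing can interleave between our read and our write.
int debit(sqlite3* db, sqlite3_int64 counterparty, sqlite3_int64 amount) {
    const char* sql =
        "UPDATE counterparty SET balance = balance - ?1 "
        "WHERE id = ?2 AND balance >= ?1;";
    sqlite3_stmt* stmt = nullptr;
    int rc = sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr);
    if (rc != SQLITE_OK) return rc;
    sqlite3_bind_int64(stmt, 1, amount);
    sqlite3_bind_int64(stmt, 2, counterparty);
    rc = sqlite3_step(stmt);       // SQLITE_DONE on success
    sqlite3_finalize(stmt);
    return rc;
}
```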
## receive a message with no in-regards-to field, but with an in-reply-to field
The dispatch layer looks up a hash-map table of functors,
by the id of the field and the id of the sender, and despatches the message to
that functor to do whatever it does.
When this is the last message expected in reply to it, the functor
frees itself, removing itself from the hash-map. If a message
arrives with no entry in the table, it is silently dropped.
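A sketch of that table; the key shape and the types are my assumptions. The functor is removed as the message is despatched, and messages with no entry fall through silently:

```c++
#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <utility>

struct message {
    uint64_t in_reply_to;  // id of the message this replies to
    uint64_t sender;       // id of the counterparty
    // ... opaque payload follows
};

struct reply_key {
    uint64_t message_id, sender;
    bool operator==(const reply_key&) const = default; // C++20
};
struct reply_key_hash {
    std::size_t operator()(const reply_key& k) const {
        return std::hash<uint64_t>{}(
            k.message_id * 0x9e3779b97f4a7c15ULL ^ k.sender);
    }
};

class despatcher {
    std::unordered_map<reply_key, std::function<void(const message&)>,
                       reply_key_hash> handlers;
public:
    void expect_reply(reply_key k, std::function<void(const message&)> h) {
        handlers.emplace(k, std::move(h));
    }
    void despatch(const message& m) {
        auto it = handlers.find({m.in_reply_to, m.sender});
        if (it == handlers.end()) return; // no entry: silently dropped
        auto h = std::move(it->second);
        handlers.erase(it);               // last expected reply frees it
        h(m);
    }
};
```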
## receive a message with an in-regards-to field, with or without an in-reply-to field.
Just as before, the dispatch table looks up the hash-map of state machines
(instances of message handling classes) and dispatches
the message to the stateful message handler, which figures out what
to do with it according to its internal state. What to do with an
inreplyto field, if there is one, is something the stateful
message handler will have to decide. It might have its own hashmap for
the inreplyto field, but this would result in state management and state
transition of huge complexity. The expected usage is it has a small
number of static fields in its state that reference a limited number
of recently sent messages, and if the incoming message is not one
of them, it treats it as an error. Typically the state machine is
only capable of handling the
response to its most recent message, and merely wants to be sure
that this *is* a response to its most recent message. But it could
have shot off half a dozen messages with the inregardsto field set,
and want to handle the response to each one differently.
Though likely such a scenario would be easier to handle by creating
half a dozen state machines, each handling its own conversation
separately. On the other hand, if it is only going to be a fixed
and finite set of conversations, it can put all ongoing state in
a fixed and finite set of fields, each of which tracks the most
recently sent message for which a reply is expected.
## A complex conversation.
We want to avoid complex conversations, and stick to the
request-reply pattern as much as possible. But this is apt to result
in the server putting a pile of opaque data (a cookie) in its reply,
which it expects to have sent back with every request.
And these cookies can get rather large.
Bob decides to initiate a complex conversation with Carol.
He creates an instance of a state machine (instance of a message
handling class) and sends a message with no in-regards-to field
and no in-reply-to field, but when he sends that initial message,
his state machine gets put in, and owned by,
the dispatch table according to the message id.
Carol, on receiving the message, also creates a state machine,
associated with that same message id, albeit its counterparty is
Bob rather than Carol, which state machine then sends a reply to
that message with the in-reply-to field set, and which therefore
Bob's dispatch layer dispatches to the appropriate state machine
(message handler).
And then it is back and forth between the two stateful message handlers
both associated with the same message id until they shut themselves down.
## factoring layers.
A layer is code containing state machines that receive messages
on one machine, and then send messages on to other code on
*the same machine*. The sockets layer is the code that receives
messages from the application layer, and then sends them on the wire,
and the code that receives messages from the wire,
and sends messages to the application layer.
But a single state machine at the application level could be
handling several connections, and a single connection could have
several state machines running independently, and the
socket code should not need to care.
We have a socket layer that receives messages containing an
opaque block of bytes, and then sends a message to
the application layer message despatch machinery, for whom the
block is not opaque, but rather identifies a message type
meaningful for the despatch layer, but meaningless for the socket layer.
The state machine terminates when its job is done,
freeing up any allocated memory,
but the connection endures for the life of the program,
and most of the data about a connection endures in
an sql database between reboots.
The connection is a long lived state machine running in
the sockets layer, which sends and receives what are to it opaque
blocks of bytes to and from the dispatch layer, and the dispatch
layer interprets these blocks of bytes as having information
(message type, in-regards-to field and in-reply-to field)
that enables it to despatch the message to a particular method
of a particular instance of a message handling class in C++,
corresponding to a particular channel in Go.
And these message handling classes are apt to be short lived,
being destroyed when their task is complete.
Because we can have many state machines on a connection,
most of our state machines can have very little state,
typically an infinite receive loop, an infinite send receive loop,
or an infinite receive send loop, which have no state at all,
and are stateless. We factorize the state machine into many state machines
to keep each one manageable.
## Comparison with concurrent interacting processes in Go
These concurrent communicating processes are going to
be sending messages to each other on the same machine.
We need to model Go's goroutines.
A goroutine is a function, and functions always terminate --
and in Go are unceremoniously and abruptly ended when their parent
function ends, because they are variables inside its dataspace,
as are their channels.
And, in Go, a channel is typically passed by the parent to its children,
though they can also be passed in a channel.
Obviously this structure is impossible and inapplicable
when processes may live, and usually do live,
in different machines.
The equivalent of Go channel is not a connection. Rather,
one sends a message to the other to request it create a state machine,
which will correspond to the in-regards-to message, and the equivalent of a
Go channel is a message type, the in-regards-to message id,
and the connection id. Which we pack into a single class so that we
can use it the way Go uses channels.
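Packed into a single class, that Go channel equivalent might look like this (the field names are my assumptions):

```c++
#include <cstdint>

// What a Go channel corresponds to here: not the connection, but the
// triple that routes a message to one remote state machine.
struct channel {
    uint64_t connection_id; // which wire connection
    uint64_t in_regards_to; // which state machine on that connection
    uint32_t message_type;  // which kind of message flows on it
    bool operator==(const channel&) const = default; // C++20
};
```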
The sockets layer (or another state machine on the application layer)
calls the callback routine with the message and the state.
The sockets layer treats the application layer as one big state
machine, and the information it sends up to the application
enables the application layer to despatch the event to the
correct factor of that state machine, which we have factored into
as many very small, and preferably stateless, state machines as possible.
We factor the potentially ginormous state machine into
many small state machines (message handling classes),
in the same style as Go factors a potentially ginormous Goroutine into
many small goroutines.
The socket code being a state machine composed of many
small state machines, which communicates with the application layer
over a very small number of channels,
these channels containing blocks of bytes that are
opaque to the socket code,
but are serialized and deserialized by the application layer code.
From the point of view of the application layer code,
it is many state machines,
and the socket layer is one big state machine.
From the point of view of the socket code, it is many state machines,
and the application layer is one big state machine.
The application code, parsing the in-reply-to message id,
and the in-regard-to message id, figures out where to send
the opaque block of bytes, and the recipient deserializes,
and sends it to a routine that acts on an object of that
deserialized class.
Since the sockets layer does not know the internals
of the message struct, the message has to be serialized and deserialized
into the corresponding class by the dispatch layer,
and thence to the application layer.
Go code tends to consist of many concurrent processes
continually being spawned by a master concurrent process,
and themselves spawning more concurrent processes.
# flow control and reliability
If we want to transmit a big pile of data, a big message, well,
this is the hard problem, for the sender has to throttle according
to the recipient's readiness to handle it and the physical connection's capability to transmit it.
Quic is a UDP protocol that provides flow control, and the obvious thing
to handle bulk data transfer is to fork it to use Zooko based keys.
[Tailscale]:https://tailscale.com/blog/how-nat-traversal-works
"How to communicate peer-to-peer through NAT firewalls"{target="_blank"}
[Tailscale] has solved a problem very similar to the one I am trying to solve,
albeit their solutions rely on a central human authority,
for which authority they ask money, and they recommend:
> If you're reaching for TCP because you want a
> stream-oriented connection when the NAT traversal is done,
> consider using QUIC instead. It builds on top of UDP,
> so we can focus on UDP for NAT traversal and still have a
> nice stream protocol at the end.
But to interface QUIC to a system capable of handling a massive
number of state machines, we are going to need something like Tokio,
because we want the thread to service other state machines while
QUIC is stalling the output or waiting for input. Indeed, no
matter what, if we stall in the socket layer rather than the
application layer, which makes life a whole lot easier for the
application programmer, we are going to need something like Tokio.
Or we could open up Quic, which we have to do anyway
to get it to use our keys rather than enemy controlled keys,
and plug it into our C++ message passing layer.
On the application side, we have to lock each state machine
when it is active. It can only handle one message at a time.
So the despatch layer has to queue up messages and stash them somewhere,
and if it has too many messages stashed,
it wants to push back on the state machine at the application layer
at the other end of the wire. So the despatch layer at the receiving end
has to from time to time tell the despatch layer at the sending end
"I have `n` bytes in regard to message 'Y', and can receive `m` more."
And when the despatch layer at the other end, which unlike the socket
layer knows which state machine is communicating with which,
has more than that amount of data to send, it then blocks
and locks the state machine at its end in its send operation.
The socket layer does not know about that and does not worry about that.
What it worries about is packets getting lost on the wire, and caches
piling up in the middle of the wire.
It adds to each message a send time and a receive time
and if the despatch layer wants to send data faster
than it thinks is suitable, it has to push back on the despatch layer.
Which it does in the same style.
It tells it the connection can handle up to `m` further bytes.
Or we might have two despatch layers, one for sending and one for
receiving, with the send state machine sending events to the receive state
machine, but not vice versa, in which case the socket layer
*can* block the send layer.
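The credit message the receiving despatch layer sends need carry no more than this (the names and field widths are mine):

```c++
#include <cstdint>

// "I have n bytes in regard to message Y, and can receive m more."
struct flow_credit {
    uint64_t in_regards_to;  // Y: the conversation being throttled
    uint64_t bytes_received; // n: received so far in this regard
    uint64_t bytes_window;   // m: how many further bytes may be sent
};
// When the sending despatch layer has more than bytes_window queued
// for this in-regards-to, it blocks and locks the sending state
// machine in its send operation until a fresh credit raises m.
```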
# Tokio
Most of this machinery seems like a re-implementation of Tokio-rust,
which is a huge project. I don't wanna learn Tokio-rust, but equally
I don't want to re-invent the wheel.
Or perhaps we could just integrate QUIC's internal message
passing infrastructure with our message passing infrastructure.
It probably already supports a message passing interface.
Instead of synchronously writing data, you send a message to it
to write some data, and when it is done, it calls a callback.
# Minimal system
Prototype. Limit global bandwidth at the application
state machine level -- they adjust their policy according to how much
data is moving, and they spread the non response outgoing
messages out to a constant rate (constant per counterparty,
and uniformly interleaved.)
Single threaded, hence no state machine locking.
Tweet style limit on the size of messages, hence no fragmentation
and re-assembly issue. Our socket layer becomes trivial -- it just
sends blobs like a zeromq socket.
If you are trying to download a sackload of data, you request a counterparty to send a certain amount to you at a given rate; he immediately responds (without regard to global bandwidth limits) with the first instalment, and a promise of further instalments at certain times.
Each instalment records how much has been sent and when, when the next instalment is coming, and the schedule for further instalments.
If you miss an instalment, you nack it after a delay. If he receives
a nack, he replaces the promised instalments with the missing ones.
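Each instalment, then, need carry only its position in the schedule. A sketch, not a settled wire format:

```c++
#include <cstdint>

struct instalment {
    uint64_t transfer_id;  // which bulk download this belongs to
    uint64_t bytes_so_far; // how much has been sent
    uint64_t sent_at;      // and when
    uint64_t next_due;     // when the next instalment is coming
    uint64_t interval;     // schedule for further instalments
    // payload bytes follow; an instalment not seen by next_due plus a
    // grace delay gets nacked, and the sender then replaces the
    // promised instalments with the missing ones
};
```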
The first thing we implement is everyone sharing a list of who they have successfully connected to, in recency order, with everyone keeping everyone else's list (which catastrophically fails to scale), and also how up to date their counterparties are with their own list, so that they do not have
to endlessly resend data (unless the counterparty has a catastrophic loss of data, and requests everything from the beginning).
We assume everyone has an open port, which sucks intolerably, but once that is working we can handle ports behind firewalls, because we are doing UDP. Knowing who the other guy is connected to, and you are not, you can ask him to initiate a peer connection for the two of you, until you have
enough connections that the keep alive works.
And once everyone can connect to everyone else by their public username, then we can implement bitmessage.

View File

@ -1,20 +1,20 @@
---
title:
proof of share
sidebar: true
notmine: false
...
::: {style="background-color : #ffdddd; font-size:120%"}
![run!](tealdeer.gif)[TL;DR Map a blockdag algorithm equivalent to the
Generalized MultiPaxos Byzantine
protocol to the corporate form:]{style="font-size:150%"}
::: myabstract
[abstract:]{.bigbold}
Map a blockdag algorithm to the corporate form.
The proof of share crypto currency will work like
shares. Crypto wallets, or the humans controlling the wallets,
correspond to shareholders.
Peer computers in good standing on the blockchain, or the humans
controlling them, correspond to company directors,
and the board of directors nominates a CEO.
:::
# the problem to be solved
We need proof of share because our state regulated system of notaries,
bankers, accountants, and lawyers has gone off the rails, and because
proof of work means that a tiny handful of people who are [burning a
@ -50,11 +50,12 @@ that in substantial part, it made such behavior compulsory.  Which is
why Gab is now doing an Initial Coin Offering (ICO) instead of an
Initial Public Offering (IPO).
[Sarbanes-Oxley]:sox_accounting.html
[Sarbanes-Oxley]:../manifesto/sox_accounting.html
"Sarbanes-Oxley accounting"
{target="_blank"}
Because current blockchains are proof of work, rather than proof of
stake, they give coin holders no power. Thus an initial coin offering
shares, they give coin holders no power. Thus an initial coin offering
(ICO) is not a promise of general authority over the assets of the
proposed company, but a promise of future goods or services that will be
provided by the company. A proof of share ICO could function as a more
@ -113,6 +114,10 @@ lawyers each drawing \$300 per hour, increasingly impractical.
Proof of work is a terrible idea, and is failing disastrously, but we
need to replace it with something better than bankers and government.
Proof of stake, as implemented in practice, is merely a
central bank digital currency with the consensus determined by a small
secretive insider group (hello Ethereum).
The gig economy represents the collapse of the corporate form under the
burden of HR and accounting.
@ -120,6 +125,96 @@ The initial coin offering (in place of the initial public offering)
represents an attempt to recreate the corporate form using crypto
currency, to which existing crypto currencies are not well adapted.
# How proof of share works
One way out of this is proof of share, plus evidence of good
connectivity, bandwidth, and disk speed. You have a crypto currency
that works like shares in a startup. Peers have a weight in
the consensus, a likelihood of their view of the past becoming the
consensus view, that is proportional to the amount of
crypto currency their client wallets possessed at a certain block height,
$\lfloor(h-1000)/4096\rfloor\times4096$, where $h$ is the current block height,
provided they maintain adequate data, disk access,
and connectivity. The trouble with this is that it reveals
which machines know where the whales are, and those machines
could be raided, and then the whales raided, so we have to have
a mechanism that can hide the ips of whales delegating weight
in the consensus to peers from the peers exercising that weight
in the consensus. And [in fact I intend to do that mechanism
before any crypto currency, because bitmessage is abandonware
and needs to be replaced](file:///C:/Users/john/src/reactionId/wallet/docs/manifesto/social_networking.html#monetization){target="_blank"}.
Plus the peers consense over time on a signature that
represents a human board, which nominates another signature that represents
a human ceo, thus instead of informal secret centralisation with
the capability to do unknown and possibly illegitimate things
(hello Ethereum), you have a known formal centralisation with
the capability to do known and legitimate things. Dictating
the consensus and thus rewriting the past not being one of those
legitimate things.
## consensus algorithm details
Each peer gets a weight at each block height that is a
deterministically random function of the block height,
its public key, the hash of the previous block that it is building its block
on top of, and the amount of crypto currency (shares)
that it represents, with the likelihood of it getting a high weight
proportional to the amount of crypto currency it represents, such
that the likelihood of a peer having a decisive vote is proportional
to the amount of shares it represents.
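A toy version of such a weight function, with splitmix64 standing in for the blockchain's real hash, and the scaling rule my own choice to make the chance of reaching any given high weight proportional to shares:

```c++
#include <cstdint>

static uint64_t splitmix64(uint64_t x) { // stand-in for the real hash
    x += 0x9e3779b97f4a7c15ULL;
    x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
    x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
    return x ^ (x >> 31);
}

// Deterministic, but unknowable before the previous block is known.
// score = shares / uniform(0,1], so P(score >= T) = shares / T: the
// chance of any given high weight is proportional to shares.
unsigned peer_weight(uint64_t height, uint64_t pubkey_hash,
                     uint64_t prev_block_hash, uint64_t shares) {
    uint64_t r = splitmix64(height ^ splitmix64(pubkey_hash ^
                 splitmix64(prev_block_hash))) | 1; // avoid divide by zero
    // 128 bit arithmetic via the gcc/clang extension
    unsigned __int128 score = ((unsigned __int128)shares << 64) / r;
    unsigned w = 0;
    while (score >>= 1) ++w; // weight = highest set bit of score
    return w;
}
```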
Each peer sees the weight of a proposed block as
the median weight of the three highest weighted peers
that it knows to have known of the block and its contents, according to
their current weight at this block height, and that perceived it as highest
weighted at the time they synchronized on it, plus the weight of
the median weighted peer among up to three peers
that were recorded by the proposer
as knowing the previous block that the proposed block
is being built on at the previous block height, plus the
weight of the chain of preceding blocks similarly.
When it synchronizes with another peer on a block, and the block is
at that time the highest weighted proposed block known to both
of them,
both record the other's signature as knowing that block
as the highest weighted known at that time. If one of them
knows of a higher weighted proposed block, then they
synchronize on whichever block will be the highest weighted block
when both have synchronized on it.
If one of them has a record of fewer than three peers knowing that block,
or if the other has a higher weight than one of the three,
then they also synchronize their knowledge of the highest weighted three.
This algorithm favors peers that represent a lot of shares, and also
favors peers with good bandwidth and data access, and peers that
are responsive to other peers, since they have more and better connections
and thus their proposed block is likely to become widely known faster.
If only one peer, the proposer, knows of a block, then its weight is
the weight of the proposer plus that of the previous blocks, but is lower than
the weight of any alternative block whose previous blocks have the
same weight but which is known to two proposers.
This rule biases the consensus to peers with good connection and
good bandwidth to other good peers.
If comparing two proposed blocks, each of them known to two proposers, that
have chains of previous blocks that are the same weight, then the weight
is the lower weighted of the two, but lower than any block known to
three. If known to three, the median weight of the three. If known to
a hundred, only the top three matter and only the top three are shared
around.
It is thus inherently sybil proof, since if one machine is running
a thousand sybils, each sybil has one thousandth the share
representation, one thousandth the connectivity, one thousandth
the random disk access, and one thousandth the cpu.
# Collapse of the corporate form
The corporate form is collapsing in part because of [Sarbanes-Oxley],
which gives accountants far too much power, and those responsible for
deliverables and closing deals far too little, and in part because HR
@ -194,7 +289,7 @@ intent was for buying drugs, buying guns, violating copyright, money
laundering, and capital flight.
These are all important and we need to support them all, especially
violating copyright, capital flight and buying guns under repressive
violating copyright, capital flight, and buying guns under repressive
regimes.  But we now see big demand for crypto currencies to support a
replacement for Charles the Second's corporate form, which is being
destroyed by HR, and to restore double entry accounting, which is being
@ -356,12 +451,12 @@ be controlled by private keys known only to client wallets, but most
transactions or transaction outputs shall be registered with one
specific peer.  The blockchain will record a peer's uptime, its
provision of storage and bandwidth to the blockchain, and the amount of
stake registered with a peer.  To be a peer in good standing, a peer has
shares registered with a peer.  To be a peer in good standing, a peer has
to have a certain amount of uptime, supply a certain amount of bandwidth
and storage to the blockchain, and have a certain amount of stake
and storage to the blockchain, and have a certain amount of shares
registered to it.  Anything it signed as being in accordance with the
rules of the blockchain must have been in accordance with the rules of
the blockchain.  Thus client wallets that control large amounts of stake
the blockchain.  Thus client wallets that control large amounts of shares
vote which peers matter, peers vote which peer is primus inter pares,
and the primus inter pares settles double spending conflicts and
suchlike.
@ -405,7 +500,7 @@ protocol where they share transactions around.
During gossip, they also share opinions on the total past of the blockchain.
If each peer tries to support past consensus, tries to support the opinion of
what looks like it might be the majority of peers by stake that it sees in
what looks like it might be the majority of peers by shares that it sees in
past gossip events, then we get rapid convergence to a single view of the
less recent past, though each peer initially has its own view of the very
recent past.
@ -593,20 +688,20 @@ network, we need the one third plus one to reliably verify that there
is no other one third plus one, by sampling geographically distant
and network address distant groups of nodes.
So, we have fifty percent by weight of stake plus one determining policy,
So, we have fifty percent by weight of shares plus one determining policy,
and one third of active peers on the network that have been nominated by
fifty percent plus one of weight of stake to give effect to policy
fifty percent plus one of weight of shares to give effect to policy
selecting particular blocks, which become official when fifty percent plus
one of active peers on the network that have been nominated by fifty percent
plus one of weight of stake have acked the outcome selected by one third
plus one of weight of shares have acked the outcome selected by one third
plus one of active peers.
In the rare case where half the active peers see timeouts from the other
half of the active peers, and vice versa, we could get two blocks, each
endorsed by one third of the active peers, which case would need to be
resolved by a fifty one percent vote of weight of stake voting for the
resolved by a fifty one percent vote of weight of shares voting for the
acceptable outcome that is endorsed by the largest group of active peers,
but the normal outcome is that half the weight of stake receives
but the normal outcome is that half the weight of shares receives
notification (the host representing them receives notification) of one
final block selected by one third of the active peers on the network,
without receiving notification of a different final block.

View File

@ -145,4 +145,4 @@ worth, probably several screens.
- [How to do VPNs right](how_to_do_VPNs.html)
- [How to prevent malware](safe_operating_system.html)
- [The cypherpunk program](cypherpunk_program.html)
- [Replacing TCP and UDP](names/TCP.html)
- [Replacing TCP and UDP](design/TCP.html)

View File

@ -1,5 +1,7 @@
---
title: Libraries
sidebar: true
notmine: false
...
A review of potentially useful libraries and utilities.
@ -124,6 +126,47 @@ does not represent the good stuff.
# Peer to Peer
## Freenet
Freenet has long intended to be, and perhaps is, the social
application that you have long intended to write,
and has an enormous coldstart advantage over anything you could write,
no matter how great.
It also relies on udp, to enable hole punching, and routinely does hole punching.
So the only way to go, to compete, is to write a better freenet within
freenet.
One big difference is that I think that we want to go after the visible net,
where network addresses are associated with public keys - that the backbone should be
ips that have a well known and stable relationship to public keys.
Which backbone transports encrypted information authored by people whose
public key is well known, but the network address associated with that
public key cannot easily be found.
Freenet, by design, chronically loses data. We need reliable backup,
paid for in services or crypto currency.
filecoin provides this, but is useless for frequent small incremental
backups.
## Bittorrent DHT library
This is a general purpose library, not all that married to bittorrent.
It is available as an MSYS2 package, MSYS2 being a fork of
the semi-abandoned MinGW library, with the result that the name of the
very dead project Mingw-w64 is all over it.
Its pacman name is mingw-w64-dht, but it has repos all over the place under its own name.
It is async, driven by being called on a timer, and called when
data arrives. It contains a simple example program that enables you to publish any data you like.
## libp2p
[p2p]:https://github.com/elenaf9/p2p
{target="_blank"}
Of course missing from this, from Jim's long list of plans, are DDoS protection, a
The net is vast and deep. Maybe we need to start cobbling these pieces together. The era of centralized censorship needs to end. Musk will likely lose either way, and he's only one man against the might of so many paper tigers that happen to be winning the information war.
## Lightning node
[`rust-lightning`]:https://github.com/lightningdevkit/rust-lightning
{target="_blank"}
[`rust-lightning`] is a general purpose library for writing lightning nodes, running under Tokio, that is used in one actual lightning node implementation.
It is intended to be integrated into on-chain wallets.
It provides the channel state as "a binary blob that you can store any way you want" -- which is to say, ready to be backed up onto the social net.
# Consensus
@ -785,7 +839,17 @@ libraries, but I hear it cursed as a complex mess, and no one wants to
get into it. They find the far from easy `cmake` easier. And `cmake`
runs on all systems, while autotools only runs on linux.
I believe `cmake` has a straightforward pipeline into `*.deb` files, but if it has, the autotools pipleline is far more common and widely used.
MSYS2, which runs on Windows, supports autotools. So, maybe it does run
on windows.
[autotools documentation]:https://thoughtbot.com/blog/the-magic-behind-configure-make-make-install
{target="_blank"}
Despite the complaints about autotools, there is [autotools documentation]
on the web that does not make it sound too bad.
I believe `cmake` has a straightforward pipeline into `*.deb` files,
but if it has, the autotools pipeline is far more common and widely used.
## The standard windows installer
@ -818,6 +882,8 @@ NSIS can create msi files for windows, and is open source.
[NSIS Open Source repository]
NSIS is also available as an MSYS package
People who know what they are doing seem to use this open
source install system, and they write nice installs with it.
@ -1138,7 +1204,7 @@ which could receive a packet at any time. I need to look at the
GameNetworkingSockets code and see how it listens on lots and lots of
sockets. If it uses [overlapped IO], then it is golden. Get it up first, and it put inside a service later.
[Overlapped IO]:server.html#the-select-problem
[Overlapped IO]:design/server.html#the-select-problem
{target="_blank"}
The nearest equivalent Rust application gave up on congestion control, having programmed themselves into a blind alley.
@ -1357,7 +1423,7 @@ transaction affecting the payee factor state. A transaction has no immediate
affect. The payer mutable substate changes in a way reflecting the
transaction block at the next block boundary. And that change then has
effect on product mutable state at a subsequent product state block
boundary, changing the stake possessed by the substate.
boundary, changing the shares possessed by the substate.
Which then has effect on the payee mutable substate at its next
block boundary when the payee substate links back to the previous
@ -1567,6 +1633,12 @@ You can create a pool of threads processing connection handlers (and waiting
for finalizing database connection), by running `io_service::run()` from
multiple threads. See Boost.Asio docs.
## Asio
I tried boost asio, and concluded it was broken, trying to do stuff that cannot be done,
and hide stuff that cannot be hidden in abstractions that leak horribly.
But Asio by itself (comes with MSYS2) might work.
## Asynch Database access
MySQL 5.7 supports [X Plugin / X Protocol, which allows asynchronous query execution and NoSQL But X devapi was created to support node.js and stuff. The basic idea is that you send text messages to mysql on a certain port, and asynchronously get text messages back, in google protobuffs, in php, JavaScript, or sql. No one has bothered to create a C++ wrapper for this, it being primarily designed for php or node.js](https://dev.mysql.com/doc/refman/5.7/en/document-store-setting-up.html)
@ -1791,7 +1863,7 @@ Javascript is a great language, and has a vast ecosystem of tools, but
it is controlled from top to bottom by our enemies, and using it is
inherently insecure.
It consists of a string (which is implemented under the hood as a copy on
Tcl consists of a string (which is implemented under the hood as a copy on
write rope, with some substrings of the rope actually being run time typed
C++ types that can be serialized and deserialized to strings) and a name
table, one name table per interpreter, and at least one interpreter per
@ -1869,14 +1941,50 @@ from wide character variants and locale variants. (We don't want locale
variants, they are obsolete. The whole world is switching to UTF, but
our software and operating environments lag)
Locales still matter in case insensitive compare, collation order,
canonicalization of utf-8 strings, and a rat's nest of issues,
which linux and sqlite avoid by doing binary compares, and if they cannot
avoid capitalization issues, only considering A-Z to be capitals.
If you tell sqlite to incorporate the ICU library, sqlite will attempt to
do case lowering and collation for all of utf-8 - which strikes me
as something that cannot really be done, and I am not at all sure how
it will interact with wxWidgets attempting to do the same thing.
What happens is that operations become locale dependent. It will
have a different view of what characters are equivalent in different
places. And changing the locale on a database will break an index or
table that has a non binary collation order. Which probably will not
matter much, because we are likely to have few entries that differ only
in capitalization. The sql results will be wrong, but the database will
not crash, and when we have a lot of entries that are affected by non latin
capitalization rules, the data is probably going to be viewed only in that
locale. But any collation order that is global to all parties on the blockchain
has to be latin or binary.
wxWidgets does *not* include the full unicode library, so cannot do this
stuff. But sqlite provides some C string functions that are guaranteed to
do whatever it does, and if you include the ICU library it attempts
to handle capitalization on the entire unicode set\
`int sqlite3_stricmp(const char *, const char *);`\
`sqlite3_strlike(P,X,E)`\
The ICU library also provides a real regex function on unicode
(`sqlite3_strlike` being the C equivalent of the SQL `LIKE`,
providing a rather truncated fragment of regex capability)
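A minimal sketch of those two functions, on my reading of the sqlite C API (`sqlite3_strlike` returns zero on a match, like `strcmp`):

```cpp
// Hedged sketch: locale independent comparisons as sqlite does them
// by default - only A-Z fold, so results are the same everywhere.
#include <sqlite3.h>
#include <cstdio>

int main() {
    // sqlite3_stricmp folds only A-Z, never accented or Greek letters
    if (sqlite3_stricmp("Alice", "ALICE") == 0)
        std::puts("equal under ASCII-only case folding");

    // C equivalent of SQL LIKE; third argument is the escape char (0 = none)
    if (sqlite3_strlike("al%", "Alice", 0) == 0)
        std::puts("matches LIKE 'al%'");
}
```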
Pretty sure the wxWidgets regex does something unwanted on unicode.
`wxString::ToUTF8()` and `wxString::FromUTF8()` do what you would expect.
`wxString::c_str()` does something too clever by half.
On Visual Studio, you need to set your source files to have a BOM, so that Visual
Studio knows that they are UTF8, and you need to set the compiler environment in
Visual Studio to UTF8 with `/Zc:__cplusplus /utf-8 %(AdditionalOptions)`.
And you need to set the run time environment of the program to UTF8
with a manifest.
with a manifest. Not at all sure how CodeLite will handle manifests,
but there is a CodeLite build that does handle utf-8, presumably with
a manifest. It does not do it in the standard build on Windows.
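A quick way to check the compiler side of this, on the assumption that `/utf-8` sets both the source and execution character sets to UTF8, so narrow literals come out as UTF8 bytes:

```cpp
// Hedged sketch: under /utf-8 this narrow literal is stored as UTF8,
// so the two-byte é makes the string six bytes long, not five.
#include <cstdio>
#include <cstring>

int main() {
    const char* s = "héllo";
    std::printf("%zu bytes\n", std::strlen(s));  // expect 6, not 5
}
```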
You will need to place all UTF8 string literals and string constants in a
resource file, which you will use for translated versions.
@ -1907,9 +2015,6 @@ way of the future, but not the way of the present. Still a work in progress
Does not build under Windows. Windows now provides UTF8 entries to all
its system functions, which should make it easy.
wxWidgets provides `wxRegEx` which, because wxWidgets provides index
by entity, should just work. Eventually. Maybe the next release.
# [UTF8-CPP](http://utfcpp.sourceforge.net/ "UTF-8 with C++ in a Portable Way")
A powerful library for handling UTF8. This somewhat duplicates the

View File

@ -437,11 +437,23 @@ A class can be explicitly defined to take aggregate initialization
}
}
but that does not make it of aggregate type. Aggregate type has *no*
constructors except default and deleted constructors
but that does not make it of aggregate type.
Aggregate type has *no* constructors
except default and deleted constructors
# functional programming
A lambda is a nameless value of a nameless class that is a
functor, which is to say, has `operator()` defined.
But, of course, you can get the class with `decltype`
and assign that nameless value to an `auto` variable,
or stash it on the heap with `new`,
or in preallocated memory with placement `new`.
But if you are doing all that, might as well explicitly define a
named functor class.
To construct a lambda in the heap:
auto p = new auto([a,b,c](){})
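A minimal sketch of the three storage options just listed:

```cpp
// Hedged sketch: a lambda kept in an auto variable, copied to the
// heap with new auto, and constructed in preallocated storage with
// placement new.
#include <new>

int main() {
    int a = 1, b = 2, c = 3;
    auto f = [a, b, c] { return a + b + c; };            // auto variable

    auto p = new auto([a, b, c] { return a + b + c; }); // on the heap
    int r1 = (*p)();
    delete p;

    using F = decltype(f);
    alignas(F) unsigned char buf[sizeof(F)];             // preallocated memory
    F* q = new (buf) F(f);                               // placement new
    int r2 = (*q)();
    q->~F();                                             // manual destruction
    return r1 + r2;
}
```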
@ -475,8 +487,8 @@ going to have to introduce a compile time name, easier to do it as an
old fashioned function, method, or functor, as a method of a class that
is very possibly pod.
If we are sticking a lambda around to be called later, might copy it by
value into a templated class, or might put it on the heap.
If we are sticking a lambda around to be called later, might copy
it by value into a templated class, or might put it on the heap.
auto bar = []() {return 5;};
@ -522,7 +534,7 @@ lambdas and functors, but are slow because of dynamic allocation
C++ does not play well with functional programming. Most of the time you
can do what you want with lambdas and functors, using a pod class that
defines operator(\...)
defines `operator(...)`
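A minimal sketch of that style: a pod-ish class defining `operator()`, passed to a template so there is no dynamic allocation (names here are illustrative):

```cpp
// Hedged sketch: an old fashioned named functor doing what a lambda
// would, usable with templates and costing no heap allocation.
struct AddK {
    int k;                                          // plain pod state
    int operator()(int x) const { return x + k; }
};

template <typename F>
int apply_twice(F f, int x) { return f(f(x)); }     // statically dispatched

int main() {
    AddK add3{3};                 // aggregate initialization
    return apply_twice(add3, 10); // 16
}
```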
# auto and decltype(variable)

View File

@ -1,7 +1,8 @@
<div class="button-bar">
<a href="./manifesto/vision.html">vision</a>
<a href="./manifesto/scalability.html">scalability</a>
<a href="./manifesto/social_networking.html">social networking</a>
<a href="./manifesto/Revelation.html">revelation</a>
<a href="../manifesto/vision.html">vision</a>
<a href="../manifesto/scalability.html">scalability</a>
<a href="../manifesto/social_networking.html">social networking</a>
<a href="../manifesto/Revelation.html">revelation</a>
</div>

View File

@ -395,7 +395,7 @@ give the reliable broadcast channel any substantial information about the
amount of the transaction, and who the parties to the transaction are, but the
node of the channel sees IP addresses, and this could frequently be used to
reconstruct a pretty good guess about who is transacting with whom and why.
As we see with Monaro, a partial information leak can be put together with
As we see with Monero, a partial information leak can be put together with
lots of other sources of information to reconstruct a very large information
leak.

View File

@ -38,7 +38,7 @@ text-align:center;">May Scale of monetary hardness </td>
</tr>
<tr>
<td class="center"><b>3</b></td>
<td>Major crypto currencies, such as Bitcoin and Monaro</td>
<td>Major crypto currencies, such as Bitcoin and Monero</td>
</tr>
<tr>
<td class="center"><b>4</b></td>

View File

@ -8,7 +8,7 @@ We need blockchain crypto currency supporting pseudonymous reputations and end t
We also need one whose consensus protocol is more resistant to government leverage. Ethereum is alarmingly vulnerable to pressure from our enemies.
The trouble with all existing blockchain based currencies is that the metadata relating to the transaction is transmitted by some other, ad hoc, mechanism, usually highly insecure, and this metadata necessarily links transaction outputs to identities, albeit in Monaro it only links a single transaction output, rather than a network of transactions.
The trouble with all existing blockchain based currencies is that the metadata relating to the transaction is transmitted by some other, ad hoc, mechanism, usually highly insecure, and this metadata necessarily links transaction outputs to identities, albeit in Monero it only links a single transaction output, rather than a network of transactions.
Thus we need a pseudonymous, not an anonymous, crypto currency.

View File

@ -4,11 +4,11 @@ title: Crypto currency
This discussion is obsoleted and outdated by the latest advances in recursive snarks.
The objective is to implement the blockchain in a way that scales to one hundred thousand transactions per second, so that it can replace the dollar, while being less centralized than bitcoin currently is, though not as decentralized as purists would like, and preserving privacy better than bitcoin now does, though not as well as Monaro does. It is a bitcoin with minor fixes to privacy and centralization, major fixes to client host trust, and major fixes to scaling.
The objective is to implement the blockchain in a way that scales to one hundred thousand transactions per second, so that it can replace the dollar, while being less centralized than bitcoin currently is, though not as decentralized as purists would like, and preserving privacy better than bitcoin now does, though not as well as Monero does. It is a bitcoin with minor fixes to privacy and centralization, major fixes to client host trust, and major fixes to scaling.
The problem of bitcoin clients getting scammed by bitcoin peers will be fixed through Merkle-patricia, which is a well known and already widely deployed fix, though people keep getting scammed due to lack of a planned bitcoin client-host architecture. Bitcoin was never designed to be client host, but it just tends to happen, usually in a way that quite unnecessarily violates privacy, client control, and client safety.
Monaros brilliant and ingenious cryptography makes scaling harder, and all mining based blockchains tend to the same centralization problem as afflicts bitcoin. Getting decisions quickly about a big pile of data necessarily involves a fair bit of centralization, but the Paxos proof of share protocol means the center can move at the speed of light in fiber, and from time to time, will do so, sometimes to locations unknown and not easy to find. We cannot avoid having a center, but we can make the center ephemeral, and we can make it so that not everyone, or even all peers, know the network address of the processes holding the secrets that signed the most recent block.
Monero's brilliant and ingenious cryptography makes scaling harder, and all mining based blockchains tend to the same centralization problem as afflicts bitcoin. Getting decisions quickly about a big pile of data necessarily involves a fair bit of centralization, but the Paxos proof of share protocol means the center can move at the speed of light in fiber, and from time to time, will do so, sometimes to locations unknown and not easy to find. We cannot avoid having a center, but we can make the center ephemeral, and we can make it so that not everyone, or even all peers, know the network address of the processes holding the secrets that signed the most recent block.
Scaling is accomplished by a client host hierarchy, where each host has many clients, and each host is a blockchain peer.
@ -16,11 +16,19 @@ A hundred or so big peers, who do not trust each other, each manage a copy of th
The latest block is signed by peers representing a majority of the shares, which is likely to be considerably less than a hundred or so peers.
Peer share is delegated from clients probably a small minority of big clients not all clients will delegate. Delegation makes privacy more complicated and leakier. Delegations will be infrequent you can delegate the stake held by an offline cold wallet, whose secret lives in pencil on paper in a cardboard file in a safe, but a peer to which the stake was delegated has to have its secret on line.
Peer share is delegated from clients - probably a small minority of big clients -
not all clients will delegate. Delegation makes privacy more complicated and leakier.
Delegations will be infrequent - you can delegate the shares held by an offline cold wallet,
whose secret lives in pencil on paper in a cardboard file in a safe,
but a peer to which the shares were delegated has to have its secret on line.
Each peer's copy of the blockchain is managed, within a rack on the premises of a peer, by a hundred or so shards. The shards trust each other, but that trust does not extend outside the rack, which is probably in a room with a lock on the door and a security camera watching the rack.
Most people transacting on the blockchain are clients of a peer. The blockchain is in the form of a sharded Merkle-patricia tree, hence the clients do not have to trust their host they can verify any small fact about the blockchain in that they can verify that peers reflecting a majority of stake assert that so and so is true, and each client can verify that the peers have not rewritten the past.
Most people transacting on the blockchain are clients of a peer. The blockchain
is in the form of a sharded Merkle-patricia tree, hence the clients do not
have to trust their host: they can verify any small fact about the blockchain, in
that they can verify that peers reflecting a majority of shares assert that
so and so is true, and each client can verify that the peers have not rewritten the past.
Scale is achieved through the client peer hierarchy, and, within each peer, by sharding the blockchain.
@ -28,15 +36,15 @@ Clients verify those transactions that concern them, but cannot verify that all
In each transaction, each client verifies that the other client is seeing the same history and recent state of the blockchain, and in this sense, the blockchain is a consensus of all clients, albeit that consensus is mediated through a small number of large entities that have a lot of power.
The architecture of power is rather like a corporation, with stake as shares.
The architecture of power is rather like a corporation.
In a corporation the CEO can do anything, except that the board can fire him and
choose a new CEO at any time. The shareholders could in theory fire the
board at any time, but in practice, if less than happy with the board, have
to act by transacting through a small number of big shareholders.
Centralization is inevitable, but in practice, by and large corporations do
an adequate job of pursuing shareholder interests, and when they fail to do
so, as with woke capital, Star Wars, or the great minority mortgage
meltdown, it is usually due to heavy handed state intervention. Googles
so, as with woke capital, Star Wars, or the Great Minority Mortgage
Meltdown, it is usually due to heavy handed state intervention. Google's
board is mighty woke, but in the Damore affair, human resources decided
that they were not woke enough, and in the Soy Wars debacle, the board
was not woke at all but gave power over the Star Wars brand name to wome
@ -46,9 +54,25 @@ have tried. Delegated power representing assets, rather than people, results
in centralized power that, by and large, mostly, pursues the interests of
those assets. Delegated power representing people, not so much.
In bitcoin, power is in the hands of a very small number of very large miners. This is a problem, both in concentration of power, which seems difficult to avoid if making decisions rapidly about very large amounts of data, and in that miner interests differ from stakeholder interests. Miners consume very large amounts of power, so have fixed locations vulnerable to state power. They have generally relocated to places outside the US hegemony, into the Chinese or Russian hegemonies, or the periphery of those hegemonies, but this is not a whole lot of security.
In bitcoin, power is in the hands of a very small number of very large miners.
This is a problem, both in concentration of power, which seems difficult to
avoid if making decisions rapidly about very large amounts of data,
and in that miner interests differ from shareholder interests. Miners
consume very large amounts of power, so have fixed locations vulnerable to state power.
They have generally relocated to places outside the US hegemony,
into the Chinese or Russian hegemonies, or the periphery of those hegemonies,
but this is not a whole lot of security.
proof of share has the advantage that stake is ultimately knowledge of secret keys, and while the state could find the peers representing a majority of stake, they are more mobile than miners, and the state cannot easily find the clients that have delegated stake to one peer, and could easily delegate it to a different peer, the underlying secret likely being offline on pencil and paper in someones safe, and hard to figure out whose safe.
"Proof of stake" was sold as the whales, rather than the miners, controlling the currency, but as implemented by Ether, this is not what happened, rather a secretive cabal controls the currency, so the phrase is damaged. It is used to refer to a very wicked system.
So I will use the term "Proof of Share" to mean the whales actually controlling the currency.
Proof of share has the advantage that shares are ultimately knowledge of secret keys,
and while the state could find the peers representing a majority of shares,
they are more mobile than miners, and the state cannot easily find the clients
that have delegated shares to one peer, and could easily delegate it to a different peer,
the underlying secret likely being offline on pencil and paper in someone's safe,
and hard to figure out whose safe.
Obviously, at full scale we are always going to have immensely more clients than full peers, likely by a factor of hundreds of thousands, but we need to have enough peers, which means we need to reward peers for being peers, for providing the service of storing blockchain data, propagating transactions, verifying the blockchain, and making the data readily available, rather than for the pointless bit crunching and waste of electricity employed by current mining.
@ -58,12 +82,12 @@ The power over the blockchain, and the revenues coming from transaction and stor
Also, at scale, we are going to have to shard, so that a peer is actually a pool of machines, each with a shard of the blockchain, perhaps with all the machines run by one person, perhaps run by a group of people who trust each other, each of whom runs one machine managing one shard of the blockchain.
Rewards, and the decision as to which chain is final, has to go to weight of stake, but also to proof of service to peers, who store and check the blockchain and make it available. For the two to be connected, the peers have to get stake delegated to them by providing services to clients.
Rewards, and the decision as to which chain is final, has to go to weight of shares, but also to proof of service to peers, who store and check the blockchain and make it available. For the two to be connected, the peers have to get shares delegated to them by providing services to clients.
All durable keys should live in client wallets, because they can be secured off the internet. So how do we implement weight of stake, since only peers are sufficiently well connected to actually participate in governance?
All durable keys should live in client wallets, because they can be secured off the internet. So how do we implement weight of shares, since only peers are sufficiently well connected to actually participate in governance?
To solve this problem, stakes are held by client wallets. Stakes that are in the clear get registered with a peer, the registration gets recorded in the blockchain, and the peer gets influence, and to some
extent rewards, proportional to the stake registered with it, conditional on the part it is doing to supply data storage, verification, and bandwidth.
To solve this problem, shares are held by client wallets. Shares that are in the clear get registered with a peer, the registration gets recorded in the blockchain, and the peer gets influence, and to some
extent rewards, proportional to the shares registered with it, conditional on the part it is doing to supply data storage, verification, and bandwidth.
My original plan was to produce a better bitcoin from pairing based
cryptography. But pairing based cryptography is slow. Peers would need a
View File

@ -87,6 +87,7 @@ transaction outputs that you are spending.
Which you do by proving that the inputs are part
of the merkle tree of unspent transaction outputs,
of which the current root of the blockchain is the root hash.
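A minimal sketch of that proof check, with a toy 64-bit hash standing in for whatever hash the chain actually uses:

```cpp
// Hedged sketch: verify a leaf against the root hash via its merkle
// path. hash_pair is a toy stand-in for the chain's real hash.
#include <cstdint>
#include <vector>

using Hash = uint64_t;   // toy 64 bit hash instead of a real 256 bit one

Hash hash_pair(Hash a, Hash b) {
    return a * 1000003u ^ (b + 0x9e3779b97f4a7c15ull);  // toy combiner
}

struct PathStep {
    Hash sibling;          // sibling node at this level
    bool sibling_on_left;  // which side the sibling hangs on
};

// Recompute the root from the leaf upward; the proof succeeds when the
// recomputed value equals the root hash in the current block.
bool verify(Hash leaf, const std::vector<PathStep>& path, Hash root) {
    Hash h = leaf;
    for (const auto& s : path)
        h = s.sibling_on_left ? hash_pair(s.sibling, h)
                              : hash_pair(h, s.sibling);
    return h == root;
}
```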
## structs
A struct is simply some binary data laid out in well known and agreed format.
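For instance (field names and sizes here are purely illustrative):

```cpp
// Hedged sketch: a struct as binary data in a well known, agreed
// format - packed so the in-memory layout is the wire layout.
#include <cstdint>

#pragma pack(push, 1)                // no padding: layout is the format
struct TransactionOutput {
    uint64_t value;                  // amount, little endian by convention
    uint8_t  pubkey[32];             // hypothetical 32 byte public key
};
#pragma pack(pop)

static_assert(sizeof(TransactionOutput) == 40, "wire format is 40 bytes");
```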

View File

@ -455,185 +455,9 @@ way hash, so are not easily linked to who is posting in the feed.
### Replacing Kademlia
[social distance metric]:recognizing_categories_and_instances.html#Kademlia
{target="_blank"}
This design has been deleted, because its scaling properties turned out to be unexpectedly bad.
I will describe the Kademlia distributed hash table algorithm not in the
way that it is normally described and defined, but in such a way that we
can easily replace its metric by [social distance metric], assuming that we
can construct a suitable metric, which reflects what feeds a given host is
following, and what network addresses it knows and the feeds they are
following, a quantity over which a distance can be found that reflects how
close a peer is to an unstable network address, or how likely it is to know a peer that is
likely to know a peer that knows an unstable network address.
A distributed hash table works by each peer on the network maintaining a
large number of live and active connections to computers such that the
distribution of connections to computers distant by the distributed hash
table metric is approximately uniform by distance, which distance is for
Kademlia the $\log_2$ of the exclusive-or between his hash and your hash.
And when you want to connect to an arbitrary computer, you asked the
computers that are nearest in the space to the target for their connections
that are closest to the target. And then you connect to those, and ask the
same question again.
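A minimal sketch of that metric, with 64-bit IDs for brevity:

```cpp
// Hedged sketch: Kademlia distance as the bit position of the highest
// differing bit, i.e. floor(log2(a xor b)).
#include <cstdint>

int kademlia_distance(uint64_t a, uint64_t b) {
    uint64_t d = a ^ b;
    if (d == 0) return -1;                  // same ID
    int bit = 63;
    while (!(d & (1ULL << bit))) --bit;     // highest set bit of the xor
    return bit;
}
```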
This works if each computer has approximately the same number of connections
close to it by the metric as distant from it by that metric. So it will be
connected to almost all of the computers that are nearby to it by that metric.
In the course of this operation, you acquire more and more active
connections, which you purge from time to time to keep the total number
of connections reasonable and the distribution approximately uniform by the
metric of distance used.
The reason that the Kademlia distributed hash table cannot work in the
face of enemy action is that the shills who want to prevent something
from being found create a hundred entries with a hash close to their target
by Kademlia distance, and then when your search brings you close to
target, it brings you to a shill, who misdirects you. Using social network
distance resists this attack.
The messages of the people you are following are likely to be in a
relatively small number of repositories, even if the total number of
repositories out there is enormous and the number of hashes in each
repository is enormous, so this algorithm and data structure will scale.
The responses to a thread that they have approved, by people you are not
following, will be commits in that repository: by pushing their latest
response to that thread to a public repository, the responders do the
equivalent of a git commit and push to that repository.
Each repository contains all the material the poster has approved, resulting
in considerable duplication, but not enormous duplication: approved links and
reply-to links get duplicated, but not every spammer, scammer, and
shill in the world can fill your feed with garbage.
### Kademlia in social space
The vector of an identity is $+1$ for each one bit, and $-1$ for each zero bit.
We don't use the entire two hundred fifty six dimensional vector, just
enough of it that the truncated vector of every identity that anyone might
be tracking has a very high probability of being approximately orthogonal
to the truncated vector of every other identity.
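A minimal sketch, with the truncation length picked arbitrarily at 64 dimensions:

```cpp
// Hedged sketch: map the leading bits of an identity to a +1/-1
// vector; dot products of unrelated identities then cluster near
// zero, which is the approximate orthogonality relied on above.
#include <array>
#include <cstdint>

constexpr int DIMS = 64;                       // assumed truncation length

using IdVec = std::array<int8_t, DIMS>;

IdVec id_vector(const std::array<uint8_t, 32>& id) {
    IdVec v{};
    for (int i = 0; i < DIMS; ++i)
        v[i] = ((id[i / 8] >> (i % 8)) & 1) ? 1 : -1;
    return v;
}

int dot(const IdVec& a, const IdVec& b) {
    int s = 0;
    for (int i = 0; i < DIMS; ++i) s += a[i] * b[i];
    return s;   // ~0 for unrelated identities, DIMS for identical ones
}
```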
We do not have, and do not need, an exact consensus on how much of the
vector to actually use, but everyone needs to use roughly the same amount
as everyone else. The amount is adjusted according to what is, over time,
needed, by each identity adjusting according to circumstances, with the
result that over time the consensus adjusts to what is needed.
Each party indicates what entities he can provide a direct link to by
publishing the sum of the vectors of the parties he can link to - and also
the sum of their sums, and also the sum of their ... to as many levels deep as
turns out to be needed in practice, which is likely to be two or three such
vector sums, maybe four or five. What is needed will depend on the
pattern of tracking that people engage in in practice.
If everyone behind a firewall or with an unstable network address arranges
to notify a well known peer with stable network address whenever his
address changes, and that peer, as part of the arrangement, includes him in
that peer's sum vector, then, since the number of well known peers with stable
network address offering this service is not enormously large, they track
each other, and everyone tracks some of them, we only need the sum and
the sum of sums.
When someone is looking to find how to connect to an identity, he goes
through the entities he can connect to, and looks at the dot product of
their sum vectors with the target identity vector.
He contacts the closest entity, or a close entity, and if that does not work
out, contacts another. The closest entity will likely be able to contact
the target, or contact an entity more likely to be able to contact the target.
* the identity vector represents the public key of a peer
* the sum vector represents what identities a peer thinks he has valid connection information for.
* the sum of sum vectors indicate what identities that he thinks he can connect to think that they can connect to.
* the sum of the sum of the sum vectors ...
A vector that provides the paths to connect to a billion entities, each of
them redundantly through a thousand different paths, is still sixty or so
thirty two bit signed integers, distributed in a normal distribution with a
variance of a million or so, but everyone has to store quite a lot of such
vectors. Small devices such as phones can get away with tracking a small
number of such integers, at the cost of needing more lookups, hence not being
very useful for other people to track for connection information.
To prevent hostile parties from jamming the network by registering
identities that closely approximate identities that they do not want people
to be able to look up, we need the system to work in such a way that
identities that lots of people want to look up tend to be heavily over
represented in sum of sums vectors relative to those that no one wants to
look up. If you repeatedly provide lookup services for a certain entity,
you should track the entity that had the last stable network address on the
path that proved successful to the target entity, so that peers that
provide useful tracking information are over represented, and entities that
provide useless tracking information are under represented.
If an entity makes publicly available network address information for an
identity whose vector is an improbably good approximation to an existing
widely looked up vector, a sybil attack is under way, and needs to be
ignored.
To be efficient at very large scale, the network should contain a relatively
small number of large well connected devices each of which tracks the
tracking information of large number of other such computers, and a large
number of smaller, less well connected devices, that track their friends and
acquaintances, and also track well connected devices. Big fanout on the
interior vertices, smaller fanout on the exterior vertices, stable identities
on all devices, moderately stable network addresses on the interior vertices,
possibly unstable network addresses on the exterior vertices.
If we have a thousand identities that are making public the information
needed to make connection to them, and everyone tracks all the peers that
provide third party look up service, we need only the first sum, and only
about twenty dimensions.
But if everyone attempts to track all the connection information
for all peers that provide third party lookup services, there are soon going
to be a whole lot of shill, entryist, and spammer peers purporting to provide
such services, whereupon we will need white lists, grey lists, and human
judgement, and not everyone will track all peers who are providing third
party lookup services, whereupon we need the first two sums.
In that case a random peer searching for connection information to another
random peer first looks through those for which he has good connection
information, and does not find the target. Then he looks for someone
connected to the target, and may not find him; then looks for someone
connected to someone connected to the target and, assuming that most
genuine peers providing tracking information are tracking most other
peers providing genuine tracking information, and the peer doing the
search has the information for a fair number of peers providing genuine
tracking information, will find him.
Suppose there are a billion peers for which tracking information exists. In
that case, we need the first seventy or so dimensions, and possibly one
more level of indirection in the lookup (the sum of the sum of the sum of
vectors being tracked). Suppose a trillion peers, then about the first eighty
dimensions, and possibly one more level of indirection in the lookup.
That is a quite large amount of data, but if who is tracking whom is stable,
even if the network addresses are unstable, updates are infrequent and small.
If everyone tracks ten thousand identities, and we have a billion identities
whose network address is being made public, and a million always up peers
with fairly stable network addresses, each of whom tracks one thousand
unstable network addresses and several thousand other peers who also
track large numbers of unstable addresses, then we need about fifty
dimensions and two sum vectors for each entity being tracked, about a
million integers, total -- too big to be downloaded in full every time, but
not a problem if downloaded in small updates, or downloaded in full
infrequently.
But suppose no one specializes in tracking unstable network addresses.
If your network address is unstable, you only provide updates to those
following your feed, and if you have a lot of followers, you have to get a
stable network address with a stable open port so that you do not have to
update them all the time. Then our list of identities whose connection
information we track will be considerably smaller, but our level of
indirection considerably deeper - possibly needing to go six or so levels deep in the sum of
the sum of ... sum of identity vectors.
I am now writing up a better design.
## Private messaging
@ -707,7 +531,9 @@ private room.
The infrastructure proposed in [Anonymous Multi-Hop Locks] for lightning
network transactions is also private room infrastructure, so we should
implement private rooms on that model.
implement private rooms on that model. Indeed we have to, in order to implement
a gateway between our crypto currency and bitcoin lightning, without which
we cannot have a liquidity event for our startup.
In order to do money over a private room, you need a `reliable broadcast channel`,
so that Bob cannot organize a private room with Ann and Carol,
@ -882,8 +708,8 @@ regularly sends them the latest keys.
# Blockchains
You need a cryptocurrency like Bitcoin, but Bitcoin is too readily traceable.
Wasabi wallet has a cure for bitcoin traceability, but it is not easy to use
that Wasabi capability, and most people will not, despite the very nice user
Sparrow wallet has a cure for bitcoin traceability, but it is not easy to use
that Sparrow capability, and most people will not, despite the very nice user
interface. The lightning network over bitcoin could fix those problems, and
also overcome Bitcoin's scaling limit, but I don't think much of the current
implementation of the lightning network. The latest bitcoin upgrade,
@ -930,7 +756,7 @@ cannot be integrated into the buyer's bookkeeping.
The complexity and difficulty arise because we are using one rather
insecure channel for the metadata about the transaction, and a different
channel for the transaction. (This is also a gaping and gigantic security
hole, even if you are using Monaro.) For a truly usable crypto currency
hole, even if you are using Monero.) For a truly usable crypto currency
payment mechanism, transactions and metadata have to go through the
same channel, which means human to human communication through
wallets - means that you need to be able to send human to human
@ -1089,7 +915,7 @@ pressure. We start by enabling discussion groups, which will be an
unprofitable exercise, but the big money is in enabling business.
Discussion groups are a necessary first step.
## Cold Start Problem and Metcalf's law.
## Cold Start Problem and Metcalfe's law
Metcalfe's law: The usefulness of a network to any one person
considering joining it depends on how many people have already joined.
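In quantitative form: a network of $n$ members contains $n(n-1)/2$ possible pairwise connections, so its usefulness grows roughly as $n^2$ while its cost grows roughly as $n$, which is why a small new network starts out on the wrong side of the curve.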
@ -1099,50 +925,67 @@ Cold Start Problem: If no one is on the network, no one wants to be on it.
The value of joining a network depends on the number of other people already using that network.
So if there are big competing networks, no one wants to join the new network.
To solve the cold start problem, you have to start by solving a
very specific problem for a very specific set of people,
and expand from there.
No one is going to want to be the first to start a Sovcorp. And until people do,
programmers are not going to get paid except by angel investors expecting that
people will start Sovcorps. So to get a foot in the door, we have to cherry pick
and snipe little areas where the big network is failing, and once we have a foot
in the door, then we can get rolling.
### failure of previous altcoins to succeed in this strategy
## elevator pitch
Grab a privacy niche where Monero cannot operate, because of its inherent
lack of smart contracts and social networking capability, due to the nature of its
privacy technology.
Then take over Monero's niche by being a better Monero.
Then take over Bitcoin and Nostr's niche by being a better Bitcoin and a better Nostr,
because recursive snarks can do lots of things, such as smart contracts, better than they can.
## failure of previous altcoins to solve Cold Start
During the last decade, numberless altcoins have attempted to compete with
Bitcoin, and they all got crushed, because the differences between them and
Bitcoin really did not matter that much.
Bitcoin really did not matter that much.
They failed to differentiate themselves from bitcoin. They could
not find a niche in which to start.
Ether did OK, because they supported smart contracts and bitcoin did not,
but now that bitcoin has some limited smart contract capability,
they are going down too, because bitcoin has the big network advantage.
Ether did OK, because they supported smart contracts and Bitcoin did not,
but now that Bitcoin has some limited smart contract capability,
they are going down too, because Bitcoin has the big network advantage.
Being the biggest, it is far more convertible into goods and services
than any other. Which means ether is doomed now that bitcoin is
than any other. Which means Ether is doomed now that Bitcoin is
doing smart contracts.
Monero did OK, because it is the leading privacy coin. It has a niche, but
cannot break out of the not very large niche. Because its privacy mechanism means it is
a worse bitcoin than bitcoin in other respects.
a worse Bitcoin than Bitcoin in other respects.
And the cold start problem means we cannot directly take over that niche either.
But our proposed privacy mechanism means we have a tech advantage over both
Bitcoin and Monero - better contract capability than Bitcoin or Ether, because
a snark can prove fulfillment of a contract that without burdening the network
a snark can prove fulfillment of a contract without burdening the network
with a costly proof of fulfillment, and without revealing everything to the
network, and without the rest of the network needing to know what that
contract is or be able to evaluate it. Because of its privacy mechanism,
network, and without the rest of the network needing to know that there was
a contract, what that contract is, nor to be able to evaluate it.
Because of its privacy mechanism,
Monero cannot do contracts, which prevents atomic exchange between Monero
and Bitcoin, and prevents Monero from doing a lightning network that would
enable fast atomic exchange between itself and other networks.
enable fast atomic exchange between itself and Bitcoin lightning.
So if we get a niche, get differentiation from Monero and Bitcoin,
we can then break out of that niche and eat Monero, being a better
privacy coin, a better Monero, and from the Monero niche eat Bitcoin,
being, unlike Monero, a better Bitcoin.
## Solving the cold start problem
### Bitmessage
The lowest hanging fruit of all (because, unfortunately, there is
@ -1154,16 +997,17 @@ on discussion groups, but is widely used
because it does not reveal the sender or recipient's IP address.
In particular, it is widely used for crypto currency payments.
So next step is to integrate payment mechanisms, which brings us
a little closer to the faint smell of money.
So next step, after capturing its abandoned market niche,
is to integrate payment mechanisms, in particular to integrate
core lightning, which brings us a little closer to the faint smell of money.
### Integrating money.
### Integrating money
So we create a currency. But because it will be created on sovcorp model
So we create our own currency. But because it will be created on the sovcorp model
people cannot obtain it by proof of work - they have to buy it. Which
will require gateways between bitcoin lightning and the currency supported
by the network, supporting atomic lightning exchange
and gateways between the conversations on the network and nostr.
and gateways between the conversations on the privacy network and the conversations on nostr.
# Development sequence
@ -1218,7 +1062,7 @@ But we can do the important things, which are social media and blockchain.
With social networking on top of this protocol, we can then do blockchain
and crypto currency. We then do trades between crypto currencies on the
blockchain, bypassing the regulated quasi state exchanges, which trades
are safe provided a majority of the stake of peers on the blockchain that is
are safe provided a majority of the shares of peers on the blockchain that is
held by peers holding two peer wallets, one in each crypto currency being
exchanged, are honest.
@ -1248,7 +1092,7 @@ blockchain.
Information wants to be free, but programmers need to be paid. We want
the currency, the blockdag, to be able to function as a corporation so that it
can pay the developers to improve the software in ways likely to add value
to the stake.
to the shares.
# Many sovereign corporations on the blockchain
@ -1371,9 +1215,9 @@ of truckers who each owned their own truck. The coup was in large
State incorporated corporations derive their corporateness from the
authority of the sovereign, but a proof of share currency derives its
corporateness from the cryptographically discovered consensus that gives
each stakeholder incentive to go along with the cryptographically
each shareholder incentive to go along with the cryptographically
discovered consensus because everyone else is going with the consensus,
each stakeholder playing by the rules because all the other stakeholders
each shareholder playing by the rules because all the other shareholders
play by those rules.
Such a corporation is sovereign.

View File

@ -3,7 +3,7 @@ title: >-
Sox Accounting
...
Accounting and bookkeeping is failing in an increasingly low trust world
of ever less trusting and ever less untrustworthy elites, and Sarbanes-Oxley
of ever less trusting and ever less trustworthy elites, and Sarbanes-Oxley
accounting (Sox) is an evil product of this evil failure.
Enron was a ponzi scam that pretended to be in the business of buying and
@ -53,7 +53,7 @@ And suddenly people stopped being willing to pay Enron cash on the
barrelhead for goods, suddenly stopped being willing to sell Enron goods
on credit. Suddenly Enron could no longer pay its employees, nor its
landlord. Its employees stopped turning up, its landlord chucked their
furniture out into the street.
stuff out into the street.
Problem solved.
@ -83,7 +83,7 @@ disreputable accounting students and the investors who had hired them.
So there was in practice a great deal of overlap between very respectable
accountants asking government to force business to use their services and
naive idiots asking government to force business and accountants to
naïve idiots asking government to force business and accountants to
behave in a more trustworthy manner.
And so, we got a huge bundle of accounting regulation, Sarbanes-Oxley.
@ -112,6 +112,94 @@ feet, the things in your hands, and what is before your eyes, with the result
that they tend to track the creation of holiness and ritual purity, rather than
the creation of value.
From the seventeenth century to the nineteenth, bookkeeping was explicitly Christian.
The immutable append only journal reflected God's creation, the balance of the books
was an earthly reflection of the divine scales of justice and the symmetry of God's creation.
When the dust settled over the Great Minority Mortgage Meltdown it became apparent that
the books of the financial entities involved had little connection to God's creation.
The trouble with postmodern accounting is that what goes into the asset column,
what goes into the liability column, and what goes into the equity column
bears little relationship to what is actually an asset, a liability, or equity,
little relationship to God's creation.
(Modernity begins in the seventeenth century, with joint stock
publicly traded limited liability corporation, the industrial revolution,
and the scientific revolution. Postmodernity is practices radically different from,
and fundamentally opposed to, the principles of that era. Such as detaching
the columns of the books from the real things that correspond to those names. If
science is done by consensus, rather than by the scientific method described in "The
Skeptical Chymist", it is postmodern science, and if the entries in the books do not
actually correspond to real liability, real assets, and real equity,
it is postmodern bookkeeping. Postmodern science is failing to produce the results
that modern science produced, and postmodern bookkeeping is failing
to produce the results that modern bookkeeping produced.)
The state has been attacking the cohesion of the corporation just as it has been attacking
the cohesion of the family. Modern corporate capitalism is incompatible with SoX,
because if your books are detached from reality,
being lies that hostile outsiders demand that you believe,
the corporation has lost that which makes it one person.
When the books are a lie imposed on you by hostile outsiders you lose cohesion around profit,
making things, buying, selling, and satisfying the customer,
and instead substitute cohesion around gay sex, minority representation, and abortion rights.
If the names of the columns do not mean what they say, people do not care about the effects
of their actions on those columns.
Notice Musk's tactic of making each of his enterprises a moral crusade,
and also of giving them a legal form that evades SoX accounting. Which legal form does
not permit their shares to be publicly traded.
To believe the corporation is real, for the many people of the corporation to believe the
corporation is one person, the books have to reflect reality,
and reality has to have moral authority.
The Christian doctrine of the Logos gives reality moral authority,
since reality is a manifestation of will of God.
In the Enron scandal, the books were misleading to the tune of about seventy billion dollars,
in that they creatively moved their debts incurred by buying things on credit off the books,
and that was the justification for SoX accounting.
Which is very effective in preventing people from moving debts off the books.
In the Great Minority Mortgage Meltdown, the SoX books were misleading to the tune
of about seven *trillion* dollars, about one hundred times as much money as the Enron scandal,
largely due to the fact that the people responsible for paying the mortgages could not be found or identified,
frequently had about as much id and evidence of actual existence as a Democratic party voter,
and many of them probably did not exist, and many of the properties were not only grossly overvalued,
but pledged to multiple mortgages, or were impossible to identify,
and some of them may not have existed either. It is usually said that the losses in the
Great Minority Mortgage Meltdown were the result of housing prices being unrealistically inflated,
but they were unrealistically inflated because people who, if they existed at all,
had no income, job, or assets, were buying mansions at inflated prices.
Up to 2005, it looks like poor people who actually existed were buying
mansions they could not possibly afford at market prices, but market prices were artificially inflated
because of this artificial demand. From 2005 to 2007, it looks more like people who did not actually exist
were buying houses at prices far above market price and market prices were irrelevant.
And that the alleged sale price of the property underlying
the mortgage had been inflated far above realizable value,
and often far above even what the prices had been at the peak
of the bubble in 2005 was not the only problem. The creditors frequently
had strange difficulty actually finding the houses.
A person who actually exists and actually wants the house is going to sign the papers at a location
near the house. Towards the end, most of the mortgages were flipped, and the alleged flippers signed
the papers in big financial centres far from the alleged location of the alleged houses.
The mortgagees did not demand id, because id racist. Much as demanding id is racist when
you want to ask voters for id, but not when you want entry to a Democratic party gathering.
Enron's books implied that non-existent entities were responsible for
paying the debts that they had incurred
through buying stuff on credit, thus moving the debt off the books. In the Great Minority Mortgage Meltdown,
the banks claimed that people who, if they existed at all, could not possibly pay,
owed them enormous amounts of money.
Sox has prevented businesses from moving real debts off the books. But it failed spectacularly
at preventing businesses from moving unreal debts onto the books. In the Great Minority Mortgage
Meltdown, the books were misleading because of malice and dishonesty, but even if you are doing your
best to make SoX books track the real condition of your business, they don't. They track a paper
reality disconnected from reality and thus from actual value.
But the biggest problem with Sarbanes-Oxley is not that it has failed to
force publicly traded companies and their accountants to act in
a trustworthy manner, but that it has forced them to act in an untrustworthy
@ -124,7 +212,7 @@ that when it blesses the books it has prepared as Sox compliant, the
regulators will pretend to believe. Which is great for the very respectable
accountants, who get paid a great deal of money, and great for the
regulators, who get paid off, but is terrible for businesses who pay a great
deal of money and do not in fact get books that accurately tell
deal of money and do not in fact get books that accurately tell
management how the business is doing, and considerably worse for
startups trying to go public, since the potential investors know that the
books do not accurately tell the investors how the business is doing.

View File

@ -2,6 +2,9 @@
title:
The Usury problem
...
# Usury in Christianity
The Christian concept of usury presupposes that capitalism was divinely
ordained in the fall, and that we are commanded to use capital productively
and wisely.
@ -41,6 +44,8 @@ production.
And if things don't work out, he is free and clear if he returns the cattle or
the house.
# Usury in Islam
The Islamic ban on usury is similar to the Christian ban, but their frame is
rather that the lender shares the risk, rather than that the lender is charging
rental on a productive property. The dark enlightenment frame is game
@ -59,6 +64,8 @@ collect interest on the mortgage, but then there is a housing slump, the
mortgagor returns the house in good order and condition, but in the middle of a
housing slump, the mortgagee is SOL under the old Christian laws, and the mortgagor, though now houseless, is free and clear of debt.
# How everything went to hell
Well, lenders did not like that. They wanted their money even if things did
not work out, and they wanted to be able to lend money to someone to throw a
big party, someone who probably did not understand the concept of compound
@ -68,6 +75,8 @@ And the Jews of course operated by different rules, and Kings would borrow
from the Jews, and then give the Jews exemption from the Christian laws, so
that they could lend money against the person to Christians.
## Term Transformation
And then, things got messier with fractional reserve banking.
People want to lend short and borrow long, borrow money with fixed schedule
@ -94,6 +103,8 @@ start using banknotes as money, bits of paper backed by claims against real
property, if the lending is more or less Christian, and claims against real
people, if it is not all that Christian.
## Leading to fiat money
And then, one day it rains on everyone, and everyone hits up the bank at
the same time for the money they had stashed away for a rainy day, and
you have a financial crisis.
@ -108,6 +119,8 @@ government notes backed by claims against taxpayers.
And here we are. Jewish rules, fiat money.
# Cryptocurrency and Usury
Well, how does cryptocurrency address this? It is backed by absolutely
nothing at all.

View File

@ -55,10 +55,10 @@ time to flee bitcoin.](http://reaction.la/security/bitcoin_vulnerable_to_currenc
Blockchain analysis is a big problem, though scaling is rapidly becoming a
bigger problem.
Monaro and other privacy currencies solve the privacy problem by padding
Monero and other privacy currencies solve the privacy problem by padding
the blockchain with chaff, to defeat blockchain analysis, but this
greatly worsens the scaling problem. If Bitcoin comes under the blood
diamonds attack, Monaro will be the next big crypto currency, but Monaro
diamonds attack, Monero will be the next big crypto currency, but Monero
has an even worse scaling problem than Bitcoin.
The solution to privacy is not to put more data on the blockchain, but less -
@ -221,7 +221,7 @@ Privacy, security, efficiency, and scalability are mutually opposed if one at
The most efficient way is obviously a single central authority deciding everything, which is not very private nor secure, and has big problems with scalability.
If a transaction is to be processed by many people, one achieves privacy, as with Monaro, by cryptographically padding it with a lot of misinformation, which is contrary to efficiency and scalability.
If a transaction is to be processed by many people, one achieves privacy, as with Monero, by cryptographically padding it with a lot of misinformation, which is contrary to efficiency and scalability.
The efficient and scalable way to do privacy is not to share the
information at all. Rather we should arrange matters so that

View File

@ -151,7 +151,7 @@ work, rather than proof of share, and the states computers can easily mount
a fifty one percent attack on proof of work. We need a namecoin like system
but based on proof of share, rather than proof of work, so that for the state
to take it over, it would need to pay off fifty one percent of the
stakeholders and thus pay off the people who are hiding behind the name
shareholders and thus pay off the people who are hiding behind the name
system to perform untraceable crypto currency transactions and to speak the
unspeakable.

View File

@ -3,7 +3,7 @@
width="width: 100%" height="100%"
viewBox="-2500 -2400 4870 4300"
style="background-color:#ddd">
<g fill="none" font-family="Georgia" font-size="200"
<g fill="none" font-family="DejaVu Serif, serif" font-size="200"
font-weight="400"
>
<path stroke="#d8d800" stroke-width="36.41"


View File

@ -2,7 +2,7 @@
xmlns="http://www.w3.org/2000/svg"
width="width: 100%" height="100%"
viewBox="-2 -2 4 4">
<g fill="#0F0" font-family="Georgia" font-size="2.4">
<g fill="#0F0" font-family="DejaVu Serif, serif" font-size="2.4">
<g id="h3">
<g id="h2">
<path id="flame" stroke="#0D0" stroke-width="0.05"


View File

@ -1,7 +1,7 @@
body {
max-width: 30em;
margin-left: 1em;
font-family: "Georgia, Times New Roman", Times, serif;
font-family:"DejaVu Serif", "Georgia", serif;
font-style: normal;
font-variant: normal;
font-weight: normal;
@ -16,8 +16,6 @@ td, th {
padding: 0.5rem;
text-align: left;
}
code{white-space: pre-wrap;
}
span.smallcaps{font-variant: small-caps;
}
span.underline{text-decoration: underline;
@ -45,13 +43,17 @@ td, th {
text-align: left;
}
pre.terminal_image {
font-family: 'Lucida Console';
background-color: #000;
background-color: #000;
color: #0F0;
font-size: 75%;
white-space: no-wrap;
}
overflow: auto;
}
pre.terminal_image > code { white-space: pre; position: relative;
}
pre.text { overflow: auto; }
pre.text > code { white-space: pre; }
code {font-family: "DejaVu Sans Mono", "Lucida Console", sans-serif;}
* { box-sizing: border-box;}
.logo-header {

View File

@ -1,3 +1,10 @@
body {
max-width: 30em;
margin-left: 1em;
font-family:"DejaVu Serif", "Georgia", serif;
font-style: normal;
font-variant: normal;
font-weight: normal;
font-stretch: normal;
font-size: 100%;
}

View File

@ -106,7 +106,7 @@ their name and assets in a frequently changing public key. Every time
money moves from the main chain to a sidechain, or from one sidechain to
another, the old coin is spent, and a new coin is created. The public key on
the mainchain coin corresponds to [a frequently changing secret that is distributed]
between the peers on the sidechain in proportion to their stake.
between the peers on the sidechain in proportion to their share.
The mainchain transaction is a big transaction between many sidechains,
that contains a single output or input from each side chain, with each
@ -145,7 +145,7 @@ necessary that what we do implement be upwards compatible with this scaling desi
## proof of share
Make the stake of a peer the value of coins (unspent transaction outputs)
Make the share of a peer the value of coins (unspent transaction outputs)
that were injected into the blockchain through that peer. This ensures that
the interests of the peers will be aligned with the whales, with the interests
of those that hold a whole lot of value on the blockchain. Same principle

View File

@ -0,0 +1,218 @@
---
title:
Core lightning in Debian
sidebar: false
...
Building lightning on Debian turned into a dead end. I just flat could not build core-lightning on Debian, due to python incompatibilities with
the managed python environment.
Bottom line is that the great evil of python is that building installs
for python projects is a nightmare. It has its equivalent of dll hell.
So nothing ever works on any system other than the exact system it was
built on.
The great strength of lua is that every lua program runs its own lua
interpreter with its own lua libraries.
Instead we have python version hell and pip hell.
So, docker.
# Failed attempt to use Docker lightning
The docker container as supplied instantly terminates with an error
because it expects to find /root/.lightning/bitcoin which of course
does not exist inside the docker container, and you cannot externally
muck with the files inside a docker container, except by running
commands that the container supports.
So to run lightning under docker, a whole lot of configuration information needs
to be supplied, which is nowhere explained.
You cannot give it command line parameters, so you have to set them
in environment variables, which do not seem to be documented,
and you have to mount the directories external to your
docker container
```bash
docker run -it --rm \
--name clightning \
-e LIGHTNINGD_NETWORK=bitcoin \
-e LIGHTNINGD_RPC_PORT=10420 \
-v $HOME/.lightning:/root/.lightning \
-v $HOME/.bitcoin:/root/.bitcoin \
elementsproject/lightningd:v23.11.2-amd64 \
lightningd --network=bitcoin --log-level=debug
```
```bash
docker run -it --rm \
--name clightning \
-e LIGHTNINGD_NETWORK=bitcoin \
-e LIGHTNINGD_RPC_PORT=10420 \
-v $HOME/.lightning:/root/.lightning \
-v $HOME/.bitcoin:/root/.bitcoin \
elementsproject/lightningd:v22.11.1 \
lightningd --help
docker run -it --rm \
--name clightning \
-e LIGHTNINGD_NETWORK=bitcoin \
-e LIGHTNINGD_RPC_PORT=10420 \
elementsproject/lightningd:v22.11.1 \
lightningd --help
```
The docker container can do one thing only: Run lightning with no
command arguments.
Which is great if I have everything set up, but in order
to have everything set up, I need documentation and examples
which do not seem readily applicable to lightning inside
docker. I will want to interact with my lightning that is
inside docker with lightning-cli, which lives outside
docker, and because of python install hell, I will want
to use plugins that live inside docker, with which I will
interact using a lightning-cli that lives outside docker.
But the trouble is that docker comes with a pile of scripts and plugins, and
the coupling between these and the docker image is going to need
a PhD in dockerology.
The docker image just dies, because it expects no end of stuff that
just is not there.
## Install docker
```bash
# remove any dead and broken obsolete dockers that might be hanging around
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install -y pinentry-gnome3 tor parcimonie xloadimage scdaemon pinentry-doc
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the repository to Apt sources:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian bookworm stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
cat /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker ${USER}
docker run hello-world
```
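Note that the group change from `usermod` only takes effect on a new
login, so the final `docker run hello-world` may fail with a permission
error until you log out and back in. To pick the group up in the
current shell (a common workaround):

```bash
newgrp docker
```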
## install lightning within docker
Check available images in [docker hub](https://hub.docker.com/r/elementsproject/lightningd/tags){target="_blank"}
```bash
docker pull elementsproject/lightningd:v23.11.2-amd64
docker images
```
# Failed attempt to build core lightning in Debian
## Set up a non Debian python
Core lightning requires `python` and `pip`. Debian does not like outsiders mucking with its fragile and complicated python.
So you have to set up a virtual environment in which you are free
to do what you like without affecting the rest of Debian:
The «funny brackets» mean that you do not copy the text inside
them, which is only there as an example.
You have to substitute your own python version, mutatis mutandis,
which you learned by typing `python3 -V`
```bash
sudo apt update
sudo apt -y full-upgrade
sudo apt install -y python3-pip
sudo apt install -y build-essential libssl-dev libffi-dev
sudo apt install -y python3-dev python3-venv
python3 -V
mkdir mypython
cd mypython
python«3.11» -m venv my_env
```
You now have your own python environment, which you activate with the command
```bash
source my_env/bin/activate
```
Within this environment, you no longer have to use python3 and pip3, nor should you use them.
You just use python and pip, which means that all those tutorials
on python projects out there on the web now work.

All your python stuff that is not part of Debian managed python should be inside your `mypython` directory, and when you are done you leave the environment with
```bash
deactivate
```
## building core lightning
Outside your python environment:
```bash
sudo apt-get install -y \
autoconf automake build-essential git libtool libsqlite3-dev \
python3 python3-pip net-tools zlib1g-dev libsodium-dev gettext \
python3-json5 python3-flask python3-gunicorn \
cargo rustfmt protobuf-compiler
```
Then, inside your mypython directory, activate your python environment and install the python stuff
```bash
source my_env/bin/activate
pip install --upgrade pip
pip install poetry
pip install flask-cors flask_restx pyln-client flask-socketio \
gevent gevent-websocket mako
pip install -r plugins/clnrest/requirements.txt
```
The instructions on the web for ubuntu say `--user` and `pip3`, but on Debian, we accomplish the equivalent through using a virtual environment.

`--user` is Ubuntu's way of keeping your custom python separate,
but Debian insists on a more total separation.

Which means that anything that relies heavily on custom python has
to be inside this environment, so to be on the safe side, we are
going to launch core lightning with a bash script that first goes
into this environment, as sketched below.

`poetry` is an enormous pile of python tools, which core lightning uses,
and which probably will not work outside this environment.
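A minimal sketch of such a launcher, assuming the virtual environment
created above, mutatis mutandis:

```bash
#!/bin/bash
# enter the virtual environment, then hand over to lightningd
source "$HOME/mypython/my_env/bin/activate"
exec lightningd --network=bitcoin --log-level=debug
```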
Inside your environment:
```bash
git clone https://github.com/ElementsProject/lightning.git
cd lightning
git tag
# find the most recent release version
# mutatis mutandis
git checkout «v23.11.2»
./configure
make
# And then it dies
```
@ -30,6 +30,17 @@ And a gpt partition table for a linux system should look something like this
To build a cross platform application, you need to build in a cross
platform environment.
If you face grief launching an installer for your virtual box device,
make sure the virtual network is in bridged mode,
and get into the live cd command line
```bash
sudo -i
apt-get update
apt-get install debian-installer-launcher
debian-installer-launcher --text
```
## Setting up Ubuntu in VirtualBox
Having a whole lot of different versions of different machines, with a
@ -68,14 +79,46 @@ the OS in ways the developers did not anticipate.
## Setting up Debian in VirtualBox
### virtual box Debian install bug
Debian 12 (bookworm) install fails on a UEFI virtual disk.
The workaround is to install a base Debian 11 system as UEFI
in Virtual Box. Update /etc/apt/sources.list from Bullseye
to Bookworm. Run apt update and apt upgrade.
After that you have a functioning Debian 12 UEFI Virtual machine.
### server in virtual box
If it is a server and you are using nfs, you don't need guest additions, therefore
do not need module-assistant, and may not need the build stuff.
```bash
sudo -i
apt-get -qy update
apt-get -qy full-upgrade
apt-get -qy install dnsutils curl sudo dialog rsync zstd avahi-daemon nfs-common
```
To access disks on the real machine, create the empty directory `«/mytarget»` and add the line
```bash
«my-nfs-server»:/«my-nfs-subdirectory» «/mytarget» nfs4 _netdev 0 0
```
to `/etc/fstab`

To test that it works without rebooting: `mount «/mytarget»`
### Guest Additions
If you are running it through your local machine, you want to bring up
the gui and possibly the disk access through guest additions
To install guest additions on Debian:
```bash
sudo -i
apt-get -qy update && apt-get -qy install build-essential module-assistant
apt-get -qy install git dnsutils curl sudo dialog rsync zstd avahi-daemon nfs-common
apt-get -qy full-upgrade
m-a -qi prepare
apt autoremove -qy
@ -122,7 +165,7 @@ autologin-user-timeout=0
nano /etc/default/grub
```
The full configuration built by `update-grub` is built from the file `/etc/default/grub`, the file `/etc/fstab`, and the files in `/etc/grub.d/`.
Among the generated files, the key file is `menu.cfg`, which will contain a boot entry for any additional disk containing a linux kernel that you have attached to the system. You might then be able to boot into that other linux, and recreate its configuration files within it.
@ -143,10 +186,26 @@ Go to go to system / control center/ Hardware/ Power Management and turn off the
In the shared directory, I have a copy of /etc and ~/.ssh ready to roll, so I just go into the shared directory, copy them over, `chmod` .ssh and reboot.
Alternatively [manually set them](#setting-up-ssh) then
```bash
chmod 700 ~/.ssh && chmod 600 ~/.ssh/*
```
### make the name available
You can manually edit the hosts file, or the `.ssh/config` file, which is a pain if you have a lot of machines, or fix your router to hand out
names, which cheap routers do not do and every router is different.
Or, if it is networked in virtual box bridged mode,
```bash
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install avahi-daemon
```
This daemon will multicast the name and IP address to every machine on the network so that you can access the machine as «name».local
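For example, mutatis mutandis:

```bash
ssh «user»@«name».local
```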
### Set the hostname
check the hostname and dns domain name with
@ -209,7 +268,7 @@ Change the lower case `h` in `PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$
I also like the bash aliases:
```text
alias ll="ls -hal"
alias ll="ls --color=auto -hal --time-style=iso"
mkcd() { mkdir -p "$1" && cd "$1"; }
```
@ -220,13 +279,42 @@ Setting them in `/etc/bash.bashrc` sets them for all users, including root. But
The line in fstab for optical disks needs to be given the options `udf,iso9660 ro,users,auto,nofail` so that it automounts, and any user can eject it.

Confusingly, `nofail` means that it is allowed to fail, which of course it will
if there is nothing in the optical drive. If you have `auto` but not `nofail` the system
will not boot into multi-user, let alone the gui, unless there is something in the drive.
You get dropped into single user root logon (where you will see an error message
regarding the offending drive and can edit the offending fstab).
`user,noauto` means that the user has to mount it, and only the user that
mounted it can unmount it. `user,auto` is likely to result in root mounting it,
and if `root` mounted it, as it probably did, you have a problem. That
problem is fixed by saying `users` instead of `user`.
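So a plausible fstab line for the optical drive, assuming it shows up
as /dev/sr0 and mounts at /media/cdrom0, would be:

```default
/dev/sr0 /media/cdrom0 udf,iso9660 ro,users,auto,nofail 0 0
```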
## Setting up Ubuntu in VirtualBox
The same as for Debian, except that the desktop edition lacks openssh-server, it already has avahi-daemon to make the name available, and the install program will set up auto login for you.
```bash
sudo apt install openssh-server
```
Then ssh in
### Guest Additions
```bash
sudo -i
apt-get -qy update && apt-get -qy install build-essential dkms
apt-get -qy install git dnsutils curl sudo dialog rsync zstd
apt-get -qy full-upgrade
apt autoremove -qy
```
Then you click on the autorun.sh in the cdrom through the gui.
```bash
usermod -a -G vboxsf cherry
```
## Setting up OpenWrt in VirtualBox
OpenWrt is a router, and needs a network to route. So you use it to route a
@ -266,15 +354,11 @@ This does not necessarily correspond to order in which virtual drives have
been attached to the virtual machine
Be warned that the debian setup, when it encounters multiple partitions
that have the same UUID (because one system was cloned from the other)
is apt to make seemingly random decisions as to which partitions to mount to what. So, you should boot from a live
cd-rom, and attach the system to be manipulated to that.
cd-rom, and attach the system to be manipulated to that.
The problem is that virtual box clone does not change the partition UUIDs. To address this, attach the disk to another linux system without mounting it, and change the UUIDs with `gparted`, which will frequently refuse to change a UUID because it thinks it knows
better than you do, and will not do anything that would screw up grub.
`boot-repair` can fix a `grub` on the boot drive of a linux system different
from the one it itself booted from, but to boot a cdrom on an oracle virtual
box efi system, you cannot have anything attached to SATA. Attach the disk
immediately after the boot-repair grub menu comes up.
This also protects you from accidentally manipulating the wrong system.
The resulting repaired system may nonetheless take a strangely long time
to boot, because it is trying to resume a suspended linux, which may not
@ -415,15 +499,15 @@ For example:
```terminal_image
root@example.com:~#lsblk -o name,type,size,fsuse%,fstype,fsver,mountpoint,UUID
NAME TYPE SIZE UUID FSTYPE MOUNTPOINT
sda disk 20G
├─sda1 part 33M E470-C4BA vfat /boot/efi
├─sda2 part 3G 764b1b37-c66f-4552-b2b6-0d48196198d7 swap [SWAP]
└─sda3 part 17G efd3621c-63a4-4728-b7dd-747527f107c0 ext4 /
sdb disk 20G
├─sdb1 part 33M E470-C4BA vfat
├─sdb2 part 3G 764b1b37-c66f-4552-b2b6-0d48196198d7 swap
└─sdb3 part 17G efd3621c-63a4-4728-b7dd-747527f107c0 ext4
sr0 rom 1024M
root@example.com:~# mkdir -p /mnt/sdb2
root@example.com:~# mount /dev/sdb2 /mnt/sdb2
@ -700,7 +784,7 @@ and dangerous)
It is easier in practice to use bash (or, on Windows, git-bash) to manage keys than PuTTYgen. You generate a key pair with
```bash
ssh-keygen -t ed25519 -f ssh_host_ed25519_key
```
(I don't trust the other key algorithms, because I suspect the NSA has been up to cleverness with the details of the implementation.)
@ -746,18 +830,20 @@ nano /etc/ssh/sshd_config
Your config file should have in it
```default
PasswordAuthentication no
UsePAM no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password
PubkeyAuthentication yes
AllowAgentForwarding yes
AllowTcpForwarding yes
GatewayPorts yes
X11Forwarding yes
TCPKeepAlive yes
PermitTunnel yes
HostKey /etc/ssh/ssh_host_ed25519_key
ciphers chacha20-poly1305@openssh.com
macs hmac-sha2-256-etm@openssh.com
kexalgorithms curve25519-sha256
@ -816,7 +902,6 @@ only use the ones I have reason to believe are good and securely
implemented. Hence the lines:
```default
HostKey /etc/ssh/ssh_host_ed25519_key
ciphers chacha20-poly1305@openssh.com
macs hmac-sha2-256-etm@openssh.com
kexalgorithms curve25519-sha256
@ -866,64 +951,6 @@ the ssh terminal window.
Once you can ssh into your cloud server without a password, you now need to update it and secure it with ufw. You also need rsync, to move files around
### Remote graphical access over ssh
```bash
ssh -cX root@reaction.la
```
`c` stands for compression, and `X` for X11.
-X overrides the per host setting in `~/.ssh/config`:
```default
ForwardX11 yes
ForwardX11Trusted yes
```
Which overrides the `host *` setting in `~/.ssh/config`, which overrides the settings for all users in `/etc/ssh/ssh_config`
If ForwardX11 is set to yes, as it should be, you do not need the X. Running a gui app over ssh just works. There is a collection of useless toy
apps, `x11-apps` for test and demonstration purposes.
I never got this working in windows, because of no end of mystery
configuration issues, but it works fine on Linux.
Then, as root on the remote machine, you issue a command to start up the
graphical program, which runs as an X11 client on the remote
machine, as a client of the X11 server on your local machine. This is a whole lot easier than setting up VNC.
If your machine is running inside an OracleVM, and you issue the
command `startx` as root on the remote machine to start the remote
machines desktop in the X11 server on your local OracleVM, it instead
seems to start up the desktop in the OracleVM X11 server on your
OracleVM host machine. Whatever, I am confused, but the OracleVM
X11 server on Windows just works for me, and the Windows X11 server
just does not. On Linux, just works.
Everyone uses VNC rather than SSH, but configuring login and security
on VNC is a nightmare. The only usable way to do it is to turn off all
security on VNC, use `ufw` to shut off outside access to the VNC host's port,
and access the VNC host through SSH port forwarding.
X11 results in a vast amount of unnecessary round tripping, with the result
that things get unusable when you are separated from the other computer
by a significant ping time. VNC has less of a ping problem.
X11 is a superior solution if your ping time is a few milliseconds or less.
VNC is a superior solution if your ping time is humanly perceptible, fifty
milliseconds or more. In between, it depends.
I find no solution satisfactory. Graphic software really is not designed to be used remotely. Javascript apps are. If you have a program or
functionality intended for remote use, the gui for that capability has to be
javascript/css/html. Or you design a local client or master that accesses
and displays global host or slave information.
The best solution if you must use graphic software remotely and have a
significant ping time is to use VNC over SSH. Albeit VNC always exports
an entire desktop, while X11 exports a window. Though really, the best solution is to not use graphic software remotely, except for apps.
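A sketch of VNC over SSH, assuming a VNC server on the remote machine
listening on display :1 (port 5901):

```bash
# forward the remote VNC port through the ssh tunnel, so the VNC
# port itself never needs to be open to the outside
ssh -L 5901:localhost:5901 «user»@«host»
# then point your VNC viewer at localhost:5901
```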
## Install minimum standard software on the cloud server
```bash
@ -939,6 +966,94 @@ echo "Y
" |ufw enable && ufw status verbose
```
### Remote graphical access
This is done by xrdp and a windowing system. I use Mate.

The server should not boot up with the windowing system running,
because it mightily slows down boot, sucks up lots of memory,
and because you cannot get at the desktop created at boot through xrdp
-- it runs a different instance of the windowing system.

The server should not be created as a desktop system,
because the default install sets no end of mysterious defaults
differently on a multi user command line system to what it does
on a desktop system, which is configured to provide various things
convenient and desirable in a system like a laptop,
but undesirable and inconvenient in a server.
You should create it as a server,
and install the desktop system later through the command line,
over ssh, not through the install system's gui, because the
gui install is going to do mystery stuff behind your back.
Set up the desktop after you have remote access over ssh working
At this point, you should no longer be using the keyboard and screen
you used to install linux, but a remote keyboard and screen.
```bash
apt update && apt upgrade -y
apt install -y mate-desktop-environment
# on ubuntu apt install ubuntu-mate-desktop
systemctl get-default
systemctl set-default multi-user.target
# on a system that was created as a server,
# set-default graphical.target
# may not work anyway
apt install xrdp -y
systemctl start xrdp
systemctl status xrdp
systemctl stop xrdp
usermod -a -G ssl-cert xrdp
systemctl start xrdp
systemctl status xrdp
systemctl enable xrdp
ufw allow 3389
ufw reload
```
This does not result in, or even allow, booting into the
mate desktop, because it does not supply lightdm, X-windows
and all that. It enables xrdp to run the mate desktop remotely.
xrdp has its own graphical login manager in place of lightdm, and does
not have anything to display x-windows locally.

If you want the option of locally booting into the mate desktop you
also want LightDM and local X11, which are provided by:
```bash
apt update && apt upgrade -y
apt install task-mate-desktop
```
```terminal_image
$ systemctl status xrdp
● xrdp.service - xrdp daemon
Loaded: loaded (/lib/systemd/system/xrdp.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2024-01-06 20:38:07 UTC; 1min 19s ago
Docs: man:xrdp(8)
man:xrdp.ini(5)
Process: 724 ExecStartPre=/bin/sh /usr/share/xrdp/socksetup (code=exited, status=0/S>
Process: 733 ExecStart=/usr/sbin/xrdp $XRDP_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 735 (xrdp)
Tasks: 1 (limit: 2174)
Memory: 1.4M
CPU: 19ms
CGroup: /system.slice/xrdp.service
└─735 /usr/sbin/xrdp
systemd[1]: Starting xrdp daemon...
xrdp[733]: [INFO ] address [0.0.0.0] port [3389] mode 1
xrdp[733]: [INFO ] listening to port 3389 on 0.0.0.0
xrdp[733]: [INFO ] xrdp_listen_pp done
systemd[1]: xrdp.service: Can't open PID file /run/xrdp/xrdp.pid >
systemd[1]: Started xrdp daemon.
xrdp[735]: [INFO ] starting xrdp with pid 735
xrdp[735]: [INFO ] address [0.0.0.0] port [3389] mode 1
xrdp[735]: [INFO ] listening to port 3389 on 0.0.0.0
xrdp[735]: [INFO ] xrdp_listen_pp done
```
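To connect from another machine, use any rdp client. For example, with
the FreeRDP client, mutatis mutandis:

```bash
xfreerdp /v:«host»:3389 /u:«user»
```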
## Backing up a cloud server
`rsync` is the openssh utility to synchronize directories locally and
@ -1116,11 +1231,13 @@ All the other files dont matter. The conf file gets you to the named
server. The contents of /var/www/reaction.la are the html files, the
important one being index.html.
[install certbot]:https://certbot.eff.org/instructions
"certbot install instructions" {target="_blank"}
To get free, automatically installed and configured, ssl certificates
and configuration [install certbot], then
```bash
apt-get -qy install certbot python-certbot-apache
certbot register --register-unsafely-without-email --agree-tos
certbot --apache
```
@ -1437,12 +1554,15 @@ your domain name is already publicly pointing to your new host, and your
new host is working as desired, without, however, ssl/https that is
great.
To get free, automatically installed and configured, ssl certificates
and configuration [install certbot], then
```bash
# first make sure that your http only website is working as
# expected on your domain name and each subdomain.
# certbots many mysterious, confusing, and frequently
# changing behaviors expect a working environment.
apt-get -qy install certbot python-certbot-nginx
certbot register --register-unsafely-without-email --agree-tos
certbot --nginx
# This also, by default, sets up automatic renewal,
@ -1460,7 +1580,6 @@ server. Meanwhile, for the rest of the world, the domain name continues to
map to the old server, until the new server works.)
```bash
apt-get -qy install certbot python-certbot-nginx
certbot register --register-unsafely-without-email --agree-tos
certbot run -a manual --preferred-challenges dns -i nginx \
-d reaction.la -d blog.reaction.la
@ -1480,10 +1599,7 @@ the big boys can play.
But if you are doing this, not on your test server, but on your live server, the easy way, which will also setup automatic renewal and configure your webserver to be https only, is:
```bash
certbot --nginx
```
If instead you already have a certificate, because you copied over your
@ -3032,6 +3148,12 @@ ssh and gpg key under profile and settings / ssh gpg keys, and to
prevent the use of https/certificate authority as a backdoor, require
commits to be gpg signed by people listed as collaborators.
Git now supports signing commits with ssh keys, so probably
we should go on the ssh model, rather than the gpg model,
but I have not yet deleted the great pile of stuff concerning gpg
because I have not yet moved to ssh signing, and do not yet know
what awaits if we base Gitea identity on ssh keys.
It can be set to require everything to be ssh signed, thus moving our
identity model from username/password to ssh key: Zooko minus names instead of minus keys.
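For reference, ssh commit signing on the client side looks something
like this (a sketch, assuming a reasonably recent git and an existing
ed25519 key, mutatis mutandis):

```bash
# sign commits with an ssh key instead of a gpg key
git config --global gpg.format ssh
git config --global user.signingkey «~/.ssh/id_ed25519.pub»
git config --global commit.gpgsign true
```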
@ -3621,6 +3743,71 @@ flatpack environment, is going to not have the same behaviours. The programmer
has total control over the environment in which his program runs which means
that the end user does not.
# tor
Documenting this here because all the repository based methods
of installing tor that are everywhere documented don't work
and are apt to screw up your system.
## enabling tor services
This is needed by programs that use tor, such as cln (core lightning), but not needed by the tor browser.
```bash
apt-get -qy install tor
systemctl enable --now tor
nano /etc/tor/torrc
```
In `/etc/tor/torrc` uncomment or add
```default
ExitPolicy reject *:* # no exits allowed
ControlPort 9051
CookieAuthentication 1
CookieAuthFile /var/lib/tor/control_auth_cookie
CookieAuthFileGroupReadable 1
DirPort 0
ORPort 0
```
ControlPort should be externally closed, so that only applications running on your
computer can get to it. It is a good idea to firewall this port so that nothing
outside the computer can control tor. Because the cookie file is group readable,
applications running on your computer can read it to control tor through the
control port.

DirPort and ORPort tell tor to advertise that these ports are open, and if set,
they should actually be open -- whereupon you are running as a bridge. You
probably do not want that, so set them to zero, though bridges are good for
obfuscation traffic.
If you want to run as a bridge to create obfuscation:
```default
DirPort «your external ip address»:9030
ORPort «your external ip address»:9001
```
## installing tor browser
[Torproject on Github](https://torproject.github.io/manual/installation/){target="_blank"} provides information that actually works.
Download the tar file to your home directory, extract it, and execute the command as an ordinary user, no `sudo`, no root, no mucking around with `apt`
```bash
tar -xf tor-browser-linux-x86_64-13.0.8.tar.xz
cd tor-browser
./start-tor-browser.desktop --register-app
```
The next time you do a graphical login, tor will just be there
and will just work, with no fuss or drama. And it will itself
check for updates and nag you for them when needed.
# Censorship resistant internet
## [My planned system](social_networking.html)
@ -3631,6 +3818,10 @@ Private video conferencing
[To set up a Jitsi meet server](andchad.net/jitsi)
## Tox (TokTok)
A video server considerably more freedom oriented than Jitsi
## [Zeronet](https://zeronet.io/docs)
Namecoin plus bittorrent based websites. Allows dynamic content.
@ -3645,15 +3836,66 @@ Non instant text messages. Everyone receives all messages in a stream, but only
Not much work has been done on this project recently, though development and maintenance continues in a desultory fashion.
## Freenet

See [libraries](../libraries.html#freenet)

# Tcl Tk

An absolutely brilliant and ingenious language for producing cross
platform UI. Unfortunately I looked at a website that was kept around for
historical reasons, and concluded that development had stopped twenty years ago.

In fact development continues, and it is very popular, but being absolutely
typeless (everything is conceptually a string, including running code)
any large program becomes impossible for multiple people to work on.

Best suited for relatively small programs that hastily glue stuff together - it
is, more or less, a better bash, capable of running on any desktop, and
capable of running a gui.
# Network file system
This is most useful when you have a lot of real and
virtual machines on your local network
## Server
```bash
sudo apt update && sudo apt upgrade -qy
sudo apt install -qy nfs-kernel-server nfs-common
sudo nano /etc/default/nfs-common
```
In the configuration file `nfs-common` change the parameter NEED_STATD to no and NEED_IDMAPD to yes. NFSv4 requires NEED_IDMAPD, the ID mapping daemon, which maps user and group identities between the server and client.
```terminal_image
NEED_STATD="no"
NEED_IDMAPD="yes"
```
Then to disable nfs3 `sudo nano /etc/default/nfs-kernel-server`
```terminal_image
RPCNFSDOPTS="-N 2 -N 3"
RPCMOUNTDOPTS="--manage-gids -N 2 -N 3"
```
then to export the root of your nfs file system: `sudo nano /etc/exports`
```terminal_image
/nfs 192.168.1.0/24(rw,async,fsid=0,crossmnt,no_subtree_check,no_root_squash)
```
```bash
sudo systemctl restart nfs-server
sudo showmount -e
```
## client
```bash
sudo apt update && sudo apt upgrade -qy
sudo apt install -qy nfs-common
sudo mkdir «mydirectory»
sudo nano /etc/fstab
```
```terminal_image
# <file system> <mount point> <type> <options> <dump> <pass>
«mynfsserver».local:/ «mydirectory» nfs4 _netdev 0 0
```
Where the «funny brackets», as always, indicate mutatis mutandis.
```bash
sudo systemctl daemon-reload
sudo mount -a
sudo df -h
```
@ -146,6 +146,7 @@ On the server
sudo mkdir -p /etc/wireguard
wg genkey | sudo tee /etc/wireguard/server_private.key | wg pubkey | sudo tee /etc/wireguard/server_public.key
sudo chmod 600 /etc/wireguard/ -R
sudo chmod 700 /etc/wireguard
```
On the client
@ -154,6 +155,7 @@ On the client
sudo mkdir -p /etc/wireguard
wg genkey | sudo tee /etc/wireguard/private.key | wg pubkey | sudo tee /etc/wireguard/public.key
sudo chmod 600 /etc/wireguard/ -R
sudo chmod 700 /etc/wireguard
```
# Configure Wireguard on server
@ -173,26 +175,26 @@ The curly braces mean that you do not copy the text inside the curly braces, whi
```default
[Interface]
# public key = «CHRh92zutofXTapxNRKxYEpxzwKhp3FfwUfRYzmGHR4=»
Address = 10.10.10.1/24, «AAAA:AAAA:AAAA:AAAA»:«BBBB:BBBB:BBBB»:0001/112
ListenPort = 115
PrivateKey = «iOdkQoqm5oyFgnCbP5+6wMw99PxDb7pTs509BD6+AE8=»
[Peer]
PublicKey = «rtPdw1xDwYjJnDNM2eY2waANgBV4ejhHEwjP/BysljA=»
AllowedIPs = 10.10.10.4/32, «AAAA:AAAA:AAAA:AAAA»:«BBBB:BBBB:BBBB»:0009/128
[Peer]
PublicKey = «YvBwFyAeL50uvRq05Lv6MSSEFGlxx+L6VlgZoWA/Ulo=»
AllowedIPs = 10.10.10.8/32, «AAAA:AAAA:AAAA:AAAA»:«BBBB:BBBB:BBBB»:0019/128
[Peer]
PublicKey = «XpT68TnsSMFoZ3vy/fVvayvrQjTRQ3mrM7dmyjoWJgw=»
AllowedIPs = 10.10.10.12/32, «AAAA:AAAA:AAAA:AAAA»:«BBBB:BBBB:BBBB»:0029/128
[Peer]
PublicKey = «f2m6KRH+GWAcCuPk/TChzD01fAr9fHFpOMbAcyo3t2U=»
AllowedIPs = 10.10.10.16/32, «AAAA:AAAA:AAAA:AAAA»:«BBBB:BBBB:BBBB»:0039/128
```
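Once the config is in place, bring the interface up (a sketch,
assuming the config above was saved as /etc/wireguard/wg0.conf):

```bash
wg-quick up wg0
# confirm the interface and peers are live
wg show
```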
@ -212,6 +214,16 @@ which ought to be changed". In other words, watch out for those «...» .
Or, as those that want to baffle you would say, metasyntactic variables are enclosed in «...» .
In the above example «AAAA:AAAA:AAAA:AAAA» is the 64 bits of the IPv6
address range of your host and «BBBB:BBBB:BBBB» is a random 48 bit subnet
that you invented for your clients.
This should be a random forty eight bit number to avoid collisions,
because who knows what other subnets have been reserved.
This example supports IPv6 as well as IPv4, but getting IPv6 working
is likely to be hard, so initially forget about IPv6, and just stick to IPv4 addresses.
Where:
@ -225,8 +237,27 @@ Change the file permission mode so that only root user can read the files. Priv
```bash
sudo chmod 600 /etc/wireguard/ -R
sudo chmod 700 /etc/wireguard
```
## IPv6
This just does not work on many hosts, depending on arcane
incomprehensible and always different and inaccessible
aspects of their networking setup. But when it works, it works.
For IPv6 to work without network address translation, you just
give each client a subrange of the host's IPv6 address range
(which you may not know, and which could be changed underneath you).
When it works, no network address translation is needed.
When IPv6 network address translation is needed,
you probably will not be able to get it working anyway,
because if it is needed,
it is needed because the host network is doing something
too clever by half with IPv6, and you don't know what they are doing,
and they probably do not know either.
## Configure IP Masquerading on the Server
We need to set up IP masquerading in the server firewall, so that the server becomes a virtual router for VPN clients. I will use UFW, which is a front end to the iptables firewall. Install UFW on Debian with:
The above lines will append (`-A`) a rule to the end of the `POSTROUTING` chain of
Like your home router, it means your client system behind the nat has no open ports.
You may want to open some ports, for example the bitcoin port 8333 so that you can run bitcoin core, and the Monero ports.
```terminal_image
NAT table rules
@ -352,21 +383,27 @@ ufw route allow in on wg0
ufw route allow out on wg0
ufw allow in on wg0
ufw allow in from 10.10.10.0/24
ufw allow in from «AAAA:AAAA:AAAA:AAAA»:«BBBB:BBBB:BBBB»:0001/112
ufw allow «51820»/udp
ufw allow to 10.10.10.1/24
# Danger Will Robinson
ufw allow to «AAAA:AAAA:AAAA:AAAA»:«BBBB:BBBB:BBBB»:0001/112
# This last line leaves your clients naked on the IPv6
# global internet with their own IPv6 addresses
# as if they were in the cloud with no firewall.
```
As always «...» means that this is an example value, and you need to substitute your actual value. "_Mutatis mutandis_" means "changing that which should be changed", in other words, watch out for those «...» .
Note that the last line is intended to leave your clients naked on the IPv6
global internet with their own IPv6 addresses, as if they were in the cloud
with no firewall. This is often desirable for linux systems, but dangerous
for windows, android, and mac systems which always have loads of
undocumented closed source mystery meat processes running that do who
knows what.
It would be safer to only allow in specific ports.
You could open only part of the IPv6 subnet to incoming, and put
windows, mac, and android clients in the part that is not open.
@ -484,7 +521,6 @@ And add allow recursion for your subnets.
After which it should look something like this:
```terminal_image
:~# cat /etc/bind/named.conf.options | tail -n 9
acl bogusnets {
@ -497,7 +533,7 @@ acl my_net {
::1;
116.251.216.176;
10.10.10.0/24;
«AAAA:AAAA:AAAA:AAAA»::/64;
};
options {
@ -605,13 +641,13 @@ for example, and has to be customized. Mutas mutandis. Metasyntactic variables
```default
[Interface]
Address = 10.10.10.4/32, «AAAA:AAAA:AAAA:AAAA»:«BBBB:BBBB:BBBB»:0009/128
DNS = 10.10.10.1
PrivateKey = «cOFA+x5UvHF+a3xJ6enLatG+DoE3I5PhMgKrMKkUyXI=»
[Peer]
PublicKey = «kQvxOJI5Km4S1c7WXu2UZFpB8mHGuf3Gz8mmgTIF2U0=»
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = «123.45.67.89:51820»
PersistentKeepalive = 25
```
@ -622,7 +658,7 @@ Where:
- `DNS`: specify 10.10.10.1 (the VPN server) as the DNS server. It will be configured via the `resolvconf` command. You can also specify multiple DNS servers for redundancy like this: `DNS = 10.10.10.1 8.8.8.8`
- `PrivateKey`: The client's private key, which can be found in the `/etc/wireguard/private.key` file on the client computer.
- `PublicKey`: The server's public key, which can be found in the `/etc/wireguard/server_public.key` file on the server.
- `AllowedIPs`: 0.0.0.0/0 represents the whole IPv4 Internet, which means all IPv4 traffic to the Internet should be routed via the VPN. ::/0 represents the whole IPv6 Internet. If you specify one but not the other, and your client has both IPv4 and IPv6 capability, only half your traffic will go through the VPN. If your client has both capabilities, but your VPN does not, this is bad, but things still work.
- `Endpoint`: The public IP address and port number of the VPN server. Replace 123.45.67.89 with your server's real public IP address and the port number with your server's real port number.
- `PersistentKeepalive`: Send an authenticated empty packet to the peer every 25 seconds to keep the connection alive. If PersistentKeepalive isn't enabled, the VPN server might not be able to ping the VPN client.
@ -632,10 +668,23 @@ Change the file mode so that only root user can read the files.
```bash
chmod 600 /etc/wireguard/ -R
chmod 700 /etc/wireguard
```
Start WireGuard.
```bash
wg-quick up /etc/wireguard/wg-client0.conf
```
To stop it, run
```bash
wg-quick down /etc/wireguard/wg-client0.conf
```
You can also use the systemd service to start WireGuard.
```bash
systemctl start wg-quick@wg-client0.service
```
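And to start the tunnel automatically at boot, assuming the same
config name:

```bash
systemctl enable wg-quick@wg-client0.service
```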
@ -652,6 +701,22 @@ Check its status:
systemctl status wg-quick@wg-client0.service
```
The status should look something like this:
```terminal_image
# systemctl status wg-quick@wg-client0.service
wg-quick@wg-client0.service - WireGuard via wg-quick(8) for wg/client0
Loaded: loaded (/lib/systemd/system/wg-quick@.service; enabled; preset: enabled)
Active: inactive (dead)
Docs: man:wg-quick(8)
man:wg(8)
https://www.wireguard.com/
https://www.wireguard.com/quickstart/
https://git.zx2c4.com/wireguard-tools/about/src/man/wg-quick.8
https://git.zx2c4.com/wireguard-tools/about/src/man/wg.8
```
Now go to this website: `http://icanhazip.com/` to check your public IP address. If everything went well, it should display your VPN server's public IP address instead of your client computer's public IP address.
You can also run the following command to get the current public IP address.
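For example, using the same service:

```bash
curl http://icanhazip.com/
```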
@ -1,301 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<style>
body {
max-width: 30em;
margin-left: 2em;
}
p.center {
text-align:center;
}
</style>
<link rel="shortcut icon" href="../rho.ico">
<title>Wallet Implementation</title>
</head>
<body>
<p><a href="./index.html"> To Home page</a> </p>
<h1>Wallet Implementation</h1><p>
The primary function of the wallet file is to provide a secret key for
a public key, though eventually we stuff all sorts of user odds and ends
into it.</p><p>
In Bitcoin, this is simply a pile of random secrets, randomly generated, but obviously it is frequently useful to have them not random, but merely seemingly random to outsiders. One important and valuable application of this is the paper wallet, where one can recreate the wallet from its master secret, because all the seemingly random secret keys are derived from a single master secret. But this does not cover all use cases.</p><p>
We care very much about the case where a big important peer in a big central
node of the internet, and the owner skips out with the paper key that owns the
peers reputation in his pocket, and sets up the same peer in another
jurisdiction, and everyone else talks to the peer in the new jurisdiction,
and ignores everything the government seized.</p><p>
The primary design principle of our wallet is to bypass the DNS and ICANN, so
that people can arrange transactions securely. The big security hole in
Bitcoin is not the one that Monero fixes, but that you generally arrange
transactions on the totally insecure internet. The spies care about the
metadata more than they care about the data. We need to internalize and secure
the metadata.</p><p>
We particularly want peers to be controlled by easily hidden and transported
secrets. The key design point around which all else is arranged is that if
some man owns a big peer on the blockchain, and the cops descend upon it, he
grabs a book off his bookshelf that looks like any airport book, except that
penciled in the margins of one page is his master secret,
and a list of all his nicknames. Then he skips off
to the airport and sets up his peer on a new host, and no one notices that it
has moved except for the cops who grabbed his now useless hardware. (Because
the secrets on that hardware were only valuable because their public keys
were signed by his keys, and when he starts up the new hardware, his keys
will sign some new keys on the peer and on the slave wallet on that hardware.)
</p><p>
We also want perfect forward secrecy, which is easy. Each new connection
initiation starts with a random transient public key, and any identifying
information is encrypted in the shared secret formed from that transient public
key, and the durable public key of the recipient.</p><p>
We also want off the record messaging. So we want to prove to the recipient
that the sender knows the shared secret formed from the recipients public key
and the senders transient private key, and also the shared secret formed from
the recipients public key, the senders transient private key, and the
senders durable private key. But the recipient, though he then knows the
message came from someone who has the senders durable private key, cannot
prove that to a third party, because he could have constructed that shared
secret, even if the sender did not know it. The recipient could have forged
the message and sent it to himself.</p><p>
Thus we get off the record messaging. The sender by proving knowledge of two
shared secrets, proves to the recipient knowledge of two secret keys
corresponding to the two public keys provided, but though the sender proves
it to the recipient, the recipient cannot prove it to anyone else.</p><p>
The message consists of the transient public key in the clear, which encrypts the
durable public key. Then the durable public key plus the transient public key encrypts
the rest of the message, which encryption proves knowledge of the secret key
underlying the durable public key, proves it to the recipient, but he cannot
prove it to anyone else.</p><p>
The durable public key may be followed by the schema identifier 2, which
implies the nickname follows, which implies the twofiftytwo bit hash of the
public key followed by nickname, which is the global identifier of the
pseudonym sending the message. But we could have a different schema, 3,
corresponding to a chain of signatures authenticating that public key, subject
to timeouts and indications of authority, which we will use in slave wallets
and identifiers that correspond to a corporate role in a large organization,
or in iff applications where new keys are frequently issued.</p><p>
If we have a chain of signatures, the hash is of the root public key and the
data being signed (names, roles, times, and authorities) but not the signatures
themselves, nor the public keys that are being connected to that data.</p><p>
When someone initiates a connection using such a complex identity, he sends
proof of shared knowledge of two shared secrets, and possibly a chain of
signatures, but does not sign anything with the transient public key, nor the
durable public key at the end of the chain of signatures, unless, as with a
review, he wants the message to be shared around, in which case he signs the
portion of the message that is to be shared around with that public key, but
not the part of the message that is private between sender and receiver.</p><p>
To identify the globally unique twofiftytwo bit id, the recipient hashes the
public key and the identifier that follows. Or he may already know, because
the sender has a client relationship with the recipient, that the public key
is associated with a given nickname and globally unique identifier.</p><p>
If we send the schema identifier 0, we are not sending id information, either
because the recipient already has it, or because we don't care, or because we
are only using this durable public key with this server and do not want a
global identity that can be shared between different recipients.</p><p>
So, each master wallet will contain a strong human readable and human writeable
master secret, and the private key of each nickname will be generated by
hashing the nickname with the master secret, so that when, in some new
location, he types in the code on fresh hardware and blank software, he will
get the nicknames public and private keys unchanged, even if he has to buy or
rent all fresh hardware, and is not carrying so much as a thumbdrive with him.
</p><p>
People with high value secrets will likely use slave wallets, which perform
functions analogous to a POP or IMAP server, with high value secrets on their
master wallet, which chats to their slave wallet, which does not have high
value secrets. It has a signed message from the master wallet authorizing it
to receive value in the master wallets name, and another separately signed
message containing the obfuscation shared secret, which is based on the
master wallet secret and an integer, the integer initially being the number of
the largest transaction in that name known to the master
wallet, an integer that the slave wallet increments with every request for
value. The master wallet needs to provide a name for the slave wallet, and to
recover payments, needs that name. The payments are made to a public key of
the master wallet, multiplied by a pseudo random scalar constructed from the
hash of a sequential integer with an obfuscation secret supplied by the master
wallet, so that the master wallet can recover the payments without further
communication, and so that the person making the payment knows that any
payment can only be respent by someone who has the master wallet secret key,
which may itself be the key at the end of chain of signatures identity,
which the master wallet possesses, but the slave wallet does not.</p><p>
For slave wallets to exist, the globally unique id has to be not a public
key, but rather the twofiftytwo bit hash of a rule identifying the public key.
The simplest case being hash(2|public key|name), which is the identifier used
by a master wallet. A slave wallet would use the identifier hash(3|master
public key|chain of signed ids) with the signatures and public keys in the
chain omitted from the hash, so the durable public key and master secret of
the slave wallet does not need to be very durable at all. It can, and
probably should, have a short timeout and be frequently updated by messages
from the master wallet.</p><p>
The intent is that a slave wallet can arrange payments on behalf of an identity
whose secret it does not possess, and the payer can prove that he made the
payment to that identity. So if the government grabs the slave wallet, which
is located in a big internet data center, thus eminently grabbable, it grabs
nothing, not reputation, not an identity, and not money. The slave wallet has
the short lived and regularly changed secret for the public key of an
unchanging identity authorized to make offers on behalf of a long lived
identity, but not the secret for an identity authorized to receive money on
behalf of a long lived identity. The computer readable name of these
identities is a twofiftytwo bit hash, and the human readable name is something
like receivables@_@globally_unique_human_readable_name, or
sales@_@globally_unique_human_readable_name. The master secret for
_@globally_unique_human_readable_name is closely held, and the master secret
for globally_unique_human_readable_name written in pencil on a closely held
piece of paper and is seldom in any computer at all.</p><p>
For a name that is not globally unique, the human readable name is
non_unique_human_readable_nickname zero_or_more_whitespace
42_characters_of_slash6_code zero_or_more_whitespace. </p><p>
Supposing Carol wants to talk to Dave, and the base point is B. Carol's secret is the scalar <code>c</code>, and her public key is elliptic point <code>C=c*B</code>. Similarly Dave's secret is the scalar <code>d</code>, and his public key is elliptic point
<code>D=d*B</code>.</p><p>
His secret scalar <code>d</code> is probably derived from the hash of his master secret with his nickname, which we presume is "Dave".</p><p>
They could establish a shared secret by Carol calculating <code>c*D</code>, and
Dave calculating <code>d*C</code>. But if either party's key is seized, their
past communications become decryptable. Plus this unnecessarily leaks metadata
to people watching the interaction.</p><p>
Better, Carol generates a random scalar <code>r</code>, unpredictable to anyone
else, calculates <code>R=r*B</code>, and sends Dave <code>R, C</code> encrypted
using <code>r*D,</code> and the rest of the message encrypted using <code>
(r+c)*D</code>.</p><p>
This gives us perfect forward secrecy and off-the-record.</p><p>
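To see that both sides arrive at the same key, expand in terms of the base point: <code>(r+c)*D = (r+c)*d*B = d*(r*B + c*B) = d*(R+C)</code>.</p><p>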
One flaw in this is that if Dave's secret leaks, not only can he and the people communicating with him be spied upon, but he can receive falsified messages that appear to come from trusted associates.</p><p>
One fix for this is the following protocol. When Carol initiates communication
with Dave, she encrypts only with the transient public and private keys, but
when Dave replies, he encrypts with another transient key. If Carol can
receive his reply, it really is Carol. This does not work with one way
messages, mail like messages, but every message that is a reply should
automatically contain a reference to the message that it replies to, and every
message should return an ack, so one way communication is a limited case.</p><p>
Which case can be eliminated by handling one way messages as followed up by
mutual acks. For TCP, the minimum set up and tear down of a TCP connection
needs three messages to set up a connection, and three messages to shut it
down, so if we could multiplex encryption setup into communication setup, which
we should, we could bullet proof authentication. To leak the least metadata,
want to establish keys as early in the process as we can while avoiding the
possibility of DNS attacks against the connection setup process. If
communication only ensues when both sides know four shared secrets, then the
communication can only be forged or intercepted by a party that knows both
sides durable secrets.</p><p>
We assume one public key per network address and port, so that when she got the
port associated with the public key, she learned which of his possibly numerous
public keys will be listening on that port.</p><p>
Carol calculates <code>(c+r)*D</code>. Dave calculates <code>d*(C+R)</code>, to arrive at a shared secret, used only once.</p><p>
That is assuming Carol needs to identify. If this is just a random anonymous client connection, in which Dave responds the same way to every client, all she needs to send is <code>R</code> and <code>D</code>. She does not need forward secrecy, since not re-using <code>R</code>. But chances are Dave does not like this, since chances are he wants to generate a record of the distinct entities applying, so a better solution is for Carol to use as her public key when talking to Dave <code>hash(c, D)*B</code></p><p>
Sometimes we are going to onion routing, where the general form of an onion routed message is [<code>R, D,</code> encrypted] with the encryption key being <code>d*R = r*D</code>. And inside that encryption may be another message of the same form, another layer of the onion.</p><p>
But this means that Dave cannot contact Carol. But maybe Carol does not want to be contacted, in which case we need a flag to indicate that this is an uncontactable public key. Conversely, if she does want to be contactable, but her network address changes erratically, she may need to include contact info, a stable server that holds her most recent network address.</p><p>
Likely all she needs is that Dave knows that she is the same entity as logged in with Dave before. (Perhaps because all he cares about is that this is the same entity that paid Dave for services to be rendered, whom he expects to keep on paying him for services he continues to render.) </p><p>
The common practical situation, of course, is that Carol@Name wants to talk to Dave@AnotherName, and Name knows the address of AnotherName, and AnotherName knows the name of Dave, and Carol wants to talk to Dave as Carol@Name. To make this secure, we have to have a commitment to the keys of Carol@Name and Dave@AnotherName, so that Dave and Carol can see that Name and AnotherName are showing the same keys to everyone.</p><p>
The situation we would like to have is a web of public documents rendered immutable by being linked into the blockchain, identifying keys, and a method of contacting the entity holding the key, the method of contacting the identity holding the key being widely available public data.</p><p>
In this case, Carol may well have a public key only for use with Name, and Dave a public key only for use with AnotherName.
If using <code>d*C</code> as the identifier, inadvisable to use a widely known <code>C</code>, because this opens attacks via <code>d</code>, so <code>C</code> should be a per host public key that Carol keeps secret.</p><p>
The trouble with all these conversations is that they leak metadata: it is visible to third parties that <code>C</code> is talking to <code>D</code>.</p><p>
Suppose it is obvious that you are talking to Dave, because you had to look up Dave's network address, but you do not want third parties to know that <em>Carol</em> is talking to Dave. Assume that Dave's public key is the only public key associated with this network address.</p><p>
We would rather not make it too easy for the authorities to see the public key of the entity you are contacting, so would like to have D end to end encrypted in the message. You are probably contacting the target through a rendezvous server, so you contact the server on its encrypted key, and it sets up the rendezvous talking to the wallet you want to contact on its encrypted key, in which case the messages you send in the clear do not need, and should not have, the public key of the target.</p><p>
Carol sends <code>R</code> to Dave in the clear, and encrypts <code>C</code> and <code>r</code>*<code>D</code>, using the shared secret key <code>r</code>*<code>D</code> = <code>d</code>*<code>R</code></p><p>
Subsequent communications take place on the shared secret key <code>(c+r)*D = d*(C+R)</code></p><p>
Suppose there are potentially many public keys associated with this network address, as is likely to be the case if it is a slave wallet performing POP and IMAP like functions. Then it has one primary public key for initiating communications. Call its low value primary keys p and P. Then Carol sends <code>R</code> in the clear, followed by <code>D</code> encrypted to <code>r*P = p*R</code>, followed by <code>C</code> and <code>r*D</code>, encrypted to <code>r*D = d*R</code>.</p><p>
We can do this recursively, and Dave can return another, possibly transient, public key or public key chain that works like a network address. This implies a messaging system whose api is that you send an arbitrary length message with a public key as address, and possibly a public key as authority, and an event identifier, and then eventually you get an event call back, which may indicate merely that the message has gone through, or not, and may contain a reply message. The messaging system handles the path and the cached shared secrets. All shared secrets are in user space, and messages with different source authentication have different connections and different shared secrets.</p><p>
Bitcoin just generates a key at random when you need one, and records it. A paper wallet is apt to generate a stack of them in advance. They offer the option of encrypting the keys with a lesser secret, but the master secret has to be stronger, because it has to be strong enough to resist attacks on the public keys.</p><p>
On reflection, there are situations where this is difficult: we would like the customer, rather than the recipient, to generate the obfuscation keys, and pay the money to a public key that consists of the recipient's deeply held secret, and a not very deeply held obfuscation shared secret.</p><p>
This capability is redundant, because we also plan to attach to the transaction a Merkle hash that is the root of a tree of Merkle hashes indexed by output, in Merkle-patricia order, each of which may be a random number or may identify arbitrary information, which could include any signed statements as to what obligations the recipient of the payment has agreed to, plus information identifying the recipient of the payment, in that the signing key of the output is s*B+recipient_public_key. But it would be nice to make s derived from the hash of the recipient's offer, since this proves that when he spends the output, he spends value connected to obligations.</p><p>
Such receipts are likely generated by a slave wallet, which may well do things differently to a master wallet. So let us just think about the master wallet at this point. We want the minimum capability to do only those things a master wallet can do, while still being future compatible with all the other things we want to do.</p><p>
When we generate a public key and use it, want to generate records of to how it was generated, why, and how it was used. But this is additional tables, and an identifier in the table of public keys saying which additional table to look at.</p><p>
One meaning per public key. If we use the public key of a name for many purposes, use it as a name. We dont use it as a transaction key except for transactions selling, buying, or assigning that name. But we could have several possible generation methods for a single meaning.</p><p>
And, likely, the generation method depends on the meaning. So, in the table of public keys we have a single small integer telling us the kind of thing the public key is used for, and if you look up that table indicated by the integer, maybe all the keys in that table are generated by one method, or maybe several methods, and we have a single small integer identifying the method, and we look up the key in that table which describes how to generate the key by that method.</p><p>
Initially we have only use. Names. And the table for names can have only one method, hence does not need an index identifying the method.</p><p>
So, a table of public key private key pairs, with a small integer identifying the use and whether the private key has been zeroed out, and a table of names, with the rowid of the public key. Later we introduce more uses, more possible values for the use key, more tables for the uses, and more tables for the generation methods.</p><p>
A wallet is the data of a Zooko identity. When logged onto the internet it <em>is</em> a Zooko triangle identity or Zooko's quadrangle identity, but is capable of logging onto the internet as a single short user identity term, or logging on with one peer identity per host. When that wallet is logged in, it is a Zooko identity with its master secret and an unlimited pile of secrets and keys generated on demand from a single master secret.</p><p>
We will want to implement the capability to receive value into a key that is not currently available. So how are we going to handle the identity that the wallet will present?</p><p>
Well, keys that have a time limited signature that they can be used for certain purposes in the name of another key are a different usage, that will have other tables, that we can add later.</p><p>
A receive only wallet, when engaging in a transaction, identifies itself as a Zooko identity for which it does not have the master key, merely a signed authorization allowing it to operate for that public key for some limited lifetime, and when receiving value, requests that value be placed in public keys that it does not currently possess, in the form of a scalar and the public key that will receive the value. The public key of the transaction output will be s*B+<code>D</code>, where s is the obfuscating value that hides the beneficiary from the public blockchain, B is the base, and <code>D</code> is the beneficiary. B is public, s*B+<code>D</code> is public, but s and <code>D</code> are known only to the parties to the transaction, unless they make them public.</p><p>
And that is another usage, and another rule for generating secrets, which we can get around to another day.</p><p>
The implementation can and will vary from one peer to the next. The canonical form is going to be a Merkle-patricia dag, but the Merkle-patricia dag has to be implemented on top of something, and our first implementation is going to implement the Merkle-patricia dag on top of a particular sqlite database with a particular name, that name stored in a location global to all programs of a particular user on a particular machine, and thus that database global to all programs of that particular user on that particular machine.</p><p>
Then we will implement a global consensus Merkle-patricia dag, so that everyone agrees on the one true mapping between human readable names and keys, but the global consensus dag has to exist on top of particular implementations run by particular users on particular machines, whose implementation may well vary from one machine to the next.</p>
<p>A patricia tree contains, in itself, just a pile of bitstreams. To represent actual information, there has to be a schema. It has to be a representation of something equivalent to a pile of records in a pile of database tables. And so, before we represent data, before we create the patricia tree, we have to first have a particular working database, a particular implementation, which represents actual data. Particular database first; then we construct the universal canonical representation of that data second.</p>
<p>In every release version, starting with the very first release, we are going to have to install a database, if one is not already present, in order that the user can communicate with other users, so we will have no automagic creation of the database. I will manually create and install the first database, and will have a dump file for doing so, assuming that the dump file can create blobs.</p>
<p>A receive only wallet contains the public key and zooko name of its master wallet, optionally a thirty two byte secret, and a numbered record, signed by the master wallet, authorizing it to use that name and receive value on the master wallet secret and its own public key.</p>
<p>A zooko quadrangle identifier consists of a public master key, a human readable name globally accepted as identifying that key, a local human readable petname, normally identical to the global name, and an owner selected human readable nickname.</p>
<p>A contact consists of a public key, and network address information where you will likely find a person or program that knows the corresponding private key, or is authorized by that private key to deal with communications.</p>
<p>So, the first thing we have to do is create a wxWidgets program that accesses the database, and have it create Zooko's triangle identities, with the potential for becoming Zooko's quadrangle identities.</p>
<h2>Proving knowledge of the secret key of a public key</h2><p>
Our protocol, unlike bitcoin, has proof of transaction. The proof is private, but can be made public, to support ebay type reputations.</p>
<p>Supposing you are sending value to someone who has a big important reputation that he would not want damaged. And you want proof he got the value, and that he agreed to goods, services, or payment of some other form of money in return for the value.</p>
<p>Well, if he has a big important reputation, chances are you are not transacting with him personally and individually through a computer located behind a locked door in his basement, to which only he has the key, plus he has a shotgun on the wall near where he keeps the key. Rather, you are transacting with him through a computer somewhere in the cloud, in a big computing center to which far too many people have access, including the computer center management, police, the Russian mafia, the Russian spy agency, and the man who mops the floors.</p>
<p>So, when "he", which is to say the computer in the cloud, sends you public keys for you to put value into on the blockchain, you want to be sure that only he can control the value you put into the blockchain. And you want to be able to prove it, but you do not want anyone except you and him (and maybe everyone who has access to the data center in the cloud) to be able to prove it unless you make the data about your transaction public.</p>
<p>You are interacting with a program. And the program probably only has low value keys. It has a key signed by a key signed by a key signed by his high value and closely held key, with start times and timeouts on all of the intermediate keys.</p><p>
So he should have a key that is not located in the datacenter, and you want proof that only the holder of that key can spend the money: perhaps an intermediate key with a timeout on it that authorizes it to receive money, which signs an end key that authorizes the program to agree to deals, but not to receive money; the intermediate key authorized to receive money is presumably not in the highly insecure data center.</p>
<p style="background-color : #ccffcc; font-size:80%">This document is licensed under the <a rel="license" href="http://creativecommons.org/licenses/by-sa/3.0/">CreativeCommons Attribution-Share Alike 3.0 License</a></p>
</body>
</html>

View File

@ -0,0 +1,280 @@
---
lang: en
title: Wallet Implementation
---
The primary function of the wallet file is to provide a secret key for
a public key, though eventually we stuff all sorts of user odds and ends
into it.
In Bitcoin, this is simply a pile of random secrets, randomly generated, but obviously it is frequently useful to have them not random, but merely seemingly random to outsiders. One important and valuable application of this is the paper wallet, where one can recreate the wallet from its master secret, because all the seemingly random secret keys are derived from a single master secret. But this does not cover all use cases.
We care very much about the case where a big important peer sits in a big
central node of the internet, and the owner skips out with the paper key
that owns the peer's reputation in his pocket, and sets up the same peer in
another jurisdiction, and everyone else talks to the peer in the new
jurisdiction, and ignores everything the government seized.
The primary design principle of our wallet is to bypass the DNS and ICANN, so
that people can arrange transactions securely. The big security hole in
Bitcoin is not the one that Monero fixes, but that you generally arrange
transactions on the totally insecure internet. The spies care about the
metadata more than they care about the data. We need to internalize and secure
the metadata.
We particularly want peers to be controlled by easily hidden and transported
secrets. The key design point around which all else is arranged is that if
some man owns a big peer on the blockchain, and the cops descend upon it, he
grabs a book off his bookshelf that looks like any airport book, except that
written in pencil in the margins of one page are his master secret and a
list of all his nicknames. Then he skips off to the airport and sets up his
peer on a new host, and no one notices that it has moved except for the cops
who grabbed his now useless hardware. (Because the secrets on that hardware
were only valuable because their public keys were signed by his keys, and
when he starts up the new hardware, his keys will sign some new keys on the
peer and on the slave wallet on that hardware.)
We also want perfect forward secrecy, which is easy. Each new connection
initiation starts with a random transient public key, and any identifying
information is encrypted in the shared secret formed from that transient public
key, and the durable public key of the recipient.
We also want off the record messaging. So we want to prove to the recipient
that the sender knows the shared secret formed from the recipient's public key
and the sender's transient private key, and also the shared secret formed from
the recipient's public key, the sender's transient private key, and the
sender's durable private key. But the recipient, though he then knows the
message came from someone who has the sender's durable private key, cannot
prove that to a third party, because he could have constructed that shared
secret, even if the sender did not know it. The recipient could have forged
the message and sent it to himself.
Thus we get off the record messaging. The sender, by proving knowledge of two
shared secrets, proves to the recipient knowledge of two secret keys
corresponding to the two public keys provided, but though the sender proves
it to the recipient, the recipient cannot prove it to anyone else.
The message consists of the transient public key in the clear, which encrypts
the durable public key. Then the durable public key plus the transient public
key encrypts the rest of the message, which encryption proves knowledge of
the secret key underlying the durable public key. It proves it to the
recipient, but he cannot prove it to anyone else.
The durable public key may be followed by the schema identifier 2, which
implies that the nickname follows, and that the 252 bit hash of the public
key followed by the nickname is the global identifier of the pseudonym
sending the message. But we could have a different schema, 3, corresponding
to a chain of signatures authenticating that public key, subject to timeouts
and indications of authority, which we will use in slave wallets and in
identifiers that correspond to a corporate role in a large organization, or
in applications where new keys are frequently issued.
If we have a chain of signatures, the hash is of the root public key and the
data being signed (names, roles, times, and authorities) but not the signatures
themselves, nor the public keys that are being connected to that data.
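To make the hashing rule concrete, here is a minimal sketch in Python; the
schema byte, the record encoding, and the choice of hash are illustrative
assumptions, not a specification:

```python
import hashlib

# Hypothetical sketch: the id hashes the schema byte, the root public key,
# and the data being signed (names, roles, times, authorities), but not the
# signatures or the intermediate public keys, so those can be reissued
# without changing the identity.
def chain_identity(root_public_key: bytes, signed_records: list) -> bytes:
    h = hashlib.blake2b(digest_size=32)      # 256 bits; the 252 bit id
    h.update(bytes([3]) + root_public_key)   # described above would truncate
    for record in signed_records:            # each record is the signed data,
        h.update(record)                     # not the signature over it
    return h.digest()
```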
When someone initiates a connection using such a complex identity, he sends
proof of shared knowledge of two shared secrets, and possibly a chain of
signatures, but does not sign anything with the transient public key, nor the
durable public key at the end of the chain of signatures, unless, as with a
review, he wants the message to be shared around, in which case he signs the
portion of the message that is to be shared around with that public key, but
not the part of the message that is private between sender and receiver.
To identify the globally unique 252 bit id, the recipient hashes the
public key and the identifier that follows. Or he may already know, because
the sender has a client relationship with the recipient, that the public key
is associated with a given nickname and globally unique identifier.
If we send the schema identifier 0, we are not sending id information, either
because the recipient already has it, or because we don't care, or because we
are only using this durable public key with this server and do not want a
global identity that can be shared between different recipients.
So, each master wallet will contain a strong human readable and human writeable
master secret, and the private key of each nickname will be generated by
hashing the nickname with the master secret, so that when, in some new
location, he types in the code on fresh hardware and blank software, he will
get the nickname's public and private keys unchanged, even if he has to buy or
rent all fresh hardware, and is not carrying so much as a thumbdrive with him.
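As a concrete illustration, a minimal sketch of that derivation; the hash
choice and the separator are assumptions, not the actual key derivation
scheme:

```python
import hashlib

# Sketch: derive a nickname's secret key by hashing the master secret with
# the nickname.  A real implementation would also reduce the result into
# the curve's scalar field.
def nickname_secret(master_secret: str, nickname: str) -> int:
    h = hashlib.sha512((master_secret + "|" + nickname).encode())
    return int.from_bytes(h.digest(), "little")

# Typing the same master secret on fresh hardware reproduces the same key:
assert nickname_secret("pencilled margin secret", "Dave") \
    == nickname_secret("pencilled margin secret", "Dave")
```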
People with high value secrets will likely use slave wallets, which perform
functions analogous to a POP or IMAP server, with high value secrets on their
master wallet, which chats to their slave wallet, which does not have high
value secrets. It has a signed message from the master wallet authorizing it
to receive value in the master wallet's name, and another separately signed
message containing the obfuscation shared secret, which is based on the
master wallet secret and an integer, the first such integer being the number
of the largest transaction in that name known to the master wallet, an
integer that the slave wallet increments with every request for value. The
master wallet needs to provide a name for the slave wallet, and to recover
payments, needs that name. The payments are made to a public key of the
master wallet, combined with a pseudo random scalar constructed from the
hash of a sequential integer with an obfuscation secret supplied by the
master wallet, so that the master wallet can recover the payments without
further communication, and so that the person making the payment knows that
any payment can only be respent by someone who has the master wallet secret
key, which may itself be the key at the end of a chain of signatures
identity, which the master wallet possesses, but the slave wallet does not.
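A toy sketch of those payment keys, using a multiplicative group mod a prime,
written additively, as a stand-in for the elliptic curve group, and the
s\*B+`D` output construction described below; all names and encodings are
assumptions:

```python
import hashlib

MOD = 2**255 - 19                   # illustrative prime; real code uses a curve
B = 5                               # stand-in for the curve base point
def scalar_mult(s, P): return pow(P, s, MOD)   # plays the role of s*P
def point_add(P, Q): return (P * Q) % MOD      # plays the role of P+Q

def obfuscation_scalar(obf_secret: bytes, i: int) -> int:
    # pseudo random scalar from the hash of a sequential integer with the
    # obfuscation secret supplied by the master wallet
    return int.from_bytes(
        hashlib.sha512(obf_secret + i.to_bytes(8, "little")).digest(), "little")

d = 1234567890                      # master wallet's closely held secret
D = scalar_mult(d, B)               # master wallet's public key
obf = b"separately signed obfuscation secret"   # known to the slave wallet

i = 42                              # the slave wallet's running counter
s = obfuscation_scalar(obf, i)
output_key = point_add(scalar_mult(s, B), D)    # s*B + D, without knowing d

# The master wallet recovers the spending secret s + d on its own,
# since (s + d)*B == s*B + D:
assert scalar_mult(s + d, B) == output_key
```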
For slave wallets to exist, the globally unique id has to be not a public
key, but rather the 252 bit hash of a rule identifying the public key.
The simplest case is hash(2\|public key\|name), which is the identifier used
by a master wallet. A slave wallet would use the identifier hash(3\|master
public key\|chain of signed ids), with the signatures and public keys in the
chain omitted from the hash, so the durable public key and master secret of
the slave wallet do not need to be very durable at all. They can, and
probably should, have a short timeout and be frequently updated by messages
from the master wallet.
The intent is that a slave wallet can arrange payments on behalf of an identity
whose secret it does not possess, and the payer can prove that he made the
payment to that identity. So if the government grabs the slave wallet, which
is located in a big internet data center, thus eminently grabbable, it grabs
nothing: not reputation, not an identity, and not money. The slave wallet has
the short lived and regularly changed secret for the public key of an
unchanging identity authorized to make offers on behalf of a long lived
identity, but not the secret for an identity authorized to receive money on
behalf of a long lived identity. The computer readable name of these
identities is a 252 bit hash, and the human readable name is something
like receivables@\_@globally_unique_human_readable_name, or
sales@\_@globally_unique_human_readable_name. The master secret for
\_@globally_unique_human_readable_name is closely held, and the master secret
for globally_unique_human_readable_name is written in pencil on a closely held
piece of paper and is seldom in any computer at all.
For a name that is not globally unique, the human readable name is
non_unique_human_readable_nickname zero_or_more_whitespace
42_characters_of_slash6_code zero_or_more_whitespace.
Supposing Carol wants to talk to Dave, and the base point is B. Carol's secret is the scalar `c`, and her public key is elliptic point `C=c*B`. Similarly Dave's secret is the scalar `d`, and his public key is elliptic point
`D=d*B`.
His secret scalar `d` is probably derived from the hash of his master secret with his nickname, which we presume is "Dave".
They could establish a shared secret by Carol calculating `c*D`, and
Dave calculating `d*C`. But if either party's key is seized, their
past communications become decryptable. Plus this unnecessarily leaks metadata
to people watching the interaction.
Better, Carol generates a random scalar `r`, unpredictable to anyone
else, calculates `R=r*B`, and sends Dave `R, C` encrypted
using `r*D`, and the rest of the message encrypted using `(r+c)*D`.
This gives us perfect forward secrecy and off-the-record messaging.
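A sketch of those key derivations in a toy group (a multiplicative group mod
a prime, standing in for the curve); the encryption itself is omitted, we
only check that both ends derive the same two shared secrets:

```python
import secrets

MOD = 2**255 - 19                   # illustrative prime; real code uses a curve
B = 5                               # stand-in for the base point
def scalar_mult(s, P): return pow(P, s, MOD)   # plays the role of s*P
def point_add(P, Q): return (P * Q) % MOD      # plays the role of P+Q

c = secrets.randbelow(MOD); C = scalar_mult(c, B)   # Carol's durable pair
d = secrets.randbelow(MOD); D = scalar_mult(d, B)   # Dave's durable pair
r = secrets.randbelow(MOD); R = scalar_mult(r, B)   # Carol's transient pair

carol_k1 = scalar_mult(r, D)        # encrypts C; proves only the transient key
carol_k2 = scalar_mult(r + c, D)    # encrypts the body; proves knowledge of c

dave_k1 = scalar_mult(d, R)         # Dave derives both without learning r or c
dave_k2 = scalar_mult(d, point_add(C, R))

assert (carol_k1, carol_k2) == (dave_k1, dave_k2)
```

Dave, of course, could have computed both secrets himself, which is exactly why the proof is off the record.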
One flaw in this is that if Dave's secret leaks, not only can he and the people communicating with him be spied upon, but he can receive falsified messages that appear to come from trusted associates.
One fix for this is the following protocol. When Carol initiates communication
with Dave, she encrypts only with the transient public and private keys, but
when Dave replies, he encrypts with another transient key. If "Carol" can
receive his reply, it really is Carol. This does not work with one way
messages, mail like messages, but every message that is a reply should
automatically contain a reference to the message that it replies to, and every
message should return an ack, so one way communication is a limited case.
That case can be eliminated by handling one way messages as followed up by
mutual acks. The minimum set up and tear down of a TCP connection needs
three messages to set up, and three messages to shut down, so if we could
multiplex encryption setup into communication setup, which we should, we
could bullet proof authentication. To leak the least metadata, we want to
establish keys as early in the process as we can while avoiding the
possibility of DNS attacks against the connection setup process. If
communication only ensues when both sides know four shared secrets, then the
communication can only be forged or intercepted by a party that knows both
sides' durable secrets.
We assume one public key per network address and port, so that when she got
the port associated with the public key, she learned which of his possibly
numerous public keys will be listening on that port.
Carol calculates `(c+r)*D`. Dave calculates `d*(C+R)`, to arrive at a shared secret, used only once.
That is assuming Carol needs to identify herself. If this is just a random anonymous client connection, in which Dave responds the same way to every client, all she needs to send is `R` and `D`. She does not need forward secrecy, since she is not re-using `R`. But chances are Dave does not like this, since chances are he wants to generate a record of the distinct entities applying, so a better solution is for Carol to use `hash(c, D)*B` as her public key when talking to Dave.
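A sketch of that per-recipient key, under the same toy group assumptions as
the earlier sketches; the hash-to-scalar encoding is an assumption:

```python
import hashlib

MOD = 2**255 - 19                   # illustrative prime; real code uses a curve
B = 5
def scalar_mult(s, P): return pow(P, s, MOD)

def per_recipient_secret(c: int, D: int) -> int:
    # hash(c, D): deterministic, so Dave always sees the same Carol, but
    # unlinkable to the keys Carol shows anyone else
    h = hashlib.sha512(c.to_bytes(32, "little") + D.to_bytes(32, "little"))
    return int.from_bytes(h.digest(), "little")

c = 987654321                       # Carol's master-derived secret
D = scalar_mult(1234567, B)         # Dave's public key
C_for_dave = scalar_mult(per_recipient_secret(c, D), B)   # hash(c, D)*B
```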
Sometimes we are going to want onion routing, where the general form of an onion routed message is \[`R, D,` encrypted\] with the encryption key being `d*R = r*D`. And inside that encryption may be another message of the same form, another layer of the onion.
But this means that Dave cannot contact Carol. But maybe Carol does not want to be contacted, in which case we need a flag to indicate that this is an uncontactable public key. Conversely, if she does want to be contactable, but her network address changes erratically, she may need to include contact info, a stable server that holds her most recent network address.
Likely all she needs is that Dave knows that she is the same entity that logged in with Dave before. (Perhaps because all he cares about is that this is the same entity that paid Dave for services to be rendered, whom he expects to keep on paying him for services he continues to render.)
The common practical situation, of course, is that Carol@Name wants to talk to Dave@AnotherName, and Name knows the address of AnotherName, and AnotherName knows the name of Dave, and Carol wants to talk to Dave as Carol@Name. To make this secure, we have to have a commitment to the keys of Carol@Name and Dave@AnotherName, so that Dave and Carol can see that Name and AnotherName are showing the same keys to everyone.
The situation we would like to have is a web of public documents rendered immutable by being linked into the blockchain, identifying keys, and a method of contacting the entity holding the key, the method of contacting the identity holding the key being widely available public data.
In this case, Carol may well have a public key only for use with Name, and Dave a public key only for use with AnotherName.
If using `d*C` as the identifier, it is inadvisable to use a widely known `C`, because this opens attacks via `d`, so `C` should be a per host public key that Carol keeps secret.
The trouble with all these conversations is that they leak metadata: it is visible to third parties that `C` is talking to `D`.
Suppose it is obvious that you are talking to Dave, because you had to look up Dave's network address, but you do not want third parties to know that *Carol* is talking to Dave. Assume that Dave's public key is the only public key associated with this network address.
We would rather not make it too easy for the authorities to see the public key of the entity you are contacting, so we would like to have D end to end encrypted in the message. You are probably contacting the target through a rendezvous server, so you contact the server on its encrypted key, and it sets up the rendezvous, talking to the wallet you want to contact on its encrypted key, in which case the messages you send in the clear do not need, and should not have, the public key of the target.
Carol sends `R` to Dave in the clear, and encrypts `C` and `r*D`, using the shared secret key `r*D = d*R`.
Subsequent communications take place on the shared secret key `(c+r)*D = d*(C+R)`
Suppose there are potentially many public keys associated with this network address, as is likely to be the case if it is a slave wallet performing POP and IMAP like functions. Then it has one primary public key for initiating communications. Call its low value primary key pair p and P. Then Carol sends `R` in the clear, followed by `D` encrypted to `r*P = p*R`, followed by `C` and `r*D`, encrypted to `r*D = d*R`.
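A sketch of that layering in the toy group; only the key derivations are
shown, and the names are assumptions:

```python
import secrets

MOD = 2**255 - 19                   # toy group, as in the sketches above
B = 5
def scalar_mult(s, X): return pow(X, s, MOD)

p = secrets.randbelow(MOD); P = scalar_mult(p, B)   # host's low value primary pair
d = secrets.randbelow(MOD); D = scalar_mult(d, B)   # one of many durable keys
r = secrets.randbelow(MOD); R = scalar_mult(r, B)   # Carol's transient key

k_outer = scalar_mult(r, P)   # encrypts D, hiding which durable key is wanted
k_inner = scalar_mult(r, D)   # encrypts C and the proof of knowledge

assert k_outer == scalar_mult(p, R)   # the host unwraps the outer layer with p
assert k_inner == scalar_mult(d, R)   # the key's owner unwraps the inner layer
```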
We can do this recursively, and Dave can return another, possibly transient, public key or public key chain that works like a network address. This implies a messaging system whose API is that you send an arbitrary length message with a public key as address, and possibly a public key as authority, and an event identifier, and then eventually you get an event call back, which may indicate merely that the message has gone through, or not, and may contain a reply message. The messaging system handles the path and the cached shared secrets. All shared secrets are in user space, and messages with different source authentication have different connections and different shared secrets.
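One possible shape for that API, purely illustrative; every name and
signature here is hypothetical:

```python
from typing import Callable, Optional

# Hypothetical sketch: address by public key, optional authority key,
# completion and any reply reported through an event callback.
Event = Callable[[bool, Optional[bytes]], None]    # (delivered, reply)

def send(address_key: bytes, message: bytes,
         authority_key: Optional[bytes] = None,
         on_event: Event = lambda delivered, reply: None) -> None:
    """Queue message for the wallet addressed by address_key; the messaging
    layer chooses the path and caches the shared secrets, then fires
    on_event with the outcome and any reply."""
    raise NotImplementedError     # transport sketch only
```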
Bitcoin just generates a key at random when you need one, and records it. A paper wallet is apt to generate a stack of them in advance. They offer the option of encrypting the keys with a lesser secret, but the master secret has to be stronger, because it has to be strong enough to resist attacks on the public keys.
On reflection, there are situations where this is difficult: we would like the customer, rather than the recipient, to generate the obfuscation keys, and pay the money to a public key that consists of the recipient's deeply held secret and a not very deeply held obfuscation shared secret.
This capability is redundant, because we also plan to attach to the transaction a Merkle hash that is the root of a tree of Merkle hashes indexed by output, in Merkle-patricia order, each of which may be a random number or may identify arbitrary information, which could include any signed statements as to what obligations the recipient of the payment has agreed to, plus information identifying the recipient of the payment, in that the signing key of the output is s\*B+recipient_public_key. But it would be nice to make s derived from the hash of the recipient's offer, since this proves that when he spends the output, he spends value connected to obligations.
Such receipts are likely generated by a slave wallet, which may well do things differently to a master wallet. So let us just think about the master wallet at this point. We want the minimum capability to do only those things a master wallet can do, while still being future compatible with all the other things we want to do.
When we generate a public key and use it, we want to generate records of how it was generated, why, and how it was used. But this means additional tables, and an identifier in the table of public keys saying which additional table to look at.
One meaning per public key. If a public key is the key of a name, we use it only as a name: we don't use it as a transaction key except for transactions selling, buying, or assigning that name. But we could have several possible generation methods for a single meaning.
And, likely, the generation method depends on the meaning. So, in the table of public keys we have a single small integer telling us the kind of thing the public key is used for; if you look up the table indicated by that integer, maybe all the keys in that table are generated by one method, or maybe by several, in which case we have a single small integer identifying the method, and we look up the key in the table that describes how to generate keys by that method.
Initially we have only one use: names. And the table for names can have only one method, hence does not need an index identifying the method.
So, a table of public-key/private-key pairs, with a small integer identifying the use and whether the private key has been zeroed out, and a table of names, with the rowid of the public key. Later we introduce more uses, more possible values for the use key, more tables for the uses, and more tables for the generation methods.
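A minimal sketch of that starting schema, assuming sqlite; the table and
column names are invented for illustration, and zeroing out is modelled here
as a NULL private key:

```python
import sqlite3

con = sqlite3.connect("wallet.sqlite")
con.executescript("""
CREATE TABLE IF NOT EXISTS keys (
    id          INTEGER PRIMARY KEY,
    public_key  BLOB NOT NULL UNIQUE,
    private_key BLOB,              -- NULL once zeroed out
    use         INTEGER NOT NULL   -- 1 = name; more uses added later
);
CREATE TABLE IF NOT EXISTS names (
    name   TEXT PRIMARY KEY,
    key_id INTEGER NOT NULL REFERENCES keys(id)
);
""")
con.commit()
```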
A wallet is the data of a Zooko identity. When logged onto the internet it *is* a Zooko triangle identity or Zooko quadrangle identity, but it is capable of logging onto the internet as a single short user identity, or of logging on with one peer identity per host. When that wallet is logged in, it is a Zooko identity with its master secret and an unlimited pile of secrets and keys generated on demand from that single master secret.
We will want to implement the capability to receive value into a key that is not currently available. So how are we going to handle the identity that the wallet will present?
Well, keys that carry a time limited signature authorizing them to be used for certain purposes in the name of another key are a different usage, which will have other tables, which we can add later.
A receive only wallet, when engaging in a transaction, identifies itself as a Zooko identity for which it does not have the master key, merely a signed authorization allowing it to operate for that public key for some limited lifetime, and when receiving value, requests that value be placed in public keys that it does not currently possess, in the form of a scalar and the public key that will receive the value. The public key of the transaction output will be s\*B+`D`, where s is the obfuscating value that hides the beneficiary from the public blockchain, B is the base, and `D` is the beneficiary. B is public, s\*B+`D` is public, but s and `D` are known only to the parties to the transaction, unless they make them public.
And that is another usage, and another rule for generating secrets, which we can get around to another day.
The implementation can and will vary from one peer to the next. The canonical form is going to be a Merkle-patricia dag, but the Merkle-patricia dag has to be implemented on top of something, and our first implementation is going to implement the Merkle-patricia dag on top of a particular sqlite database with a particular name, that name stored in a location global to all programs of a particular user on a particular machine, and thus that database global to all programs of that particular user on that particular machine.
Then we will implement a global consensus Merkle-patricia dag, so that everyone agrees on the one true mapping between human readable names and keys, but the global consensus dag has to exist on top of particular implementations run by particular users on particular machines, whose implementation may well vary from one machine to the next.
A patricia tree contains, in itself, just a pile of bitstreams. To represent actual information, there has to be a schema. It has to be a representation of something equivalent to a pile of records in a pile of database tables. And so, before we represent data, before we create the patricia tree, we have to first have a particular working database, a particular implementation, which represents actual data. Particular database first; then we construct the universal canonical representation of that data second.
In every release version, starting with the very first release, we are going to have to install a database, if one is not already present, in order that the user can communicate with other users, so we will have no automagic creation of the database. I will manually create and install the first database, and will have a dump file for doing so, assuming that the dump file can create blobs.
A receive only wallet contains the public key and zooko name of its master wallet, optionally a thirty two byte secret, and a numbered record, signed by the master wallet, authorizing it to use that name and receive value on the master wallet secret and its own public key.
A zooko quadrangle identifier consists of a public master key, a human readable name globally accepted as identifying that key, a local human readable petname, normally identical to the global name, and an owner selected human readable nickname.
A contact consists of a public key, and network address information where you will likely find a person or program that knows the corresponding private key, or is authorized by that private key to deal with communications.
So, the first thing we have to do is create a wxWidgets program that accesses the database, and have it create Zooko's triangle identities, with the potential for becoming Zooko's quadrangle identities.
# Proving knowledge of the secret key of a public key
Our protocol, unlike bitcoin, has proof of transaction. The proof is private, but can be made public, to support ebay type reputations.
Supposing you are sending value to someone who has a big important reputation that he would not want damaged. And you want proof he got the value, and that he agreed to goods, services, or payment of some other form of money in return for the value.
Well, if he has a big important reputation, chances are you are not transacting with him personally and individually through a computer located behind a locked door in his basement, to which only he has the key, plus he has a shotgun on the wall near where he keeps the key. Rather, you are transacting with him through a computer somewhere in the cloud, in a big computing center to which far too many people have access, including the computer center management, police, the Russian mafia, the Russian spy agency, and the man who mops the floors.
So, when "he", which is to say the computer in the cloud, sends you public keys for you to put value into on the blockchain, you want to be sure that only he can control the value you put into the blockchain. And you want to be able to prove it, but you do not want anyone except you and him (and maybe everyone who has access to the data center in the cloud) to be able to prove it unless you make the data about your transaction public.
You are interacting with a program. And the program probably only has low value keys. It has a key signed by a key signed by a key signed by his high value and closely held key, with start times and timeouts on all of the intermediate keys.
So he should have a key that is not located in the datacenter, and you want proof that only the holder of that key can spend the money: perhaps an intermediate key with a timeout on it that authorizes it to receive money, which signs an end key that authorizes the program to agree to deals, but not to receive money; the intermediate key authorized to receive money is presumably not in the highly insecure data center.
This document is licensed under the [CreativeCommons Attribution-Share Alike 3.0 License](http://creativecommons.org/licenses/by-sa/3.0/){rel="license"}

View File

@ -94,6 +94,24 @@ Since markdown has no concept of a title, Pandoc expects to find the
title in a yaml inline, which is most conveniently put at the top, which
renders it somewhat legible as a title.
Thus the markdown version of this document starts with:
```markdown
---
title: >-
Writing and Editing Documentation
# katex
...
```
## Converting html source to markdown source
In bash
```bash
fn=foobar
git mv $fn.html $fn.md && cp $fn.md $fn.html && pandoc -s --to markdown-smart --eol=lf --wrap=preserve --verbose -o $fn.md $fn.html
```
## Math expressions and katex
@ -154,15 +172,19 @@ For it offends me to put unnecessary fat in html files.
### overly clever katex tricks
$$k \approx \frac{m\,l\!n(2)}{n}%uses\, to increase spacing, uses \! to merge letters, uses % for comments $$
$$k \approx\frac{m\>\ln(2)}{n}%uses\> for a marginally larger increase in spacing and uses \ln, the escape for the well known function ln $$
spacing control
: $$k \approx \frac{m\,l\!n(2)}{n}%uses\, to increase spacing, uses \! to merge letters, uses % for comments $$
$$k \approx\frac{m\>\ln(2)}{n}%uses\> for a marginally larger increase in spacing and uses \ln, the escape for the well known function ln $$
$$ \exp\bigg(\frac{a+bt}{x}\bigg)=\huge e^{\bigg(\frac{a+bt}{x}\bigg)}%use the escape for well known functions, use text size sets$$
size control
: $$ \exp\bigg(\frac{a+bt}{x}\bigg)=\huge e^{\bigg(\frac{a+bt}{x}\bigg)}%use the escape for well known functions, use text size sets$$
$$k\text{, the number of hashes} \approx \frac{m\ln(2)}{n}% \text{} for render as text$$
text within maths
: $$k\text{, the number of hashes,} \approx \frac{m\ln(2)}{n}% \text{} for render as text$$
$$\def\mydef#1{\frac{#1}{1+#1}} \mydef{\mydef{\mydef{\mydef{y}}}}%katex macro $$
katex macro used recursively
: $$\def\mydef#1{\frac{#1}{1+#1}} \mydef{\mydef{\mydef{\mydef{y}}}}%katex macro $$
## Tables
@ -385,10 +407,9 @@ You change a control point, the effect is entirely local, does not
propagate up and down the line.
If, however, you have a long move and a short move, your implied control
point is likely to be in a pathological location, in which case you have to
follow an S curve by a C curve, and manually calculate the first point of
the C to be in line with the last two points of the prior curve.
point is likely to be in a pathological location, so the last control point
of the long curve needs to be close to the starting point of the following
short move.
``` default
M point q point point t point t point ... t point
```
@ -434,7 +455,8 @@ choice of the initial control point and the position of the t points, but you ca
down the line, and changing any of the intermediate t points will change
the direction the curve takes through all subsequent t points,
sometimes pushing the curve into pathological territory where bezier
curves give unexpected and nasty results.
curves give unexpected and nasty results. Works OK if all your t curves
are of approximately similar length.
Scalable vector graphics are dimensionless, and the `<svg>` tag's
height, width, and ViewBox properties translate the dimensionless