---
# katex
title: >-
    Peer Socket
sidebar: false
notmine: false
...
::: myabstract
[abstract:]{.bigbold}
Most things follow the client server model,
so it makes sense to have a distinction between server sockets
and client sockets. But ultimately what we are doing is
passing messages between entities, and the revolutionary
and subversive technologies, bittorrent, bitcoin, and
bitmessage, are peer to peer, so it makes sense that all sockets,
however created, wind up with the same properties.
:::
# factoring
In order to pass messages, the socket has to know a whole lot of state. And
in order to handle messages, the entity handling the messages has to know a
whole lot of state. So a socket api is an answer to the question of how we
factor this big pile of state into two smaller piles of state.
Each big bundle of state represents a concurrent communicating process.
Some of the state of this concurrent communicating process is on one side
of our socket division, and some on the other. The
application knows the internals of some of the state, but the internals
of the socket state are opaque to it, while the socket knows the internals of the
socket state, but the internals of the application state are opaque to it.
The socket state machines think that they are passing messages of one class,
or a very small number of classes, to one big state machine, which messages
contain an opaque block of bytes that the application class serializes and
deserializes.
## layer responsibilities
The sockets layer just sends and receives arbitrary size blocks
of opaque bytes over the wire between two machines.
They can be sent with or without flow control
and with or without reliability,
but if the block is too big to fit in this connection's maximum
packet size, the without-flow-control and
without-reliability options are ignored. Flow control and reliability are
always applied to messages too big to fit in a packet.
The despatch layer parses out the in-reply-to and the
in-regards-to values from the opaque block of bytes and despatches them
to the appropriate application layer state machine, which parses out
the message type field, deserializes the message,
and despatches it to the appropriate fully typed event handler
of that state machine.
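As a concrete sketch of that split (the field names and the fixed-width layout are my assumptions, not a settled wire format), the despatch layer might peel a header like this off the opaque block, leaving the payload opaque until the application layer deserializes it:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <optional>
#include <vector>

// Hypothetical wire format: two 64-bit ids then a 32-bit message type,
// followed by the payload. Zero means "field absent".
struct MessageHeader {
    std::uint64_t in_regards_to; // 0 if absent
    std::uint64_t in_reply_to;   // 0 if absent
    std::uint32_t message_type;
};

// The despatch layer parses only the header; the rest of the block
// stays opaque bytes until the application layer deserializes it.
std::optional<MessageHeader> parse_header(const std::vector<std::uint8_t>& block) {
    if (block.size() < sizeof(MessageHeader)) return std::nullopt;
    MessageHeader h;
    std::memcpy(&h, block.data(), sizeof h);
    return h;
}
```

A real format would use an explicit serialization rather than `memcpy` of a struct, but the division of labour is the point: the socket layer never looks inside the payload.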
# Request reply

It is remarkable how much stuff can be done without
concurrent communicating processes. Nostr is entirely
implemented over request reply, except that a whole lot
of requests and replies have an integer representing state,
where the state likely winds up being a database rowid.
The following discussion also applies if the in-reply-to field
or in-regards-to field is associated with a database index
rather than an instance of a class living in memory, and might
well be handled by an instance of a class containing only a database index.
# Representing concurrent communicating processes
node.js represents them as continuations. Rust tokio represents them
as something like continuations. Go represents them as lightweight
threads, which are a far more natural and easier to use representation,
but under the hood they are something like continuations, and the abstraction
leaks a little. The abstraction leaks a little in the case where you have one
concurrent process on one machine communicating with another concurrent
process on another machine.
Well, in C++, we are going to make instances of a class that register
callbacks, and the callback is the event handler, which has an instance
of a class registered with it. Which in C++ is a pointer
to a method of an object, which has no end of syntax that no one
ever manages to get their head around.
So if `dog` is a method pointer with the argument `bark`, just say
`std::invoke(dog, bark)` and let the compiler figure out how
to do it. `bark` is, of course, the data supplied by the message,
and `dog` is the concurrent communicating process plus its registered
callback. And since the process is sequential, it knows the data
for the message that this is a reply to.
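A minimal sketch of the `std::invoke` idiom, with a hypothetical `Dog` class standing in for the concurrent process and its registered callback:

```cpp
#include <cassert>
#include <functional>
#include <string>

// Hypothetical concurrent process: a Dog object plus a pointer to one
// of its member functions is "dog", and the message data is "bark".
struct Dog {
    std::string heard;
    void on_message(const std::string& bark) { heard = bark; }
};

// std::invoke hides the member-pointer call syntax (obj.*pm)(args...).
void deliver(Dog& dog, void (Dog::*handler)(const std::string&),
             const std::string& bark) {
    std::invoke(handler, dog, bark);
}
```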
A message may contain an in-reply-to field and/or an in-regards-to field.
In general, the in-regards-to field identifies the state machine
on the server and the client, and remains unchanged for the life
of the state machines. Therefore its handler function remains unchanged,
though it may do different things depending
on the state of the state machine and depending on the type of the message.
If the message only has an in-regards-to field, then the callback function for it
will normally be registered for the life of the concurrent process (instance).
If it is an in-reply-to, the dispatch mechanism will unregister the handler when it
dispatches the message. If you are going to receive multiple messages in response
to a single message, then you create a new instance.
In C, one represents actions of concurrent processes by a
C function that takes a callback function, so in C++,
a member function that takes a member function callback
(warning, scary and counter intuitive syntax).
Pointers to member functions are a huge mess, with
one hundred workarounds, and the best workaround is to not use them.
People have a whole lot of ingenious ways to not use them, for example
a base class that passes its primary function call to one of many
derived classes. Which solution does not seem applicable to our
problem.
`std::invoke` is syntax sugar for calling weird and wonderful
callable things - it figures out the syntax for you at compile
time according to the type, and is strongly recommended, because
with the wide variety of C++ callable things, no one can stretch
their brain around the differing syntaxes.
The many, many, clever ways of not using member pointers
just do not cut it, for the return address on a message ultimately maps
to a function pointer, or something that is exactly equivalent to a function pointer.
Of course, we very frequently do not have any state, and you just
cannot have a member function pointer to a static function. One way around
this problem is just to have one concurrent process whose state just
does not change, one concurrent process that cheerfully handles
messages from an unlimited number of correspondents, all using the same
in-regards-to, which may well be a well known named number, the functional
equivalent of a static web page. It is a concurrent process,
like all the others, and has its own data like all the others, but its
data does not change when it responds to a message, so it never expects an
in-reply-to response, or if it does, creates a dynamic instance of another
type to handle that. Because it does not remember what messages it sent
out, the in-reply-to field is no use to it.
Or, possibly our concurrent process, which is static and stateless
in memory, nonetheless keeps state in the database, in which case
it looks up the in-reply-to field in the database to find
the context. But a database lookup can hang a thread,
and we do not want to stall network facing threads.
So we have a single database handling thread that sequentially handles a queue
of messages from the network facing threads driving network facing concurrent
processes. It drives database facing concurrent processes,
which dispatch their results into a queue that is handled by the
network facing threads that drive network facing concurrent
processes.
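A sketch of the queue that might sit between the network facing threads and the single database thread (the class and names are mine, not a settled design); the blocking `pop` only ever runs on the database thread, so network facing threads never stall on a database lookup:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Hypothetical queue between network facing threads and the single
// database handling thread.
template <typename T>
class MessageQueue {
    std::mutex m;
    std::condition_variable cv;
    std::queue<T> q;
public:
    // Called by network facing threads; never blocks for long.
    void push(T msg) {
        { std::scoped_lock lk(m); q.push(std::move(msg)); }
        cv.notify_one();
    }
    // Called only by the database thread; blocks until work arrives.
    T pop() {
        std::unique_lock lk(m);
        cv.wait(lk, [&] { return !q.empty(); });
        T msg = std::move(q.front());
        q.pop();
        return msg;
    }
};
```

The database thread would drain this queue sequentially, and push its results into a second queue of the same type flowing the other way.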
So: a single thread that handles the network card, despatching
messages out from a queue in memory and in to a queue in memory, which does not
usually or routinely do memory allocation or release, and handles messages itself
only if they are standard, common, and known to be capable of being quickly handled;
a single thread that handles concurrent systems that are purely
memory to memory, but could involve dynamic allocation of memory;
and a single thread that handles concurrent state machines that do database
lookups and writes and possibly dynamic memory allocation, but do not
directly interact with the network, handing that task over to concurrent
state machines in the networking thread.
So a message comes in through the wire, where it is handled
by a concurrent process, probably a state machine with per connection
state, though it might have substates, child concurrent processes,
for reassembling one multipart message without hanging the next.
It then passes that message to a state machine in the application
layer: it is queued up in the queue for the thread or threads appropriate
to its destination concurrent process, and it receives messages from those threads,
which it then despatches to the wire.
A concurrent process is of course created by another
concurrent process, so when it completes,
it does a callback on the concurrent process that created it,
and any concurrent processes it has created
are abruptly discarded. So our external messages and events
involve a whole lot of purely internal messages and events.
And the event handler has to know what internal object this
message came from,
which for external messages is the in-regards-to field,
or is implicit in the in-reply-to field.
If you could be receiving events from different kinds of
objects about different matters, well, you have to have
different kinds of handlers. And usually you are only
receiving messages from only one such object, but in
irritatingly many special cases, several such objects.
But it does not make sense to write for the fully general case
when the fully general case is so uncommon, so we handle this
case ad-hoc by a special field, which is defined only for this
message type, not defined as a general quality of all messages.
It typically makes sense to assume we are handling only one kind
of message, possibly of variant type, from one object, and in
the other, special, cases, we address that case ad hoc by additional
message fields.
But if we support std:variant
, there is a whole lot of overlap
between handling things by a new variant, and handling things
by a new callback member.
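To illustrate that overlap, a hypothetical handler that dispatches on a `std::variant` with `std::visit`, doing the work that separate callback members would otherwise do (the message types and handler are my invention):

```cpp
#include <cassert>
#include <string>
#include <type_traits>
#include <variant>

// Two hypothetical message bodies that one state machine might receive.
struct Ping    { int seq; };
struct Payment { long amount; };
using Message = std::variant<Ping, Payment>;

// Dispatching on the variant alternative plays the same role as
// registering a distinct callback member per message type.
std::string handle(const Message& m) {
    return std::visit([](auto&& msg) -> std::string {
        using T = std::decay_t<decltype(msg)>;
        if constexpr (std::is_same_v<T, Ping>)
            return "ping " + std::to_string(msg.seq);
        else
            return "payment " + std::to_string(msg.amount);
    }, m);
}
```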
The recipient must have associated a handler, consisting of a
callback and an opaque pointer to the state of the concurrent process
on the recipient, with the messages referenced by at least one of
these fields. In the event of conflicting values, the in-reply-to takes
precedence, but the callback of the in-reply-to has access to both its
data structure, and the in-regards-to data structure, a pointer to which
is normally in its state. The in-regards-to being the state machine,
and the in-reply-to the event that modifies the
state of the state machine.
When we initialize a connection, we establish a state machine
at both ends, both the application factor of the state machine,
and the socket factor of the state machine.
When I say we are using state machines, this is just the
message handling event oriented architecture familiar in
programming graphical user interfaces.
Such a program consists of a pile of derived classes whose
base classes have all the machinery for handling messages.
Each instance of one of these classes is a state machine,
which contains member functions that are event handlers.
So when I say "state machine", I mean a class for handling
events like the many window classes in wxWidgets.
One big difference will be that we will be posting a lot of events
that we expect to trigger events back to us. And we will want
to know which posting the returning event came from. So we will
want to create some data that is associated with the fired event,
and when a resulting event is fired back to us, we can get the
correct associated data, because we might fire a lot of events,
and they might come back out of order. Gui code has this capability,
but it is rarely used.
## Implementing concurrent state machines in C++
Most of this is me re-inventing Asio, which is part of the
immense collection of packages of Msys2. Obviously I would be
better off integrating Asio than rebuilding it from the ground up,
but I need to figure out what needs to be done, so that I can
find the equivalent Asio functionality.
Or maybe Asio is a bad idea. Boost Asio was horribly broken.
I am seeing lots of cool hot projects using Tokio, not seeing any cool
hot projects use Asio.
If the Bittorrent DHT library did its own
implementation of concurrent communicating processes,
maybe Asio is broken at the core.
And for flow control, I am going to have to integrate Quic,
though I will have to fork it to change its security model
from certificate authorities to Zooko names. You can in theory
easily plug any kind of socket into Asio,
but I see a suspicious lack of people plugging Quic into it,
because Quic contains a huge amount of functionality that Asio
knows nothing of. But if I am forking it, I can probably ignore
or discard most of that functionality.
Gui code is normally single threaded, because it is too much of
a bitch to lock an instance of a message handling class when one of
its member functions is handling a message (the re-entrancy problem).
However the message plumbing has to know which class is going
to handle the message (unless the message is being handled by a
stateless state machine, which it often is) so there is no reason
the message handling machinery could not atomically lock the class
before calling its member function, and free it on return from
its member function.
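A sketch of that idea, with hypothetical names: the dispatch machinery, not the handler, takes the lock, so each state machine stays single threaded internally and the re-entrancy problem never arises inside a handler:

```cpp
#include <cassert>
#include <mutex>

// Hypothetical message handling class (state machine).
struct StateMachine {
    std::mutex lock;          // taken only by the dispatch machinery
    int messages_handled = 0;
    void on_message(int /*msg*/) { ++messages_handled; }
};

// The dispatch machinery atomically locks the state machine around the
// call, and frees it on return from the member function.
void dispatch(StateMachine& sm, int msg) {
    std::scoped_lock guard(sm.lock);
    sm.on_message(msg);
}
```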
State machines (message handling classes, as for example
in a gui) are allocated in the heap and owned by the message
handling machinery. The base constructor of the object plugs it
into the message handling machinery. (Well, wxWidgets uses the
base constructor with virtual classes to plug it in, but likely the
curiously recurring template pattern would be saner
as in ATL and WTL.)
This means they have to be destroyed by sending a message to the message
handling machinery, which eventually results in
the destructor being called. The destructor does not have to worry
about cleanup in all the base classes, because the message handling
machinery is responsible for all that stuff.
Our event despatcher does not call a function pointer,
because our event handlers are member functions.
We call an object of type `std::function`.
We could also use a pointer to member,
which is more efficient.
All this complicated machinery is needed because we assume
our interaction is stateful. But suppose it is not. The request‑reply
pattern, where the request contains all information that
determines the reply is very common, probably the great majority
of cases. This corresponds to an incoming message where the
in‑regards‑to field and in‑reply‑to field are empty,
because the incoming message initiates the conversation,
and its type and content suffice to determine the reply. Or the incoming message
causes the recipient to reply and also set up a state machine,
or a great big pile of state machines (instances of a message handling class),
which will handle the lengthy subsequent conversation,
which when it eventually ends results in those objects being destroyed,
while the connection continues to exist.
In the case of an incoming message of that type, it is despatched to
a fully re-entrant static function on the basis of its type.
The message handling machinery calls a function pointer,
not a class member.
We don't use, should not use, and cannot use, all the
message handling infrastructure that keeps track of state.
## receive a message with no in‑regards‑to field, no in‑reply‑to field
This is directed to a re-entrant function, not a functor,
because re‑entrant and stateless.
It is directed according to message type.
### A message initiating a conversation
It creates a state machine (instance of a message handling class),
sends the start event to the state machine, and the state machine
does whatever it does. The state machine records what message
caused it to be created, and for its first message,
uses it in the in‑reply‑to field, and for subsequent messages,
in its in‑regards‑to field.
### A request-reply message.
The reply to it is sent with the in-reply-to field set.
The recipient of the reply is expected to have a hash-map associating this field
with information as to what to do with the message.
#### A request-reply message where counterparty matters.
Suppose we want to read information about this entity from
the database, and then write that information. Counterparty
information is likely to need to be durable. Then we do
the read-modify-write as a single sql statement,
and let the database serialize it.
## receive a message with no in‑regards‑to field, but with an in‑reply‑to field
The dispatch layer looks up a hash-map table of functors,
by the id of the field and the id of the sender, and despatches the message to
that functor to do whatever it does.
When this is the last message expected in reply, the functor
removes itself from the hash-map and frees itself. If a message
arrives with no entry in the table, it is silently dropped.
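A minimal sketch of such a dispatch table, keyed for simplicity by message id alone (a real key would also include the sender's id, and the functor would decide for itself whether more replies are expected):

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical dispatch table of functors, keyed by message id.
using Handler = std::function<void(const std::string&)>;
std::unordered_map<std::uint64_t, Handler> pending;

// One-shot semantics: the handler is unregistered as it is dispatched;
// replies with no entry in the table are silently dropped.
void dispatch_reply(std::uint64_t in_reply_to, const std::string& payload) {
    auto it = pending.find(in_reply_to);
    if (it == pending.end()) return; // silently dropped
    Handler h = std::move(it->second);
    pending.erase(it);               // functor removes itself from the map
    h(payload);
}
```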
## receive a message with an in‑regards‑to field, with or without an in‑reply‑to field
Just as before, the dispatch table looks up the hash-map of state machines
(instances of message handling classes) and dispatches
the message to the stateful message handler, which figures out what
to do with it according to its internal state. What to do with an
in‑reply‑to field, if there is one, is something the stateful
message handler will have to decide. It might have its own hashmap for
the in‑reply‑to field, but this would result in state management and state
transition of huge complexity. The expected usage is it has a small
number of static fields in its state that reference a limited number
of recently sent messages, and if the incoming message is not one
of them, it treats it as an error. Typically the state machine is
only capable of handling the
response to its most recent message, and merely wants to be sure
that this is a response to its most recent message. But it could
have shot off half a dozen messages with the in‑regards‑to field set,
and want to handle the response to each one differently.
Though likely such a scenario would be easier to handle by creating
half a dozen state machines, each handling its own conversation
separately. On the other hand, if it is only going to be a fixed
and finite set of conversations, it can put all ongoing state in
a fixed and finite set of fields, each of which tracks the most
recently sent message for which a reply is expected.
## A complex conversation.
We want to avoid complex conversations, and stick to the
request‑reply pattern as much as possible. But this is apt to result
in the server putting a pile of opaque data (a cookie) in its reply,
which it expects to have sent back with every request.
And these cookies can get rather large.
Bob decides to initiate a complex conversation with Carol.
He creates an instance of a state machine (instance of a message
handling class) and sends a message with no in‑regards‑to field
and no in‑reply‑to field but when he sends that initial message,
his state machine gets put in, and owned by,
the dispatch table according to the message id.
Carol, on receiving the message, also creates a state machine,
associated with that same message id, albeit its counterparty is
Bob, rather than Carol, which state machine then sends a reply to
that message with the in‑reply‑to field set, and which therefore
Bob's dispatch layer dispatches to the appropriate state machine
(message handler).
And then it is back and forth between the two stateful message handlers
both associated with the same message id until they shut themselves down.
## factoring layers.
A layer is code containing state machines that receive messages
on one machine, and then send messages on to other code on
the same machine. The sockets layer is the code that receives
messages from the application layer, and then sends them on the wire,
and the code that receives messages from the wire,
and sends messages to the application layer.
But a single state machine at the application level could be
handling several connections, and a single connection could have
several state machines running independently, and the
socket code should not need to care.
We have a socket layer that receives messages containing an
opaque block of bytes, and then sends a message to
the application layer message despatch machinery, for whom the
block is not opaque, but rather identifies a message type
meaningful for the despatch layer, but meaningless for the socket layer.
The state machine terminates when its job is done,
freeing up any allocated memory,
but the connection endures for the life of the program,
and most of the data about a connection endures in
an sql database between reboots.
The connection is a long lived state machine running in
the sockets layer, which sends and receives what are to it opaque
blocks of bytes to and from the dispatch layer, and the dispatch
layer interprets these blocks of bytes as having information
(message type, in‑regards‑to field and in‑reply‑to field)
that enables it to despatch the message to a particular method
of a particular instance of a message handling class in C++,
corresponding to a particular channel in Go.
And these message handling classes are apt to be short lived,
being destroyed when their task is complete.
Because we can have many state machines on a connection,
most of our state machines can have very little state,
typically an infinite receive loop, an infinite send receive loop,
or an infinite receive send loop, which have no state at all.
We factorize the state machine into many state machines
to keep each one manageable.
## Comparison with concurrent interacting processes in Go
These concurrent communicating processes are going to
be sending messages to each other on the same machine.
We need to model Go's goroutines.
A goroutine is a function, and functions always terminate --
and in Go are unceremoniously and abruptly ended when their parent
function ends, because they are variables inside its dataspace,
as are their channels.
And, in Go, a channel is typically passed by the parent to its children,
though they can also be passed in a channel.
Obviously this structure is impossible and inapplicable
when processes may live, and usually do live,
in different machines.
The equivalent of Go channel is not a connection. Rather,
one sends a message to the other to request it create a state machine,
which will correspond to the in-regards-to message, and the equivalent of a
Go channel is a message type, the in-regards-to message id,
and the connection id. Which we pack into a single class so that we
can use it the way Go uses channels.
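A sketch of that single class, with hypothetical field names: the triple that uniquely identifies one conversation over one connection, usable as a map key the way Go uses channels:

```cpp
#include <cassert>
#include <cstdint>
#include <tuple>

// Hypothetical equivalent of a Go channel: message type, the id of the
// message that opened the conversation, and the connection id.
struct ChannelId {
    std::uint64_t connection_id;
    std::uint64_t in_regards_to;
    std::uint32_t message_type;
    friend bool operator==(const ChannelId& a, const ChannelId& b) {
        return std::tie(a.connection_id, a.in_regards_to, a.message_type)
            == std::tie(b.connection_id, b.in_regards_to, b.message_type);
    }
};
```

With a hash function added, this would key the hash-map that dispatches opaque blocks of bytes to message handling class instances.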
The sockets layer (or another state machine on the application layer)
calls the callback routine with the message and the state.
The sockets layer treats the application layer as one big state
machine, and the information it sends up to the application
enables the application layer to despatch the event to the
correct factor of that state machine, which we have factored into
as many very small, and preferably stateless, state machines as possible.
We factor the potentially ginormous state machine into
many small state machines (message handling classes),
in the same style as Go factors a potentially ginormous Goroutine into
many small goroutines.
The socket code being a state machine composed of many
small state machines, which communicates with the application layer
over a very small number of channels,
these channels containing blocks of bytes that are
opaque to the socket code,
but are serialized and deserialized by the application layer code.
From the point of view of the application layer code,
it is many state machines,
and the socket layer is one big state machine.
From the point of view of the socket code, it is many state machines,
and the application layer is one big state machine.
The application code, parsing the in-reply-to message id
and the in-regards-to message id, figures out where to send
the opaque block of bytes, and the recipient deserializes it,
and sends it to a routine that acts on an object of that
deserialized class.
Since the sockets layer does not know the internals
of the message struct, the message has to be serialized and deserialized
into the corresponding class by the dispatch layer,
and thence to the application layer.
Go code tends to consist of many concurrent processes
continually being spawned by a master concurrent process,
and themselves spawning more concurrent processes.
# flow control and reliability
If we want to transmit a big pile of data, a big message, well,
this is the hard problem, for the sender has to throttle according
to the recipient's readiness to handle it and the physical connection's capability to transmit it.
Quic is a UDP protocol that provides flow control, and the obvious thing
to handle bulk data transfer is to fork it to use Zooko based keys.
[Tailscale]:https://tailscale.com/blog/how-nat-traversal-works
"How to communicate peer-to-peer through NAT firewalls"{target="_blank"}
[Tailscale] has solved a problem very similar to the one I am trying to solve,
albeit their solutions rely on a central human authority,
which authority they ask for money and they recommend:
> If you’re reaching for TCP because you want a
> stream‑oriented connection when the NAT traversal is done,
> consider using QUIC instead. It builds on top of UDP,
> so we can focus on UDP for NAT traversal and still have a
> nice stream protocol at the end.
But to interface QUIC to a system capable of handling a massive
number of state machines, we are going to need something like Tokio,
because we want the thread to service other state machines while
QUIC is stalling the output or waiting for input. Indeed, no
matter what, if we stall in the socket layer rather than the
application layer, which makes life a whole lot easier for the
application programmer, we are going to need something like Tokio.
Or we could open up Quic, which we have to do anyway
to get it to use our keys rather than enemy controlled keys,
and plug it into our C++ message passing layer.
On the application side, we have to lock each state machine
when it is active. It can only handle one message at a time.
So the despatch layer has to queue up messages and stash them somewhere,
and if it has too many messages stashed,
it wants to push back on the state machine at the application layer
at the other end of the wire. So the despatch layer at the receiving end
has to from time to time tell the despatch layer at the sending end
"I have *n* bytes in regard to message 'Y', and can receive *m* more."
And when the despatch layer at the other end, which unlike the socket
layer knows which state machine is communicating with which,
has more than that amount of data to send, it then blocks
and locks the state machine at its end in its send operation.
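A sketch of the credit arithmetic implied above, with hypothetical names: the receiving despatch layer periodically reports how many more bytes it can take, and the sending side blocks its state machine when the credit runs out:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical credit-based flow control window kept by the sending
// despatch layer for one conversation.
struct SendWindow {
    std::uint64_t credit = 0; // bytes the peer has said it can receive

    // Peer reported it can receive this many more bytes.
    void on_credit_update(std::uint64_t more) { credit += more; }

    // Returns true if the send fits in the remaining credit; otherwise
    // the state machine at this end must block until more credit arrives.
    bool try_send(std::uint64_t bytes) {
        if (bytes > credit) return false;
        credit -= bytes;
        return true;
    }
};
```

The socket layer would run the same style of scheme one level down, pushing back on the despatch layer when the wire itself is congested.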
The socket layer does not know about that and does not worry about that.
What it worries about is packets getting lost on the wire, and caches
piling up in the middle of the wire.
It adds to each message a send time and a receive time,
and if the despatch layer wants to send data faster
than it thinks is suitable, it has to push back on the despatch layer.
Which it does in the same style.
It tells it the connection can handle up to *m* further bytes.
Or we might have two despatch layers, one for sending and one for
receiving, with the send state machine sending events to the receive state
machine, but not vice versa, in which case the socket layer
can block the send layer.
# Tokio
Most of this machinery seems like a re-implementation of Tokio-rust,
which is a huge project. I don't wanna learn Tokio-rust, but equally
I don't want to re-invent the wheel.
Or perhaps we could just integrate QUIC's internal message
passing infrastructure with our message passing infrastructure.
It probably already supports a message passing interface.
Instead of synchronously writing data, you send a message to it
to write some data, and when it is done, it calls a callback.
# Minimal system
Prototype. Limit global bandwidth at the application
state machine level -- they adjust their policy according to how much
data is moving, and they spread the non response outgoing
messages out to a constant rate (constant per counterparty,
and uniformly interleaved.)
Single threaded, hence no state machine locking.
Tweet style limit on the size of messages, hence no fragmentation
and re-assembly issue. Our socket layer becomes trivial - it just
sends blobs like a zeromq socket.
If you are trying to download a sackload of data, you request a counterparty to send a certain amount to you at a given rate, he immediately responds (without regard to global bandwidth limits) with the first instalment, and a promise of further instalments at a certain time.
Each instalment records how much has been sent, and when, when the next instalment is coming, and the schedule for further instalments.
If you miss an instalment, you nack it after a delay. If he receives
a nack, he replaces the promised instalments with the missing ones.
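A sketch of the instalment bookkeeping on the sender's side, with hypothetical names: promised instalments queue up in order, and a nack puts the missing one at the head of the queue:

```cpp
#include <cassert>
#include <cstdint>
#include <deque>

// Hypothetical instalment schedule for a bulk transfer.
struct InstalmentPlan {
    std::deque<std::uint32_t> schedule; // instalment numbers still to send

    // Promise `count` instalments starting from `first`.
    void promise(std::uint32_t first, std::uint32_t count) {
        for (std::uint32_t i = 0; i < count; ++i) schedule.push_back(first + i);
    }
    // On receiving a nack, the missing instalment jumps the queue.
    void on_nack(std::uint32_t missed) { schedule.push_front(missed); }

    // Next instalment number to put on the wire.
    std::uint32_t next() {
        std::uint32_t n = schedule.front();
        schedule.pop_front();
        return n;
    }
};
```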
The first thing we implement is everyone sharing a list of who they have successfully connected to, in recency order, and everyone keeps everyone else's list, which catastrophically fails to scale, and also how up to date their counterparties are with their own list, so that they do not have to
endlessly resend data (unless the counterparty has a catastrophic loss of data, and requests everything from the beginning.)
We assume everyone has an open port, which sucks intolerably, but once that is working we can handle ports behind firewalls, because we are doing UDP. Knowing who the other guy is connected to, and you are not, you can ask him to initiate a peer connection for the two of you, until you have
enough connections that the keep alive works.
And once everyone can connect to everyone else by their public username, then we can implement bitmessage.