diff --git a/.gitattributes b/.gitattributes
index afbb2c0..d1a00e1 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -36,47 +36,32 @@ Makefile text eol=lf encoding=utf-8
 *.vcxproj.filters text eol=crlf encoding=utf-8 whitespace=trailing-space,space-before-tab,tabwidth=4
 *.vcxproj.user text eol=crlf encoding=utf-8 whitespace=trailing-space,space-before-tab,tabwidth=4
-#Don't let git screw with pdf files
-*.pdf -text
-
 # Force binary files to be binary
-*.gif -textn -diff
-*.jpg -textn -diff
-*.jepg -textn -diff
-*.png -textn -diff
-*.webp -textn -diff
-###############################################################################
-# Set default behavior for command prompt diff.
-#
-# This is need for earlier builds of msysgit that does not have it on by
-# default for csharp files.
-# Note: This is only used by command line
-###############################################################################
-#*.cs diff=csharp
+# Archives
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.br filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
-###############################################################################
-# Set the merge driver for project and solution files
-#
-# Merging from the command prompt will add diff markers to the files if there
-# are conflicts (Merging from VS is not affected by the settings below, in VS
-# the diff markers are never inserted). Diff markers may cause the following
-# file extensions to fail to load in VS. An alternative would be to treat
-# these files as binary and thus will always conflict and require user
-# intervention with every merge. To do so, just uncomment the entries below
-###############################################################################
-#*.sln merge=binary
-#*.csproj merge=binary
-#*.vbproj merge=binary
-#*.vcxproj merge=binary
-#*.vcproj merge=binary
-#*.dbproj merge=binary
-#*.fsproj merge=binary
-#*.lsproj merge=binary
-#*.wixproj merge=binary
-#*.modelproj merge=binary
-#*.sqlproj merge=binary
-#*.wwaproj merge=binary
+
+# Documents
+*.pdf filter=lfs diff=lfs merge=lfs -text
+
+# Images
+*.gif filter=lfs diff=lfs merge=lfs -text
+*.ico filter=lfs diff=lfs merge=lfs -text
+*.jpg filter=lfs diff=lfs merge=lfs -text
+*.jpeg filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
+*.psd filter=lfs diff=lfs merge=lfs -text
+*.webp filter=lfs diff=lfs merge=lfs -text
+
+# Fonts
+*.woff2 filter=lfs diff=lfs merge=lfs -text
+
+# Other
+*.exe filter=lfs diff=lfs merge=lfs -text

 ###############################################################################
 # diff behavior for common document formats

diff --git a/README.html b/README.html
index f42bdb7..b4c8405 100644
--- a/README.html
+++ b/README.html
@@ -138,41 +138,49 @@ build the program and run unit tests for the first time, launch the Visual Studio X64 native tools command prompt in the cloned directory, then:

winConfigure.bat

Should the libraries change in a subsequent pull you will need

-pull -f --recurse-submodules
+git pull
+rem you get a status message indicating libraries have been updated.
+git pull --force --recurse-submodules
 winConfigure.bat

in order to rebuild the libraries.

The --force is necessary, because winConfigure.bat changes many of the library files, and therefore git will abort the pull.

winConfigure.bat also configures the repository you just created to use .gitconfig in the repository, causing git to implement GPG signed commits – because cryptographic software is under attack from NSA entryists and shills, who seek to introduce backdoors.

This may be inconvenient if you do not have gpg installed and set up.

.gitconfig adds several git aliases:

  1. git lg to display the gpg trust information for the last four commits. For this to be useful you need to import the repository public key public_key.gpg into gpg, and locally sign that key.
  2. git fixws to standardise white space to the project standards
  3. git graph to graph the commit tree with signing status
  4. git alias to display the git aliases.

# To verify that the signature on future pulls is
# unchanged.
gpg --import  public_key.gpg
gpg --lsign 096EAE16FB8D62E75D243199BC4482E49673711C

We ignore the Gpg Web of Trust model and instead use the Zooko identity model.

We use Gpg signatures to verify that remote repository code is coming from an unchanging entity, not for Gpg Web of Trust. Web of Trust is too complicated and too user hostile to be workable or safe.

Never --sign any Gpg key related to this project. --lsign it.

Never check any Gpg key related to this project against a public gpg key repository. It should not be there.

Never use any email address on a gpg key related to this project unless it is only used for project purposes, or a fake email, or the email of an enemy. We don't want Gpg used to link different email addresses as owned by the same entity, and we don't want email addresses used to link people to the project, because those identities would then come under state and quasi state pressure.

To build the documentation in its intended html form from the markdown files, execute the bash script file docs/mkdocs.sh, in an environment where pandoc is available. On Windows, if Git Bash and Pandoc have been installed, you should be able to run this shell file in bash by double clicking on it.

Pre alpha release, which means it does not yet work even well enough for it to be apparent what it would do if it did work.

diff --git a/RELEASE_NOTES.html b/RELEASE_NOTES.html
index b4e91c2..8143673 100644
--- a/RELEASE_NOTES.html
+++ b/RELEASE_NOTES.html
@@ -64,7 +64,7 @@ text-align: left;

Release Notes

To build and run README

-pre alpha documentation (mostly a wish list)
+pre alpha documentation (mostly a wish list) (In order to read these
+on this local system, you must first execute the document build script
+mkdocs.sh, with bash, sed and pandoc)

This software is pre alpha and should not yet be released. It does not work well enough to even show what it would do if it was working.

diff --git a/docs/client_server.md b/docs/client_server.md
deleted file mode 100644
index 324d1e2..0000000
--- a/docs/client_server.md
+++ /dev/null
@@ -1,1152 +0,0 @@
---
title: Client Server Data Representation
...

# related

[Replacing TCP, SSL, DNS, CAs, and TLS](replacing_TCP.html){target="_blank"}

# clients and hosts, masters and slaves

A slave does the same things for a master as a host does for a client.

The difference is how identity is seen by third parties. The slave's identity is granted by the master, and if the master switches slaves, third parties scarcely notice. It is the same identity. The client's identity is granted by the host, and if the client switches hosts, the client gets a new identity, as for example a new email address.

If we use [Pake and Opaque](libraries.html#opaque-password-protocol) for client login, then all other functionality of the server is unchanged, regardless of whether the server is a host or a slave. It is just that in the client case, changing servers is going to change your public key.

Experience with bitcoin is that a division of responsibilities, as between Wasabi wallet and Bitcoin core, is the way to go – the peer to peer networking functions belong in another process, possibly running on another machine, possibly running on the cloud.

You want a peer on the blockchain to be well connected with a well known network address. You want a wallet that contains substantial value to be locked away and seldom on the internet. These are contradictory desires, and contradictory functions. Ideally one would be in a basement and generally turned off, the other in the cloud and always on.

Plus, I have come to the conclusion that C and C++ just suck for networking apps. Probably a good idea to go Rust for the slave or host. The wallet is event oriented, but only has a small number of concurrent tasks. A host or slave is event oriented, but has a potentially very large number of concurrent tasks. Rust has no good gui system, there is no wxWidgets framework for Rust. C++ has no good massive concurrency system, there is no Tokio for C++.

Where do we put the gui for controlling the slave? In the master, of course.

# the select problem

To despatch an `io` event, the standard is `select()`. Which standard sucks when you have a lot of sockets to manage.

The recommended method for servers with massive numbers of clients is overlapped IO, of which Wikipedia says:

> Utilizing overlapped I/O requires passing an `OVERLAPPED` structure to API functions that normally block, including ReadFile(), WriteFile(), and Winsock's WSASend() and WSARecv(). The requested operation is initiated by a function call which returns immediately, and is completed by the OS in the background. The caller may optionally specify a Win32 event handle to be raised when the operation completes. Alternatively, a program may receive notification of an event via an I/O completion port, *which is the preferred method of receiving notification when used in symmetric multiprocessing environments or when handling I/O on a large number of files or sockets*.
> The third and the last method to get the I/O completion notification with overlapped IO is to use ReadFileEx() and WriteFileEx(), which allow the User APC routine to be provided, which will be fired on the same thread on completion (User APC is the thing very similar to UNIX signal, with the main difference being that the signals are using signal numbers from the historically predefined enumeration, while the User APC can be any function declared as "void f(void* context)"). The so-called overlapped API presents some differences depending on the Windows version used.[1]
>
> Asynchronous I/O is particularly useful for sockets and pipes.
>
> Unix and Linux implement the POSIX asynchronous I/O API (AIO)

Which kind of hints that there might be a clean mapping between Windows `OVERLAPPED` and Linux `AIO*`.

Because generating and reading the select() bit arrays takes time proportional to the largest fd that you provided for `select()`, `select()` scales terribly when the number of sockets is high.

Different operating systems have provided different replacement functions for select. These include `WSApoll()`, `epoll()`, `kqueue()`, and `evports()`. All of these give better performance than select(), and all give O(1) performance for adding a socket, removing a socket, and noticing that a socket is ready for IO. (Well, `epoll()` does when used in edge triggered (`EPOLLET`) mode. It has a `poll()` compatibility mode which fails to perform when you have a large number of file descriptors.)

Windows has `WSAPoll()`, which can be a blocking call, but if it blocks indefinitely, the OS will send an alert callback to the paused thread (asynchronous procedure call, APC) when something happens. The callback cannot do another blocking call without crashing, but it can do a nonblocking poll, followed by a nonblocking read or write as appropriate. This is analogous to the Linux `epoll()`, except that `epoll()` becomes ungodly slow, rather than crashing. The practical effect is that "wait forever" becomes "wait until something happens that the APC did not handle, or that the APC deliberately provoked".

Using the APC in Windows gets you behavior somewhat similar in effect to using `epoll()` with `EPOLLET` in Linux. Not using the APC gets you behavior somewhat similar in effect to Linux `poll()` compatibility mode.

Unfortunately, none of the efficient interfaces is a ubiquitous standard. Windows has `WSAPoll()`, Linux has `epoll()`, the BSDs (including Darwin) have `kqueue()`, … and none of these operating systems has any of the others. So if you want to write a portable high-performance asynchronous application, you'll need an abstraction that wraps all of these interfaces, and provides whichever one of them is the most efficient.

The Libevent api wraps the efficient replacements of various unix like operating systems, but unfortunately missing from its list is the efficient windows replacement.

The way to make them all look alike is to make them look like event handlers that have a pool of threads that fish stuff out of a lock free priority queue of events, create more threads capable of handling this kind of event if there is a lot of stuff in the queue and more threads are needed, and release all threads but one that sleeps on the queue if the queue is empty and stays empty.

Trouble is that windows and linux are just different. Both support select, but everyone agrees that select really sucks, and sucks worse the more connections you have.
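
To make the edge triggered pattern concrete, here is a minimal Linux only sketch. The system calls are the real `epoll` interface; the function names and buffer handling are mine, purely illustrative:

```C++
// Minimal sketch of an edge triggered epoll() loop, Linux only.
// Sockets must be non-blocking, and with EPOLLET each socket must be
// drained until EAGAIN, or events will be missed.
#include <sys/epoll.h>
#include <unistd.h>
#include <cerrno>

void add_socket(int ep, int fd) {
    epoll_event ev{};
    ev.events = EPOLLIN | EPOLLET;          // edge triggered
    ev.data.fd = fd;
    epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);  // O(1) to add a socket
}

void event_loop(int ep) {
    epoll_event ready[64];
    for (;;) {
        int n = epoll_wait(ep, ready, 64, -1);  // sleeps until something happens
        for (int i = 0; i < n; ++i) {
            int fd = ready[i].data.fd;
            char buf[4096];
            for (;;) {                          // drain until EAGAIN
                ssize_t r = read(fd, buf, sizeof buf);
                if (r > 0) continue;            // hand the data to a worker here
                if (r == -1 && errno == EAGAIN) break;
                close(fd);                      // EOF or hard error
                break;
            }
        }
    }
}
```

The Windows overlapped IO equivalent is structured completely differently, which is exactly the portability problem described above.
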
A windows gui program with a moderate number of connections should use windows asynchronous sockets, which are designed to deliver events on the main windows gui event loop, giving you the benefits of a separate networking thread without the need for a separate networking thread. Linux does not have asynchronous sockets. Windows servers should use overlapped IO, because they are going to need ten thousand sockets, and they do not have a gui window to deliver events to.

Linux people recommend a small number of threads, reflecting real hardware threads, and one edge triggered `epoll()` per thread, which sounds vastly simpler than what windows does.

I pray that wxWidgets takes care of mapping windows asynchronous sockets to their near equivalent functionality on Linux.

But writing a server/host/slave for Linux is fundamentally different to writing one for windows. Maybe we can isolate the differences by having pure windows sockets, startup and shutdown code, pure Linux sockets, startup and shutdown code, and having the sockets code stuff data to and from lockless priority queues (which revert to locking when a thread needs to sleep or start up). Or maybe we can use wxWidgets. Perhaps worrying about this stuff is premature optimization. But the samples directory has no service examples, which suggests that writing services in wxWidgets is a bad idea. And it is an impossible idea if we are going to write in Rust.

Tokio, however, is a Rust framework for writing services, which runs on both Windows and Linux. Likely Tokio hides the differences, in a way optimal for servers, as wxWidgets hides them in a way optimal for guis.

# the equivalent of RAII in event oriented code

Futures, promises, and cooperative multi tasking: async await, implemented in a Rust library.

This is how a server can have ten thousand tasks dealing with ten thousand clients.

It is implemented in C++20 as co_return, co_await, and co_yield, co_yield being the C++ equivalent of Rust's poll. But C++20 has no standard coroutine libraries, and various people's half baked ideas for a coroutine library don't seem to be in actual use solving real problems just yet, while actual people are using the Rust library to solve real world problems.

I have read reviews by people attempting to use C++20 co-routines, and the verdict is that they are useless and unusable, and that we should use fibres instead. Fibres?

Boost fibres provide multiple stacks on a single thread of execution. But the consensus is that [fibres just massively suck](https://devblogs.microsoft.com/oldnewthing/20191011-00/?p=102989).

But suppose we don't use a stack. We just put everything into a struct and disallow recursion (except that you create a new struct). Then we have the functionality of fibres and coroutines, with code continuations.

Word is that co_return, co_await, and co_yield do stuff that is complicated, difficult to understand, and frequently surprising and not what you want, but with std::future, you can reasonably straightforwardly do massive concurrency, provided you have your own machinery for scheduling tasks. Maybe we do massive concurrency with neither fibres nor coroutines – code continuations or a close equivalent.

> we get not coroutines with C++20; we get a coroutines framework.
> This difference means, if you want to use coroutines in C++20,
> you are on your own. You have to create your coroutines based on
> the C++20 coroutines framework.
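
To make the everything-in-a-struct idea concrete, here is a minimal sketch. All the locals become members, and an explicit step field records where the task resumes; the names and the two step protocol are illustrative, not project code:

```C++
// A stackless "code continuation" task: state lives in a struct, and each
// incoming event advances it one step. No stack, no fibres, no coroutines.
#include <string>
#include <cstdio>

struct task {
    int step = 0;               // explicit "program counter"
    std::string request;        // what would have been stack locals
    std::string response;
    void on_event(const std::string& data) {
        switch (step) {
        case 0:                 // awaiting the request
            request = data;
            step = 1;
            break;
        case 1:                 // awaiting the reply
            response = data;
            step = 2;
            std::printf("done: %s -> %s\n", request.c_str(), response.c_str());
            break;
        }
    }
};

int main() {
    task t;
    t.on_event("GET /");        // resumes at step 0
    t.on_event("200 OK");       // resumes at step 1
}
```

This is, mechanically, what async await compiles down to: the compiler generates such a struct for you, which is why it stays cheap at ten thousand tasks.
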
C++20 coroutines seem to be designed for the case of two tasks each of which sees the other as a subroutine, while the case that actually matters in practice is a thousand tasks holding a relationship with a thousand clients (Javascript's async). It is far from obvious how one might do what Javascript does using C++20 coroutines, while it is absolutely obvious how to do it with Goroutines.

## Massive concurrency in Rust

Well supported, works, widely used.

The way Rust does things is that the input that you are waiting for is itself a future, and that is what drives the cooperative multi tasking engine.

When the event happens, the future gets flagged as fulfilled, so the next time the polling loop is called, co_yield never gets called. And the polling loop in your await should never get called unless the event arrives on the event queue. The Tokio tutorial explains the implementation in full detail.

From the point of view of procedural code, await is a loop that endlessly checks a condition, calls yield if the condition is not fulfilled, and exits the loop when the condition is fulfilled. But you would rather it did not return from yield/poll until the condition is likely to have changed. And you would rather the outermost future paused the thread if nothing has changed, if the event queue is empty.

The right way to implement this is to have the stack as a tree. Not sure if Tokio does that. C++20 definitely does not – but then it does not do anything. It is a pile of matchsticks and glue, and they tell you to build your own boat.

[Tokio tutorial]:https://tokio.rs/tokio/tutorial/async

The [Tokio tutorial] discusses this and tells us how they dealt with it.

> Ideally, we want mini-tokio to only poll futures when the future is
> able to make progress. This happens when a resource that the task
> is blocked on becomes ready to perform the requested operation. If
> the task wants to read data from a TCP socket, then we only want
> to poll the task when the TCP socket has received data. In our case,
> the task is blocked on the given Instant being reached. Ideally,
> mini-tokio would only poll the task once that instant in time has
> passed.
>
> To achieve this, when a resource is polled, and the resource is not
> ready, the resource will send a notification once it transitions into a
> ready state.

The mini Tokio tutorial shows you how to implement your own efficient futures in Rust, and, because at the bottom you are always awaiting an efficient future, all your futures will be efficient. You have, however, all the tools to implement an inefficient future, and if you do, there will be a lot of spinning. So if everyone is inefficiently waiting on a future that is inefficiently waiting on a future that is waiting on a network event or timeout, and the network event and timeout futures are implemented efficiently, you are done.

If you cheerfully implement an inefficient future, which however calls an efficient future, it stops spinning.

> When a future returns Poll::Pending, it must ensure that the wake
> is signalled at some point. Forgetting to do this results in the task
> hanging indefinitely.

Multithreading, as implemented in C++, Rust and Julia, does not scale to huge numbers of concurrent processes the way Go does.

notglowing, a big fan of Rust, tells me,

> No, but like in most other languages, you can solve that with
> asynchronous code for I/O bound operations.
> Which is the kind of situation where you'd consider Go anyways.
>
> With Tokio, I can spawn an obscene number of Tasks doing I/O
> work asynchronously, and only use a few threads.
>
> It works really well, and I have been writing async code using
> Tokio for a project I am working on.
>
> Async/await semantics are the next best thing after Goroutines.
>
> Frameworks like Actix-web leverage Tokio to get web
> server performance superior to any Go framework I know of.
>
> Go's concurrency model might be superior, but Rust's lightweight
> runtime, lack of GC pauses that can cause inconsistent
> performance, and overall low level control give it the edge it needs
> to beat Go in practical scenarios.

I looked up Tokio and Actix, and they look like exactly what the doctor ordered.

So, if you need async, you need Rust. C++ is build your own boat out of matchsticks and glue.

The async await syntax and semantics are, in effect, multithreading on cheap threads that only have cooperative yielding.

So you have four real threads, and ten thousand tasks, the effective equivalent of ten thousand cheap "threads".

I conjecture that the underlying implementation is that the async await keywords turn your stack into a tree, and each time a branch is created and destroyed, it costs a small memory allocation/deallocation.

With real threads, each thread has its own full stack, and stacks can be costly, while with await/async, each task is just a small branch of the tree. Instead of having one top of stack, you have a thousand leaves with one root at start of thread, while having ten thousand full stacks would bring your program to a grinding halt.

It works like an event oriented program, except that the message pumps do not have to wait for events to complete. Tasks that are waiting around, such as the message pump itself, can get started on the next thing, while the messages it dispatched are waiting around.

Where recursing piles more stuff on the stack, async branches the stack, while masses of threads give you masses of stacks, which can quickly bring your computer to a grinding halt.

Resource acquisition, disposition, and release depend on network and timer events.

RAII guarantees that the resource is available to any function that may access the object (resource availability is a class invariant, eliminating redundant runtime tests). It also guarantees that all resources are released when the lifetime of their controlling object ends, in reverse order of acquisition.

In a situation where a multitude of things can go wrong, but seldom do, you would, without RAII, wind up with an exponentially large number of seldom tested code paths for backing out of the situation. RAII means that all the possibilities are automagically taken care of in a consistent way, and you don't have to think about all the possible combinations and permutations.

RAII plus exceptions shuts down a potentially exponential explosion of code paths.

Our analog of the situation that RAII deals with is that we dispatch messages A and B, and create the response handler for B in the response handler for A. But A might fail, and B might get a response before A.

With await async, we await A, then await B, and if B has already arrived, our await for B just goes right ahead, never calling yield, but removing itself from the awake notifications.

Go already has technology and idiom for message handling.
Maybe the solution for this problem is not to re-invent Go technology in C++ using [perfect forwarding] and lambda functions, but to provide a message interface to Go in C.

But there is less language mismatch between Rust and C++ than between Go and C++.

And maybe C++20 has arrived in time; I have not checked the availability of co_await, co_return, and co_yield.

[perfect forwarding]:https://cpptruths.blogspot.com/2012/06/perfect-forwarding-of-parameter-groups.html

On the other hand, Go's substitute for RAII is the defer statement, which presupposes that a resource is going to be released at the same stack level as it was acquired, whereas when I use RAII I seldom know, and it is often impossible to predict, at what stack level a resource should be released, because the resource is owned by a return value, typically created by a constructor.

On checking out Go's implementation of message handling, it is all things for which C++ provides the primitives, and Go has assembled the primitives into very clean and easy to use facilities. Which facilities are not hard to write in C++.

The clever solution used by Go is typed, potentially bounded [channels], with a channel being able to transmit channels, and the select statement. You can also do all the hairy shared memory things you do in C++, with less control and elegance. But you should not.

[channels]:https://golang.org/doc/effective_go#concurrency "concurrency"

What makes Go multithreading easy is channels and select.

This is an implementation of Communicating Sequential Processes, which holds that input, output, and concurrency are useful and effective primitives, that can elegantly and cleanly express algorithms, even if they are running on a computer that physically can only execute a single thread; that concurrency as expressed by channels is not merely a safe way of multithreading, but a clean way of expressing intent and procedure to the computer.

Goroutines are less than a thread, because they are using some multiplexed thread's stack. They live in an environment where a stable and small pool of threads is despatched to function calls, and when a goroutine is stopped, because it is attempting to communicate, its stack state, which is usually quite small, is stashed somewhere without destroying and creating an entire thread. They need to be lightweight, because they are used to express algorithms, with parallelism not necessarily being an intended side effect.

The relationship between goroutines and node.js continuations is that a continuation is a small packet of state that will receive an event, and a paused goroutine is a small packet of state that will receive an event. Both approaches seem comparably successful in expressing concurrent algorithms, though node.js is single threaded.

Node.js uses async/await, and by and large, more idiots are successfully using node.js than successfully using Go, though Go solutions are far more lightweight than node.js solutions, and in theory Rust should be still lighter.

But Rust solutions should be even lighter weight than Go solutions.

So maybe I do need to invent a C++ idiom for this problem. Well, a Rust library already has the necessary idiom. Use Tokio in Rust. Score for a powerful macro language and sum types. The language is more expandable.
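
Since the claim above is that Go's channel facilities are not hard to write in C++, here is a minimal bounded channel built from the primitives C++ does provide, a mutex and a condition variable. No select, no channels of channels; the class and its names are illustrative, not project code:

```C++
// A minimal Go-style bounded channel in C++.
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>
#include <cstdio>

template <typename T>
class channel {
    std::queue<T> q;
    std::mutex m;
    std::condition_variable not_empty, not_full;
    size_t bound;
    bool closed = false;
public:
    explicit channel(size_t bound) : bound(bound) {}
    void send(T v) {                       // blocks while the channel is full
        std::unique_lock lk(m);
        not_full.wait(lk, [&] { return q.size() < bound; });
        q.push(std::move(v));
        not_empty.notify_one();
    }
    std::optional<T> recv() {              // nullopt means closed and drained
        std::unique_lock lk(m);
        not_empty.wait(lk, [&] { return !q.empty() || closed; });
        if (q.empty()) return std::nullopt;
        T v = std::move(q.front());
        q.pop();
        not_full.notify_one();
        return v;
    }
    void close() {
        std::lock_guard lk(m);
        closed = true;
        not_empty.notify_all();
    }
};

int main() {
    channel<int> ch(8);
    std::thread producer([&] {
        for (int i = 0; i < 3; ++i) ch.send(i);
        ch.close();
    });
    while (auto v = ch.recv()) std::printf("%d\n", *v);
    producer.join();
}
```

What this does not give you is Go's select statement, which is the genuinely hard part to reproduce, and the part that makes the Go idiom compose.
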
# Unit Test

It is hard to unit test a client server system, therefore most people unit test using mocks: fake classes that do not really interact with the external world replace the real classes when you run the part of the unit test that deals with external interaction with clients and servers. Your unit test runs against a dummy client and a dummy server – thus the unit test code necessarily differs from the real code.

But this will not detect bugs in the real class being mocked, which therefore has to be relatively simple and unchanging – not that it is necessarily all that practical to keep it simple and unchanging; consider the messy irregularity of TCP.

Any message is an event, and it is a message between an entity identified by one elliptic point, and an entity identified by another elliptic point.

We intend that a process is identified by many elliptic points, so that it has a stable but separate identity on every server. Which implies that it can send messages to itself, and these will look like messages from outside. The network address will be an opaque object. Which is not going to help us unit test code that has to access real network addresses, though our program undergoing unit test can perform client operations on itself, assuming in process loopback is handled correctly. Or maybe we just have to assume a test network, and our unit test program makes real accesses over the real internet.

But our basic architecture is that we have an opaque object representing a communication node, it has a method that creates a connection, and you can send a message on a connection, and receive a reply event on that connection.

Sending a message on a connection creates the local object that will handle the reply, and this local object's lifetime is managed by hash code tables – or else this local object is stored in the database, written to disk in the event that sends the message, and read from disk in the event that handles the reply to the message.

Object representing server 🢥 Object representing connection to server 🢥 object representing request-response.

We send messages between entities identified by their elliptic points, we get events on the receiving entity when these events arrive, generate replies, and get an event on the sending entity when the reply is received.

And one of the things in these messages will be these entities and information about these entities.

So we create our universal class, which may be mocked, whereby a client takes an opaque data structure representing a server, and makes a request, thereby creating a channel, on which channel it can create additional requests. It can then receive a reply on this channel, and make further requests, or replies to replies, sequentially on this channel.

We then layer this class on top of this class – as for example setting up a shared secret, timing out channels and establishing new ones – so that we have as much real code as possible, implementing request object classes in terms of request object classes, so that we can mock any one layer in the hierarchy.
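
A sketch of the shape of that universal, mockable class: one abstract connection interface, with real and fake implementations interchangeable underneath any layer. The interface and the names are illustrative, not project code:

```C++
// One abstract interface; layers above cannot tell real from fake.
#include <functional>
#include <string>
#include <vector>
#include <cstdio>

struct connection {
    using reply_handler = std::function<void(const std::string&)>;
    virtual void request(const std::string& msg, reply_handler on_reply) = 0;
    virtual ~connection() = default;
};

// Mock: replies immediately from a canned script, no network at all.
struct mock_connection : connection {
    std::vector<std::string> script;
    size_t next = 0;
    void request(const std::string&, reply_handler on_reply) override {
        on_reply(next < script.size() ? script[next++] : "timeout");
    }
};

// A higher layer written against the interface runs unchanged over
// a real wire or over the mock.
void ping(connection& c) {
    c.request("ping", [](const std::string& r) {
        std::printf("got %s\n", r.c_str());
    });
}

int main() {
    mock_connection m;
    m.script = {"pong"};
    ping(m);    // the real ping logic, exercised over a fake wire
}
```
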
At the top layer, we don't know we are re-using channels, and don't know we are re-using secrets – we don't even keep track of the transient secret scalar and transient shared secret point, because those might be discarded and reconstructed. All this stuff lives in an opaque object representing the current state of our communication with the server, which is, at the topmost level, identified by a database record, and/or an object instantiated from a database record, and/or a handle to that object, and/or a hash code to that handle.

Since we are using an opaque object of an opaque type, we can freely mix fake objects with real ones. Unit test will result in fake communications over fake channels with fake external clients and servers.

# Factorizing the problem

Why am I reinventing OMEMO, XMPP, and OTR?

These projects are quite large, and have a vast pile of existing code.

On the other hand, OTR seems an unreasonably complicated way of adding on what you get for free with perfect forward secrecy: authentication without signing is just the natural default for perfect forward secrecy, and signing has to be added on top. You get OTR (Off the Record) for free just by leaving stuff out. XMPP is a presence protocol, which is just name service, which is integral to any social networking system. Its pile of existing code supports Jitsi's wonderful video conferencing system, which would be intolerably painful to reinvent.

And OMEMO just does not do the job. It guarantees you have a private room with the people you think you have a private room with, but how did you come to decide you wanted a private room with those people and not others? It leaves the hard part of the problem out of scope.

The problem is factoring a heap of problems that lack obvious boundaries between one problem and the next. You need to find the smallest factors that are factors of all these big problems – find a solution to your problems that is a suitable module of a solution to all these big problems.

But you don't want to factorize all the way down, otherwise when you want a banana, you will get a banana tree, a monkey, and a jungle. You want the largest factors that are common factors of more than one problem that you have to solve.

And a factor that we identify is that we create a shared secret with a lifetime of around twenty minutes or so, longer than the lifetime of the TCP connections and longer than the query-response interactions, that ensures:

* Encryption (eavesdroppers learn almost nothing)
* Authentication (the stable identity of the two wallets, no man in the middle attack)
* Deniability (the wallets have proof of authentication, but no proof that they can show to someone else)
* Perfect forward secrecy (if the wallet secrets get exposed, their communications remain secret)

Another factor we identify is binding a group of remote object method calls together into a single one that must fail together, of which problem a reliability layer on top of UDP is a special case. But we do not want to implement our own UDP reliability layer, when [QUIC] has already been developed and widely deployed. We notice that to handle this case, we need not an event object referenced by an event handle and an event hashcode, but rather an array of event objects referenced by an event handle, an event hashcode, and the sequence number within that vector.

## streams, secrets, messages, and authentication

To leak minimal metadata, we should encrypt the packets with XChaCha20-SIV, or use a random nonce. (Random nonces are conveniently the default case with libsodium's `crypto_box_easy`.)
The port should be random for any one server and any one client, to make it slightly more difficult to sweep up all packets using our encryption. Any time we distribute new IP information for a server, also distribute new open port information.

XChaCha20-SIV is deterministic encryption, and deterministic encryption will leak information unless every message sent with a given key is guaranteed to be unique – in effect, we have the nonce inside the encryption instead of outside. Each packet must contain a forever incrementing packet number, which gets repeated, but with a different send time, and perhaps an incremented resend count, on reliable messaging resends. This gets potentially complicated, hard to maintain, and easy to break.

Neither protocol includes authentication. The crypto_box wraps the authentication with the encryption. You need to add the authenticator after encryption and before decryption, as crypto_box does. The principle of cryptographic doom is that if you don't, someone will find some clever way of using the error messages your higher level protocol generates to turn it into a decryption/encryption oracle.

Irritatingly, `crypto_box_easy_*` defaults to XSalsa, and I prefer XChaCha.

However, `crypto_box_curve25519xchacha20poly1305.*easy.*` in `crypto_box_curve25519xchacha20poly1305.h` wraps it all together. You just have to call those instead of `crypto_box_easy.*`, which is likely to be a whole lot easier and safer than wrapping XChaCha20-SIV.

For each `crypto_box` function, there is a corresponding `crypto_box_curve25519xchacha20poly1305` function, apart from some special cases that you probably should not be using anyway.

So, redefine each crypto_box function to use XChaCha20:

```C++
namespace crypto_box{
    const auto& «whatever» = crypto_box_curve25519xchacha20poly1305_«whatever»;
}
```
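
For concreteness, this is what the XChaCha20 variants look like in use. The functions and constants are real libsodium names; the little program around them is only a sketch:

```C++
// Authenticated encryption with the crypto_box XChaCha20 variants.
#include <sodium.h>
#include <cstdio>
#include <vector>

int main() {
    if (sodium_init() < 0) return 1;
    unsigned char apk[crypto_box_curve25519xchacha20poly1305_PUBLICKEYBYTES];
    unsigned char ask[crypto_box_curve25519xchacha20poly1305_SECRETKEYBYTES];
    unsigned char bpk[crypto_box_curve25519xchacha20poly1305_PUBLICKEYBYTES];
    unsigned char bsk[crypto_box_curve25519xchacha20poly1305_SECRETKEYBYTES];
    crypto_box_curve25519xchacha20poly1305_keypair(apk, ask);   // Alice
    crypto_box_curve25519xchacha20poly1305_keypair(bpk, bsk);   // Bob

    const unsigned char msg[] = "hello";
    unsigned char nonce[crypto_box_curve25519xchacha20poly1305_NONCEBYTES];
    randombytes_buf(nonce, sizeof nonce);          // random nonce, as above

    std::vector<unsigned char> boxed(
        sizeof msg + crypto_box_curve25519xchacha20poly1305_MACBYTES);
    // Alice encrypts and authenticates in one call, to Bob's public key.
    crypto_box_curve25519xchacha20poly1305_easy(
        boxed.data(), msg, sizeof msg, nonce, bpk, ask);

    std::vector<unsigned char> opened(sizeof msg);
    // Bob verifies the authenticator before any plaintext is released; a
    // forged or tampered box fails here, which is what the cryptographic
    // doom principle demands.
    if (crypto_box_curve25519xchacha20poly1305_open_easy(
            opened.data(), boxed.data(), boxed.size(), nonce, apk, bsk) != 0)
        return 1;
    std::printf("%s\n", (const char*)opened.data());
}
```
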
Nonces are intended to be communicated in the clear, thus sequential nonces inevitably leak metadata. Don't use sequential nonces. Put the packet number and message number or numbers inside the authenticated encryption.

Each packet of a packetized message will contain the windowed message id of the larger message of which it is part, the id of the thread or thread pool that will ultimately consume it, the size of the larger message of which it is part, the number of packets in the larger message of which it is part, and its packet and byte position within that larger message. The repetition is required to handle out of order messages and messages with lost packets.

For a single packet message, or a multi message packet, each message similarly.

Message ids will be windowed sequential, and messages lost in entirety will be reliably resent, because their packets will be reliably resent.

If we allocate each packet buffer from the heap, and free it when it is used, this does not make much of a dent in performance until we are processing well over a GiB/s.

So we can worry about efficient allocation after we have released software and it is coming under heavy load.

Another, more efficient, way would be to have a pool of 16KiB blocks, allocate one of them to a connection whenever that connection needs it, allocate packet buffers sequentially in a 16KiB block, incrementing a count, and free up packet buffers in the block when a packet is dealt with, decrementing the count. When the count returns to zero, the block goes back to the free pool, which is accessed in lifo order. Every few seconds the pool is checked, and if there are a number of buffers that have not been used in the last few minutes, we free them. We organize things so that an inactive connection has no packet buffers associated with it. But this is fine tuning and premature optimization.

The recipient will nack the sender about any missing packets within a multipacket message. The sender will not free up any memory containing packets that have not been acked, and the receiver will not free up any memory that has not been handled by the thread that ultimately receives the data.

Experimenting with memory allocation and deallocation times, it looks like a sweet spot is to allocate in 16KiB blocks, with the initial fifo queue being allocated with two 16KiB blocks as soon as activity starts, and the entire fifo queue deallocated when it is empty. If we allocated, deallocated when activity stops, and re-allocated every millisecond, it would not matter much, and we will be doing it far less often than that, because we will be keeping the buffer around for at least one round trip time. If every active queue has on average sixty four KiB, and we have sixteen thousand simultaneous active connections, it only costs a gigabyte. This rather arbitrary guesstimated value seems good enough that it does not waste too much memory, nor too much time. Memory for input output streams seems cheap, so we might as well cheerfully spend plenty, perhaps a lot more than necessary, so as to avoid hitting other limits.

We want connections – the shared secrets, identity data, and connection parameters – hanging around for a very long time of inactivity, because they are something like logins. We don't want their empty data stream buffers hanging around. Re-establishing a connection takes hundreds of times longer than allocating and deallocating a buffer.

We also want, in a situation of resource starvation, to make the connections that are the heaviest users wait. They should not send until told space is available, and we just don't make it available; their buffer got emptied out, then thrown away, and they just have to wait their turn till the server clears them to get a new one allocated when they send data.

If the server has too much work, a whole lot of connections get idled for longer and longer periods, and while idled, their buffers are discarded.

When we have a real world application facing real world heavy load, then we can fuss about fine tuning the parameters.

The packet stream that is being resolved (the packets, their time of arrival and sending, whether they were acked or nacked, ack status, and all that) goes into a first in first out random access queue, composed of fixed size blocks larger than the packet size.

We hope that C++ implements large random access fifo queues with mmap. If it does not, we will eventually have to write our own.

Each block starts with metadata that enables the stream of fixed sized blocks to be interpreted as a stream of variable sized packets, plus the metadata about those packets: the block size in bits, and the size of the block and packet metadata. Initially only 4K byte, 32 kilobit blocks will be supported. The format of metadata that is referenced or defined within packets is also negotiated, though initially the only format will be format number one. Obviously each side is free to define its own format for the metadata outside of packets, but it has to be the same size at both ends.
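
A sketch of such a queue, with std::deque of fixed size blocks standing in for the hoped for mmap backed container; the block size and all names are illustrative, not project code:

```C++
// A random access fifo of fixed size blocks: pushed at the tail, released
// a whole block at a time from the head, addressed by absolute stream
// position, which both ends share.
#include <array>
#include <cstddef>
#include <cstdint>
#include <deque>

class block_fifo {
    static constexpr size_t block_size = 4096;          // 4K byte blocks
    std::deque<std::array<std::byte, block_size>> blocks;
    uint64_t head = 0;   // absolute position of the first byte still queued
    uint64_t tail = 0;   // absolute position one past the last byte written
public:
    void push(const std::byte* p, size_t n) {
        for (size_t i = 0; i < n; ++i, ++tail) {
            if (tail / block_size >= head / block_size + blocks.size())
                blocks.emplace_back();                  // grow by whole blocks
            blocks[tail / block_size - head / block_size][tail % block_size] = p[i];
        }
    }
    // Random access by absolute stream position, not by offset within a
    // TCP-style stream window.
    std::byte at(uint64_t pos) const {
        return blocks[pos / block_size - head / block_size][pos % block_size];
    }
    void pop_front_block() {                            // release a resolved block
        blocks.pop_front();
        head = (head / block_size + 1) * block_size;
    }
    uint64_t size() const { return tail - head; }
};

int main() {
    block_fifo q;
    std::byte b[3] = {std::byte{1}, std::byte{2}, std::byte{3}};
    q.push(b, 3);
    return int(q.at(2)) == 3 ? 0 : 1;
}
```
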
Each party can therefore demand any metadata size it wants, subject to some limit, for metadata outside the packets.

The packets are aligned within the blocks so that the 512 bit blocks to be encrypted or decrypted are aligned with the blocks of the queue, so the blocks of the queue are always a multiple of 512 bits, 64 bytes, and block size is given as a multiple of 64 bytes. This will result in an average of thirty two bytes of space wasted positioning each packet to a boundary.

The pseudo random streams of encrypting information are applied at an offset that depends on the absolute position in the queue, which is why the queues have to have packets in identical positions at both ends. Each block header contains unwindowing values for any windowed values in the packets and packet metadata, which unwindowing data is a mere 64 bits, but, since block and packet metadata size gets negotiated on each connection, this can be expanded without breaking backwards compatibility. The format number for packet references to metadata implies an unwindow size, but we initially assume that any connection only sends less than 2^64 512 bit packets, or rather that packets plus the metadata required to describe those packets take up less than 2^73 bits, corresponding to a thousand Mbps.

The packet position in the queue is the same at both ends, and is unwindowed in the block header.

The fundamental architecture of QUIC is that each packet has its own nonce, which is an integer of potentially sixty two bits, expressed in a form that is short for smaller integers, which is essentially my design, so I expect that I can use a whole lot of QUIC code.

It negotiates the AES session once per connection, and thereafter it is sequential nonces all the way.

Make a new one time secret from a new one time public key every time you start a stream (a pair of one way streams). Keeping one time secrets around for multiple streams, although it can in theory be done safely, gets startlingly complicated really fast, with the result that nine times out of ten it gets done unsafely.

Each two way stream is a pair of one way streams. Each encryption packet within a udp packet will have in the clear its stream number and a window into its stream position, the window size being log base two of the position difference between all packets in play, plus two, rounded up to the nearest multiple of seven. Its stream number is an index into the shared secrets and stream states associated with this IP and port number.

If initiating a connection in the clear (and thus unauthenticated), Alice sends Bob (in a packet that is not considered part of a stream) a konce (key used once, a single use elliptic point $A_o$). She follows it, in the same packet and in a new encrypted but unauthenticated stream, proving knowledge of the scalar corresponding to the elliptic point by using the shared secret $a_oB_d = b_dA_o$, where $B_d$ is Bob's durable public key and $b_d$ his durable secret key. In the encrypted but unauthenticated stream, she sends $A_d$, her durable public key (which may only be durable until the application is shut down), initiating a stream encrypted with $(a_o+a_d)B_d = b_d(A_o+A_d)$, or more precisely, symmetrically encrypted with the 384 bit hash of that elliptic point and the one way stream number.

All this stuff happens during the handshake, and when we allocate a receive buffer, we have a shared secret.
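
A sketch of the first step of that handshake, the konce only part. The calls are real libsodium (X25519 scalar multiplication and BLAKE2b hashing); hashing the shared point down to a 384 bit stream key follows the description above, and the second step, mixing in the durable keys as $(a_o+a_d)B_d$, needs point addition and is left out:

```C++
// Alice proves knowledge of the scalar a_o behind her single use point
// A_o: only she and Bob can compute a_o*B_d = b_d*A_o.
#include <sodium.h>
#include <cstring>
#include <cstdio>

int main() {
    if (sodium_init() < 0) return 1;

    unsigned char b_d[crypto_scalarmult_SCALARBYTES];  // Bob's durable secret
    unsigned char B_d[crypto_scalarmult_BYTES];        // Bob's durable public key
    randombytes_buf(b_d, sizeof b_d);
    crypto_scalarmult_base(B_d, b_d);

    unsigned char a_o[crypto_scalarmult_SCALARBYTES];  // Alice's konce scalar
    unsigned char A_o[crypto_scalarmult_BYTES];        // konce, sent in the clear
    randombytes_buf(a_o, sizeof a_o);
    crypto_scalarmult_base(A_o, a_o);

    unsigned char alice_point[crypto_scalarmult_BYTES];
    unsigned char bob_point[crypto_scalarmult_BYTES];
    if (crypto_scalarmult(alice_point, a_o, B_d) != 0) return 1;  // a_o * B_d
    if (crypto_scalarmult(bob_point, b_d, A_o) != 0) return 1;    // b_d * A_o

    // Both sides hash the shared point down to a 384 bit stream key.
    unsigned char alice_key[48], bob_key[48];
    crypto_generichash(alice_key, sizeof alice_key,
                       alice_point, sizeof alice_point, nullptr, 0);
    crypto_generichash(bob_key, sizeof bob_key,
                       bob_point, sizeof bob_point, nullptr, 0);
    std::printf("%s\n", std::memcmp(alice_key, bob_key, 48) == 0 ? "shared" : "bug");
}
```
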
The sender may only send up to the size of the receive buffer, and has to wait for acks which will announce more receive buffer.

There is no immediate reason to provide the capability to create a new, differently authenticated stream from within an authenticated stream, for the use cases for that are better dealt with by sending authorizations for the new authentication, signed by the other party, under the existing authentication. Hence a one to one mapping between port number and durable authenticating elliptic point, with each authenticated stream within that port number deriving its shared secret from a konce, covers all the use cases that occur to me. We don't care about making creating a login relationship efficient.

When the OS gives you a packet, it gives you the handle you associated with that network address and port number, and the protocol layer of the application then has to expand that into the receive stream number and packet position in the stream. After decrypting the streams within a packet, it then maps stream id and message id to the application layer message handler id. It passes the position of data within the message, but not the position within the stream, because you don't want too many copies of the shared secret floating around, and because the application does not care.

Message data may arrive out of sequence within a message, but the protocol layer always sends the data in sequence to the application, and usually the application only wants complete messages, and does not register a partial message handler anyway.

Each application runs its own instance of the protocol layer, and each application is, as far as it knows or cares, sending messages identified by their receiver message handler and reply message handler to a party identified by its zooko id. A message always receives a reply, even if the reply is only "message acknowledged", "message abandoned", "message not acknowledged", "timeout", "graceful shutdown of connection", or "ungraceful shutdown of connection". The protocol layer maps these onto encrypted sequential streams and onto message numbers within the stream when sending them out, and onto application ids, application message handlers and receiving zooko ids when receiving them.

But, if a message always receives a reply, the sender may want to know which message is being replied to. Which implies it always receives a handle to the sent message when it gets the reply. Which implies that the protocol layer has to provide unique reply ids for all messages in play where a substantive reply is expected from the recipient. ("Message received" does not need a reply id, because that is implicit in the reliable transport layer, but special casing such messages to save a few bytes per message adds substantially to complexity. Easier to have the recipient ack all packets and all messages every round trip time, even though acking messages is redundant, and identifying every message is redundant.)

This implies that the protocol layer gives every message a unique sixty four bit windowed id, with the window size sufficient to cover all messages in play, all messages that have neither been acked nor abandoned.
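
Unwindowing such an id is the same trick QUIC uses for packet numbers: the wire carries only the low bits, and the receiver picks the full value closest to the largest id seen so far. A sketch, with illustrative names:

```C++
// Decode a windowed id from its low `window_bits` bits.
#include <cstdint>
#include <cstdio>

uint64_t unwindow(uint64_t truncated, int window_bits, uint64_t largest_seen) {
    const uint64_t win  = uint64_t{1} << window_bits;
    const uint64_t mask = win - 1;
    uint64_t expected  = largest_seen + 1;
    // candidate agrees with `truncated` in its low bits and is near `expected`
    uint64_t candidate = (expected & ~mask) | truncated;
    if (candidate + win / 2 <= expected)
        candidate += win;                  // too far below: window wrapped forward
    else if (candidate >= expected + win / 2 && candidate >= win)
        candidate -= win;                  // too far above: window wrapped back
    return candidate;
}

int main() {
    // 16 bit window, largest id seen 0x1FFFE: wire value 3 decodes to
    // 0x20003, not 0x10003.
    std::printf("%llx\n", (unsigned long long)unwindow(3, 16, 0x1FFFE));
}
```
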
Suppose we are transferring one dozen eight terabyte disks in tiny fifty byte messages, and suppose that all these messages are in play, which seems unlikely unless we are communicating with someone on Pluto. Well, then we would run out of storage for tracking every message in play; but suppose we did not. Then forty bits would suffice, so a sixty four bit message id suffices. And, since it is windowed, using the same windowing as we are using for stream packet ids, we can always increase it without changing the protocol on the wire when we get around to sending messages between galaxies.

A windowed value represents an indefinitely large unsigned integer, but since we are usually interested in tracking the difference between two such values, we define subtraction and comparison on windowed values to give us ordinary signed integers, of the largest precision integer that we can conveniently represent on our machine. Which will always suffice, for by the time we get around to enormous tasks, we will have enormous machines.

Because each application runs its own protocol layer, it is simpler, though not essential, for each application to have its own port number on its network address, and thus its own streams on that port number. All protocol layers use a single operating system udp layer. All messages coming from a single application in a single session are authenticated with at least that session and application, or with an id durable between sessions of the application, or with an id durable between the user using different applications on the same machine, or with an id durable to the user and used on different machines in different applications, though the latter requires a fair bit of potentially hostile user interface.

If the application wants to use multiple identities during a session, it initiates a new connection on a new port number in the clear. One session, one port number, at most one identity. Multiple port numbers, however, do not need, nor routinely have, multiple identities for the same run of the application.

[QUIC]: https://github.com/private-octopus/picoquic

If we implement a [QUIC] large object layer (and we really should not do this until we have working code out there that runs without it), it will consist of reliable request responses on top of groups of unreliable request responses, in which case the unreliable request responses will have a group request object that maps from their UDP origin and port numbers, and a sequence number within that group request object that maps to an item in an array in the group request operator.

### speed

The fastest authenticated encryption algorithm is OCB – and on high end hardware, AES256OCB.

AES256OCB, despite having a block cipher underneath, has properties that make it possible for it to have the same API as xchacha20poly1305. (It encrypts and authenticates arbitrary length, rather than block sized, messages.)

[OCB patents were abandoned in February 2021](https://www.metzdowd.com/pipermail/cryptography/2021-February/036762.html)

One of these days I will produce a fork of libsodium that supports `crypto_box_ristretto25519aes256ocb.*easy.*`, but that is hardly urgent. Just make sure the protocol negotiation allows new ciphers to be dropped in.

# Getting something up and running

I need to get a minimal system up that operates a database, does encryption, has a gui, does unit test, and synchronizes data with other systems.

So we will start with a self licking icecream:

We aim for a system that has a per user database identifying public keys related to user controlled secrets, and a local machine database relating public keys to IP numbers and port numbers.
A port and IP address identifies a process, and a process may know the underlying secrets of many public keys.

The gui, the user interface, will allow you to enter a secret so that it is hot and online, and optionally allow you to make a subordinate wallet, a ready wallet.

The system will be able to handle encryption, authentication, signatures, and perfect forward secrecy.

The system will be able to merge and floodfill the data relating public keys to IP addresses.

We will not at first implement capabilities equivalent to ready wallets, subordinate wallets, and Domain Name Service. We will add those in once we have flood fill working.

Floodfill will be implemented on top of a Merkle-patricia tree, implemented, perhaps with grotesque inefficiency, by having nodes in the database where the key of each node consists of the bit length of the node's address as the primary sort key, then the address; and the record, the content of the node identified by that key, is its hash, the type, the addresses of the two children, and the hashes of the two children. The hash of a tree node is the hash of the hashes of its two children, ignoring its address. (The hashes of the leaf nodes take account of the leaf node's address, but the hashes of the tree nodes do not.)

Initially we will get this working without network communication, merely with copy paste communication.

An event always consists of a bitstream, starting with a schema identifier. The schema identifier might be followed by a shared secret identifier, which identifies the source and destination key, or followed by direct identification of the source and destination key, plus stuff to set up a shared secret.

# Terminology

Handle:
: A handle is a short opaque identifier that corresponds to a quite small positive integer that points to an object in an `std::vector` containing a sequence of identically sized objects. A handle is reused almost immediately. When a handle is released, the storage it references goes into a `std::priority_queue` for reuse, with handles at the start of the vector being reused first. If the priority queue is empty, the vector grows to provide space for another handle. The vector never shrinks, though unused space at the end will eventually get paged out. A handle is a member of a class with a static member that points to that vector, and it has a member that provides a reference to an object in the vector. It is faster than fully dynamic allocation and deallocation, but still substantially slower than static or stack allocation. It provides the advantages of a shared pointer, with far lower cost. Copying or destroying handles has no consequences, they are just integers, but releasing a handle still has the problem that there may be other copies of it hanging around, referencing the same underlying storage. Handles are nonowning – they inherit from unsigned integers, they are just unsigned integers plus some additional methods, plus the static members `std::vector<T> table;` and `std::priority_queue<handle> unused_handles;`.
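
A sketch of that handle scheme, with the element type and the names illustrative:

```C++
// Non-owning handle: a small integer into a static vector, with released
// slots recycled lowest-first through a priority queue.
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

template <typename T>
struct handle {
    uint32_t i;
    static inline std::vector<T> table;
    static inline std::priority_queue<uint32_t, std::vector<uint32_t>,
                                      std::greater<uint32_t>> unused_handles;
    static handle alloc() {
        if (!unused_handles.empty()) {
            uint32_t slot = unused_handles.top();   // lowest slot reused first
            unused_handles.pop();
            return {slot};
        }
        table.emplace_back();                       // the vector never shrinks
        return {uint32_t(table.size() - 1)};
    }
    T& operator*() const { return table[i]; }
    void release() { unused_handles.push(i); }      // stale copies still dangle
};

int main() {
    auto h = handle<int>::alloc();
    *h = 42;
    h.release();                        // slot goes back on the queue
    auto g = handle<int>::alloc();      // reused almost immediately
    return *g == 42 ? 0 : 1;
}
```
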
Hashcode:
: A rather larger identifier that references an `std::unordered_map`, which maps the hashcode to underlying storage, usually through a handle, though it might map to an `std::unique_ptr`. Hashcodes are sparse, unlike handles, and are reused infrequently, or never, so if your only reference to the underlying storage is through a hashcode, you will not get unintended re-use of the underlying storage, and if you do reference after release, you get an error – the hashcode will complain that it no longer maps to a handle. Hashcodes are owning, and the hashmap has the semantics of `unique_ptr` or `shared_ptr`. When an event is fired, it supplies a hashcode that will be associated with the result of that fire, and constructs the object that hashcode will reference. When the response happens, the object referenced by the hashcode is found, and the command corresponding to the event type is executed. In a procedural program, the stack is the root data structure driving RAII, but in an event oriented program, the stack gets unwound between events, so for data structures that persist between events, but do not persist for the life of the program, we need some other data structure driving RAII, and that data structure is the database and the hashtables – the hashtables being the database for stuff that is ephemeral, where we don't want the overheads of actually doing data to disk operations.

Hot Wallet:
: The wallet secret is in memory, or the secret from which it is derived and the chain of links by which that secret is derived is in memory.

Cold Wallet, paper wallet:
: The wallet secret is not in memory nor on non volatile storage in a computer connected to the internet. High value that is intended to be kept for a long time should be controlled by a cold wallet.

Online Wallet:
: Hot and online. Should usually be a subordinate wallet for your cold wallet. Your online subordinate wallet will commonly receive value for your cold wallet, and will only itself control funds of moderate value. An online wallet should only be online on one machine in one place at any one time, but many online wallets can speak on behalf of one master wallet, possibly a cold wallet, and receive value for that wallet.

Ready Wallet:
: Hot and online, and when you start up, you don't have to perform the difficult task of entering the secret, because when it is not running, the secret is on disk. The wallet secret remains in non volatile storage when you switch off the computer, and therefore is potentially vulnerable to theft. It is automatically loaded into memory as the wallet, the identity, with which you communicate.

Subordinate Wallet:
: Can generate public keys to receive value on behalf of another wallet, but cannot generate the corresponding secret keys, while that other wallet, perhaps currently offline, perhaps currently existing only in the form of a cold wallet, can generate the secret keys. Usually has an authorization lasting three months to speak in that other wallet's name, or until that other wallet issues a new authorization. A wallet can receive value for any other wallet that has given it a secret and authorization, but can only spend value for itself.

# The problem

Getting a client and a server to communicate is apt to be surprisingly complicated. This is because the basic network architecture for passing data around does not correspond to actual usage.

TCP-IP assumes a small computer with little or no non volatile storage, and infinite streams, but actual usage is request-response, with the requests and responses going into non volatile storage.
-
-When a bitcoin wallet is synchronizing with fourteen other bitcoin wallets, there are a whole lot of requests and replies floating around all at the same time. We need a model based on events and message objects, rather than continuous streams of data.
-
-IP addresses and port numbers act as handles and hashcodes to get data from one process on one computer to another process on another computer, but within the process, in user address space, we need a representation that throws away the IP address, the port number, and the positional information and sequence within the TCP-IP streams, replacing them with information that models the process in ways that are more in line with actual usage.
-
-# Message objects and events
-
-Any time we fire an event, send a request, we create a local data structure identified by a handle, by the 256 bit hashcode of the request, and by the pair of entities communicating. The response to the event references either the hashcode, or the handle, or both. Because handles are local, transient, live only in ram, and are not POD, handles never form part of the hash describing the message object, even though the reply to a request will contain the handle.
-
-We don’t store a conversation as between me and the other guy. Rather, we store a conversation as between Ann and Bob, with the parties in lexicographic order. When Ann sees the records on her computer, she knows she is Ann; when Bob sees the conversation on his computer, he knows he is Bob; and when Carol sees the records, because they have been made public as part of a review, she knows that Ann is reviewing Bob. But the records have the same form, and lead to the same Merkle root, on everyone’s computer.
-
-Associated with each pair of communicating entities is a durable secret elliptic point, formed from the wallet secrets of the parties communicating, and a transient and frequently changing secret elliptic point. These secrets never leave ram, and are erased from ram as soon as they cease to be needed. A hash formed from the durable secret elliptic point is associated with each record, and that hash goes into non volatile storage, where it is unlikely to remain very secret for very long, and is associated with the public keys, in lexicographic order, of the wallets communicating. The encryption secret formed from the transient point hides the public key associated with the durable point from eavesdroppers, but the public key that is used to generate the secret point goes into non volatile storage, where it is unlikely to remain very secret for very long.
-
-This ensures that the guy handing out information gets information about who is interested in his information. It is a privacy leak, but we observe that sites that hand out free information on the internet go to great lengths to get this information, and if the protocol does not provide it, will engage in hacks to get it, such as Google Analytics, which hacks lead to massive privacy violation, and the accumulation of intrusive spying data in vast centralized databases. Most internet sites use Google Analytics, which downloads an enormous pile of JavaScript onto your browser, which systematically probes your system for one thousand and one privacy holes and weaknesses and reports back to Google Analytics, which then shares some of its spy data with the site that surreptitiously downloaded its enormous pile of hostile spy attack code onto your computer.
-
-***[Block Google Analytics](./block_google_analytics.html)***
-
-We can preserve some privacy on a client by the wallet initiating the connection deterministically generating a different derived wallet for each host that it wants to initiate connection with, but if we want push, if we want peers that can be contacted by other peers, we have to use the same wallet for all of them.
-
-A peer, or logged in, connection uses one wallet for all peers. A client connection without login uses an unchanging, deterministically generated, probabilistically unique wallet for each server. If the client has ever logged in, the peer records the association between the deterministically generated wallet and the wallet used for peer or logged in connections, so that if the client has ever logged in, that widely used wallet remains logged in forever – albeit the client can throw away that wallet, which is derived from his master secret, and use a new wallet with a different derivation from his master secret.
-
-The owner of a wallet has, in non volatile storage, the chain by which each wallet is derived from his master secret, and can regenerate all secrets from any link in that chain. His master secret may well be offline, on paper, while some of the secrets corresponding to links in that chain are in non volatile storage, and therefore not very secure. If he wants to store a large amount of value, or final control of valuable names, he has them controlled by the secret of a cold wallet.
-
-When an encrypted message object enters user memory, it is associated with a handle to a shared transient volatile secret, and with its decryption position in the decryption stream, and thus with a pair of communicating entities. How this association is made depends on the details of the network connection, on the messy complexities of IP and of TCP-IP position in the data stream, but once the association is made, we ignore that mess, and treat all encrypted message objects alike, regardless of how they arrived.
-
-Within a single TCP-IP connection, we have a message that says “subsequent encrypted message objects will be associated with this shared secret and thus this pair of communicating entities, with the encryption stream starting at the following multiple of 4096 bytes, and subsequent encryption stream positions for subsequent records are assumed to start at the next block of a power of two bytes where the block is large enough to contain the entire record.” On receiving records following that message, we associate them with the shared secret and the encryption stream position, and pay no further attention to IP numbers and position within the stream. Once the association has been made, we don’t worry which TCP stream or UDP port number the record came in on, or its position within the stream. We identify the communicating entities involved by their public keys, not their IP addresses. When we decrypt the message, if it is a response to a request, it has the handle and/or the hash of the request.
-
-A large record object could take quite a long time to download. So when the first part arrives, we decrypt the first part to find the event handler, and call the progress event of the handler, which may do nothing, every time data arrives. This may cause the timeout on the handler to be reset.
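-
-A minimal sketch of the block sizing rule described above – each record starts at the next multiple of the smallest power of two block that can contain it, with a 4096 byte minimum (names illustrative):
-
-    #include <cstdint>
-
-    // Smallest power of two block >= n, never smaller than 4096 bytes.
-    uint64_t block_size(uint64_t n) {
-        uint64_t b = 4096;
-        while (b < n) b <<= 1;
-        return b;
-    }
-
-    // Encryption stream position at which the next record starts:
-    // round the current end of stream up to a multiple of its block.
-    uint64_t next_position(uint64_t stream_end, uint64_t record_size) {
-        uint64_t b = block_size(record_size);
-        return (stream_end + b - 1) & ~(b - 1);
-    }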
-
-If we are sending a message object after long delay, we construct a new shared secret, so the response to a request may come over a new TCP connection, different from the one on which it was sent, with a new shared secret and a new position in the decryption stream, unrelated to the shared secret, the decryption stream position, and the IP stream under which the request was sent. Our message object identity is unrelated to the underlying internet protocol transport. Its destination is a wallet, and its ID in the process of the wallet is its hashtag.
-
-# Handles
-
-I have above suggested various ad hoc measures for preventing references to reused handles, but a more robust and generic solution is hash codes. You generate fresh hash codes cyclically, checking each fresh hash code to see if it is already in use, so that each communication referencing a new event handle or new shared secret also references a new hash code. The old hash code is de-allocated when the handle is re-used, so a new hashcode will reference the new entity pointed to by the handle, and the old hashcode will fail immediately and explicitly.
-
-Make all hashcodes thirty two bits. That will suffice, and if scaling bites, we are going to have to go to multiple host processes anyway. Our planned protocol already allows you to be redirected to an arbitrary host wallet speaking on behalf of a master wallet that may well be in cold storage. When we have enormous peers on the internet hosting hundreds of millions of clients, they are going to have to run tens of thousands of processes. Our hashtags only have meaning within a single process, and our wallet identifier address space is enormous. Further, a single process can have multiple wallets associated with it, and we could differentiate hashes by their target wallet.
-
-Every message object has a destination wallet, which is an online wallet, which should only be online in one host process on one machine, and an encrypted destination event hashcode. The fully general form of a message object has a source public key, a hashcode indicating a shared secret plus a decryption offset, or is prefixed by data to generate a shared secret and decryption offset, and, if a response to a previous event, an event hashcode that has meaning on the destination wallet. However, on the wire, when the object is travelling by IP protocol, some of these values are redundant, because defaults will have already been created associated with the IP connection. On the disk and inside the host process, it is kept in the clear, so does not have the associated encryption data. At the user process level, and in the database, we are not talking to IP addresses, but to wallets. The connection between a wallet and an IP address is only dealt with when we are telling the operating system to put message objects on the wire, or when they are being delivered to a user process by the operating system from the wire. On the wire, having found the destination IP and port of the target wallet, the public key of the target wallet is not in the clear, and may be implicit in the port (DRY).
-
-Any shared secret is associated with two hash codes, one being its value on the other machine, and with two public keys. But under the DRY principle, we don’t keep redundant data around, so the redundant data is virtual or implicit.
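-
-A minimal sketch of the cyclic hashcode allocation described at the start of this section, assuming a thirty two bit space and an `std::unordered_map` from hashcode to handle (names illustrative):
-
-    #include <cstdint>
-    #include <unordered_map>
-
-    std::unordered_map<uint32_t, uint32_t> live;  // hashcode -> handle
-    uint32_t next_code = 1;
-
-    // Hand out fresh hashcodes cyclically, skipping zero and any code
-    // still in use, so a stale hashcode fails explicitly instead of
-    // silently aliasing new state.
-    uint32_t fresh_hashcode(uint32_t handle) {
-        while (next_code == 0 || live.count(next_code)) ++next_code;
-        live[next_code] = handle;
-        return next_code++;
-    }
-
-    // Lookup fails loudly if the hashcode has been released.
-    bool find(uint32_t code, uint32_t& handle_out) {
-        auto it = live.find(code);
-        if (it == live.end()) return false;
-        handle_out = it->second;
-        return true;
-    }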
-
-# Very long lived events
-
-If the event handler refers to a very long lived event (maybe we are waiting for a client to download waiting message objects from his host, email style, and expect to get his response through our host, email style) it stores its associated POD data in the database, deletes it from the database when the event is completed, and if the program restarts, the program reloads it from the database with the original hashtag, but probably a new handle. Obviously database access would be an intolerable overhead in the normal case, where the event is received or timed out quickly.
-
-# Practical message size limits
-
-Even a shitty internet connection over a single TCP-IP connection can usually manage 0.3Mbps, roughly 0.035 megabytes per second, and we try to avoid message objects larger than one hundred KB. If we want to communicate a very large data structure, we use a lot of one hundred KB objects, and if we are communicating the blockchain, we are probably communicating with a peer who has at least a 10Mbps connection, so we use a lot of two MB message objects.
-
-1Mbps download, 0.3Mbps upload: third world cell phone connection, third world roach hotel connection, erratically usable.\
-2–4Mbps: basic email and web surfing, video not recommended\
-4–6Mbps: good web surfing experience, low quality video streaming (720p)\
-6–10Mbps: excellent web surfing, high quality video streaming (1080p)\
-10–20Mbps: high quality video streaming, high speed downloads, business grade speed
-
-A transaction involving a single individual and a single recipient will at a minimum have one signature (which identifies one UTXO, a rhocoin, making it a TXO), hence $4*32$ bytes, two new utxos, unspent rhocoins, hence $2*40$ bytes, and a hash referencing the underlying contract, hence 32 bytes – say 256 bytes, 2048 bits, in all. That is likely to fit in a single datagram, and at 2048 bits each you can download about six thousand of them per second on a 12Mbps connection ($12*10^6/2048 \approx 5900$).
-
-On a third world cell phone connection, downloading a one hundred kilobyte object has a high risk of failure, and a busy TCP-IP connection has a short life expectancy.
-
-For communication with client wallets, we aim that message objects received from a client should generally be smaller than 20KB, and records sent to a client wallet should generally be smaller than one hundred KB. For peer wallets and server wallets, generally smaller than 2MB. Note that bittorrent relies on 10KB message objects to communicate potentially enormous and complex data structures, and that the git protocol communicates short chunks of a few KB. Even when you are accessing a packed file over git, you access it in relatively small chunks, though when you access a git repository holding packed files over the https protocol, you download the whole, potentially enormous, packed file as one potentially enormous object. But even with git over https, you have the alternative of packing it into a moderate number of moderately large packed files, so it looks as if there is a widespread allergy to very large message objects. Ten K is the sweet spot: big enough for context information overheads to be small, small enough for retries to be non disruptive, though with modern high bandwidth long fat pipes, big objects are less of a problem, and streamline communication overheads.
-
-# How many shared secrets, how often constructed
-
-The overhead to construct a shared secret is 256 bits and 1.25 milliseconds, so, if the CPU spent half its time establishing shared secrets, it could establish one secret every 2.5 milliseconds – on a ten megabit per second connection, one secret every twenty five thousand bits.
-
-Since a minimal packet is already a couple of hundred bits, this does not give a huge amount of room for a DDoS attack. But it does give some room. We really should be seriously DDoS resistant, which implies that every single incoming packet needs to be quickly testable for validity, or cheap to respond to. A packet that requires the generation of a shared secret is not terribly expensive, but it is not cheap.
-
-So, we probably want to impose a cost on a client for setting up a shared secret. And since the server could have a lot of clients, we want the cost per server to be small, which means the cost per client has to be mighty small in the legitimate non DDoS scenario – it is only going to bite in the DDoS scenario. Suppose the server might have a hundred thousand clients, each with sixteen kilobytes of connection data, for a total of about two gigabytes of ram in use managing client connections. Well then, setting up shared secrets for all those clients is going to take a hundred and twenty five seconds, which is quite a bit. So we want a shared secret, once set up, to last for at least ten to twenty minutes or so. We don’t want clients glibly setting up shared secrets at whim, particularly as this could be a relatively high cost on the server for a relatively low cost on the client, since the server has many clients, but the client does not have many servers.
-
-We want shared secrets to be long lived enough that the cost in memory is roughly comparable to the cost in time to set them up. A gigabyte of shared secrets is probably around ten million shared secrets, which would take around three and a half hours to set up. Therefore, we don’t need to worry about throwing shared secrets away to save memory – it is far more important to keep them around to save computation time. This implies a system where we keep a pile of shared secrets, and the accompanying network addresses, in memory: a hashtable that maps wallets existing in other processes to handles to shared secrets and network addresses existing in this process. So each process has the ability to speak to a lot of other processes cached, and probably has some durable connections to a few other processes. Which immediately makes us think about flood filling data through the system without being vulnerable to spam.
-
-Setting up tcp connections and tearing them down is also costly, but it looks as though, for some reason, existing code can only handle a small number of tcp connections, so they encourage you to continually tear them down and recreate them. Maybe we should shut down a tcp connection after eighteen seconds of nonuse: check them at every multiple of eight seconds past the epoch, refrain from reuse twenty four seconds past the epoch, and shut them down altogether after thirty two seconds. (The reason for checking them at certain times past the epoch is that shutdown is apt to go more efficiently if initiated at both ends.)
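-
-A minimal sketch of that aging rule, assuming ticks aligned to multiples of eight seconds past the epoch and a per connection last use timestamp (names and thresholds illustrative, following the numbers above):
-
-    #include <ctime>
-
-    enum class conn_state { usable, no_new_requests, closed };
-
-    // Called on a timer aligned to multiples of eight seconds past the
-    // epoch, so both ends tend to initiate shutdown in the same tick.
-    conn_state age(std::time_t now, std::time_t last_used) {
-        std::time_t idle = now - last_used;
-        if (idle >= 32) return conn_state::closed;           // tear down
-        if (idle >= 24) return conn_state::no_new_requests;  // drain only
-        return conn_state::usable;                           // keep reusing
-    }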
-
-All this means it would be intolerable to have a shared secret generation in every UDP packet, or even in very many UDP packets, so to prevent DDoS attack, and just to have efficient communications, we have to have a deal where you establish a connection, cheaply for the server but potentially expensively for the client, before you construct a shared secret.
-
-A five hundred and twelve bit hash, however, takes 1.5 microseconds – which is cheap. We can use hashes to resist DoS attacks, making the client return to us the state cookie unchanged. If we have a ten megabit connection, then every packet is roughly the size of a hash, and our hashing throughput is roughly three hundred megabits per second, so it is not that costly to hash everything.
-
-How big a hash code do we need to identify the shared secret? Suppose we generate one shared secret every millisecond. Then thirty two bit hashcodes are going to roll over in about fifty days. If we have a reasonable timeout on inactive shared secrets, reuse is never going to happen, and if it does happen, the connection fails. Well, connections are always failing for one reason or another, and a connection inappropriately failing is not likely to be very harmful, whereas a connection seemingly succeeding, while both sides make incorrect and different assumptions about it, could be very harmful.
-
-# Message UDP protocol for messages that fit in a single packet
-
-When I look at [the existing TCP state machine](https://www.ietf.org/rfc/rfc0793.txt), it is hideously complicated. Why am I thinking of reinventing that? [Syn cookies](http://cr.yp.to/syncookies.html) turn out to be less tricky than I thought – the server just sends a secret short hash of the client data and the server response, which the client cannot predict, and the client response to the server response has to be consistent with that secret short hash.
-
-Well, maybe it needs to be that complicated, but I feel it does not. If I find that it really does need to be that complicated, well, then I should not consider re-inventing the wheel.
-
-Every packet has the source port and the destination port, and in tcp initiation, the client chooses its source port at random (bind with port zero) in order to avoid session hijacking attacks. The range of source ports runs up to 65535.
-
-Notice that the two sixteen bit ports give us $2^{32}$ possible channels between any pair of hosts, and then on top of that we have the 32 bit sequence number.
-
-IP eats up twenty bytes, and then the source and destination ports eat four more bytes. I am guessing that NAT just looks at the port numbers and addresses of outgoing packets, and then, if an equivalent incoming packet arrives, just cheerfully lets it through. TCP and UDP ports look rather similar: every packet has a specific server destination port, and a random client port. Random ports are sometimes restricted to 0xC000-0xFFFF, and sometimes mighty random (starting at 0x0800 and working upwards seems popular), but 0xC000-0xFFFF viewed as a hashcode seems ample scope. Bind for port 0 returns a random port that is not in use; use that as a hashcode.
-
-The three phase handshake is:
-
-1. Client: SYN, my sequence number is X, my port number is random port A, and your port number is well known port B.
-1. Server: SYN/ACK, my sequence number is Y, acknowledging your X+1, my port number is well known B, and your port number is random port A.
-1. Client: ACK, my sequence number is X+1, acknowledging your Y+1, my port number is random port A, and your port number is well known port B.
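-
-A minimal sketch of the syn cookie idea: the server allocates nothing on syn, it just returns a short keyed hash that the client must echo back unchanged (`mac16` is an illustrative placeholder, not a real MAC; a real implementation would use a proper keyed hash and rotate the key):
-
-    #include <cstdint>
-
-    // Placeholder keyed hash over (client syn, server response, key).
-    uint16_t mac16(uint64_t syn, uint64_t response, uint64_t key) {
-        uint64_t x = (syn * 0x9E3779B97F4A7C15ull) ^ response ^ key;
-        x ^= x >> 29; x *= 0xBF58476D1CE4E5B9ull; x ^= x >> 32;
-        return uint16_t(x);
-    }
-
-    uint64_t server_key = 0x0123456789ABCDEFull;  // secret, rotated
-
-    // On syn: reply with the cookie, allocate no state at all.
-    uint16_t on_syn(uint64_t client_syn, uint64_t server_response) {
-        return mac16(client_syn, server_response, server_key);
-    }
-
-    // On ack: allocate state only if the echoed cookie checks out.
-    bool on_ack(uint64_t client_syn, uint64_t server_response,
-                uint16_t cookie) {
-        return cookie == mac16(client_syn, server_response, server_key);
-    }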
-
-The sequence number is something like your event hashcode – or perhaps an event hashcode for grouped events, with the tcp header being the group.
-
-Assume the process somehow has an accessible and somehow known open UDP port. Client low level code can somehow get hold of the process port and IP address associated with the target elliptic point, by some mechanism we are not thinking about yet.
-
-We don’t want the server to be wide open to starting any number of new shared secrets. Shared secrets are costly enough that we want them to last as long as cookies. But at the same time, recommended practice is that ports in use do not last long at all. We also might well want a redirect to another wallet in the same process on the same server, or a nearby process on a nearby server. But if so, let us first set up a shared secret that is associated with the shared secret on this port number, and then we can talk about shared secrets associated with other port numbers. Life is simpler if there is a one to one mapping between access ports and durable public and private keys, even if behind that port are many durable public and private keys.
-
-# UDP protocol for potentially big objects
-
-The tcp protocol can be thought of as the tcp header, which appears in every packet of the stream, being a hashcode event object, and the sequence number, which is distinct and sequential in every packet of the unidirectional stream, being an `std::deque` event object, which fifo queue is associated with the hashcode event object.
-
-This suggests that we handle a group of events, where we want an event that fires when all the members of the group have successfully fired, or one of them has unrecoverably failed, with the group being handled as one event by a hashcode event object, and the members of the group by event objects associated with a fifo queue for the group.
-
-When a member of the group is triggered, it is added to the queue. When it is fired, it is marked as fired, and if it is the last element of the queue, it is removed from the queue; if the next element is also marked as fired, that also is removed from the queue, and so on, until the last element of the queue is marked as triggered but not yet fired. In the common case where we have a very large number of members, which are fired in the order, or approximately the order, that they are triggered, this is efficient. When the group event is marked as all elements triggered and all elements fired, and the fifo queue is empty, that fires the group event.
-
-Well, that would be the efficient way to handle things if we were implementing TCP, a potentially infinite stream, all over again, but we are not.
-
-Rather, we are representing a big object as a stream of objects, and we know the size in advance, so we might as well have an array that remains fixed size for the entire lifetime of the group event. The member event identifiers are indexes into this one big fixed size array.
-
-The event identifier is run time detected as a group event identifier, so it expects its event identifier to be followed by an index into the array, much as the sequence number immediately follows the TCP header.
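-
-A minimal sketch of such a group event, assuming the object’s size, and hence the number of member chunks, is known when the group is created (illustrative, not the actual event machinery):
-
-    #include <cstddef>
-    #include <vector>
-
-    // Tracks a big object received as fixed position chunks.  The
-    // group fires once every member slot has fired exactly once.
-    struct group_event {
-        std::vector<bool> fired;   // one slot per chunk, fixed size
-        std::size_t remaining;
-
-        explicit group_event(std::size_t chunks)
-            : fired(chunks, false), remaining(chunks) {}
-
-        // Returns true when member i completes the whole group.
-        bool fire(std::size_t i) {
-            if (i >= fired.size() || fired[i]) return false; // junk or dup
-            fired[i] = true;
-            return --remaining == 0;
-        }
-    };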
-
-I would kind of like to have a [QUIC] protocol eventually, but that can wait. If we have a UDP protocol, the communicating parties will negotiate a UDP port that uniquely identifies the processes on both computers. Associated with this UDP port will be the default public keys and the hash of the shared secret derived from those public keys, and a default decryption shared secret. The connection will have a keep alive heartbeat of small packets, and a data flow of standard sized large packets, each the same size. Each message will have a sequence number identifying the message, and each UDP packet of the message will have the sender sequence number of its message, its position within the message, and, redundantly, the power of two size of the encrypted message object. Each message object, but not each packet containing a fragment of the message object, contains the unencrypted hashtag of the shared secret, the hashtag of the event object of the sender, which may be null if it is the final message, and, if it is a reply, the hashtag of the event object of the message to which it is a reply, and the position within the decryption stream as a multiple of the power of two size of the encrypted message. This data gets converted back into standard message format when it is taken off the UDP stream.
-
-Every data packet has a sequence number, and each one gets an ack, though the receiver only acks when its input queue is empty, so several data packets get a group ack. If packets go missing, the receiver responds with a nack (huh, what packets?), and the sender resends the packets. If the receiver persistently fails to respond, sending the message object has failed, and the connection is shut down. If the receiver can respond to nacks, but not to data packets, maybe our data packet size is too big, so we halve it. If that does not work, sending the message object has failed, and the connection is shut down.
-
-[QUIC] streams will be created and shut down fairly often, each time with a new shared secret, and a message object reply may well arrive on a new stream distinct from the stream on which it was sent.
-
-Message objects, other than nacks and acks, intended to manage the UDP stream are treated like any other message object, passed up to the message layer, except that their results get sent back down to the code managing the UDP stream. A UDP stream is initiated by a regular message object, with its own data to initiate a shared secret, small enough to fit in a single UDP packet; it is just that this message object says “prepare the way for bigger message objects” – the UDP protocol for big message objects is built on top of a UDP protocol for message objects small enough to fit in a single packet.
diff --git a/docs/identity.md b/docs/identity.md
deleted file mode 100644
index 8407398..0000000
--- a/docs/identity.md
+++ /dev/null
@@ -1,1137 +0,0 @@
----
-title:
-  Identity
-...
-# Syntax and semantics of identity
-
-The problem is, we need a general syntax and semantics to express identity.
-
-Our use cases are likely to include a big pile of documents signed by diverse people, with no contact information, some of them encrypted so that they can only be read by people with certain private keys, with no indication of the public key corresponding to that private key.
-
-So, what is our ascii armoured signature going to look like?
-
-If we are ascii armouring, we are likely signing a utf8 string, which will be hashed as a count based string introduced by an arbitrary precision integer and followed by a null that is not included in the count, even if it is a null terminated string with no count, or a count based string that normally has no null terminator. This is to ensure that it is impossible to concoct a multiple string sequence that will have the same hash for a group of strings as for the same strings grouped differently, and so that different computers with different word lengths and different endianness will generate the same hash for the same string or sequence of separate strings.
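-
-A minimal sketch of that framing, assuming a little endian base 128 varint for the arbitrary precision count (the document does not pin down the integer encoding, so that choice is an assumption):
-
-    #include <cstdint>
-    #include <string>
-    #include <vector>
-
-    // Frame a utf8 string for hashing: varint count, the bytes, then a
-    // trailing null not included in the count.  Hashing concatenated
-    // frames can then never collide with a different grouping of the
-    // same bytes.
-    std::vector<uint8_t> frame(const std::string& s) {
-        std::vector<uint8_t> out;
-        uint64_t n = s.size();
-        do {                          // little endian base 128 varint
-            uint8_t b = n & 0x7F;
-            n >>= 7;
-            if (n) b |= 0x80;         // high bit: more digits follow
-            out.push_back(b);
-        } while (n);
-        out.insert(out.end(), s.begin(), s.end());
-        out.push_back(0);             // terminating null, not counted
-        return out;
-    }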
-
-A sig block consists of:
-
-> `{sig` 252 bit bitstring as base sixty four characters, arbitrary sequence of non base sixty four characters, 252 bit bitstring as base sixty four characters, optional arbitrary sequence of non base sixty four characters, 256 bit bitstring as base sixty four characters representing the public key, optionally followed by whitespace or linefeed characters, followed by arbitrary utf8 characters representing the nickname of that public key, which must start with a non whitespace character, the name being followed by `}` and may not contain `}`
->
-> Or alternatively the nickname may be represented by a nickname block composed of `{nick`, optionally followed by an arbitrary sequence of bracketing or whitespace or symmetric ascii characters followed by `“`, followed by the nickname, followed by `”` followed by the reverse sequence, followed by `}`
-
-A single signed string may have several different but equivalent ascii armorings:
-
-* unquoted
-  * blank line or start of document.
-  * `:::` on a line by itself
-  * string to be signed on a line by itself (or lines by itself if it contains line feeds)
-  * `:::` followed by sig block.
-  * blank line
-
-* quoted
-  * blank line or start of document.
-  * `:::` followed by arbitrary sequence of bracketing or whitespace or symmetric ascii characters followed by `“`, on a line by itself
-  * string to be signed on a line by itself (or lines by itself if it contains line feeds)
-  * `”` followed by the reverse arbitrary sequence followed by `:::` followed by sig block.
-  * blank line
-
-* inline unquoted
-  * `[` string to be signed `]` followed by sig block.
-
-* inline quoted
-  * `[` followed by arbitrary sequence of bracketing or symmetric ascii characters followed by `“`, followed by string to be signed, followed by `”` followed by reverse arbitrary sequence of bracketing or symmetric ascii characters, followed by `]` followed by sig block.
-
-A signature represents an identity. If a means of contacting that identity is to be represented, it will be represented outside of and separately from that signature.
-
-Very commonly we want to sign not just one arbitrary string, but an arbitrary string and/or a nickname and/or a public key. This is done similarly to the above, with the public key introduced by a hash sign, and the nickname bare or in a nick block.
-
-For example:
-
-> `:::`\
-> Hi\
-> `:::` John Hancock `#0123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghi`
-> `{sig 0123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefgh 0123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefgh}`
-
-Or:
-
-> `[`Hi`]` John Hancock `#0123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghi`
-> `{sig 0123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefgh 0123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefgh}`
-
-We cheerfully assume that strings have no semantically and syntactically significant characters, and if they do have semantically significant characters, we bracket the string with angle quotes, or, if the string contains angle quotes, with angle quotes and an arbitrary string of non significant bracketing, symmetric, or whitespace ascii characters around the angle quotes.
-
-`#`, `“`, `”`, `"`, `:::`, `[`, `]`, `{`, and `}` are semantically and syntactically significant in certain contexts, and to distinguish between the endless variety of uses to which they will be put, the closing `:::` or the opening `{` will generally be immediately followed by a short label identifying the particular use to which it is being put. A starting `:::` is preceded by a blank line or start of document, and an ending `:::` is followed by a `{label ...}` identifying the use to which it is put, followed by a blank line. The ending `:::` is a ternary operator, linking several fields.
-
-Thus, if someone obnoxiously wanted line feeds, angle quotes, and curly brackets in a nickname, perhaps:
-
-> John Hancock {prince of “darkness”}
-
-a signed block of text might then look like:
-
-> `:::`\
-> Hi\
-> `::: (|“`John Hancock {prince of “darkness”}`”|) #0123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghi`
-> `{sig 0123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefgh 0123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefgh}`
-
-with the `(|“` … `”|)` acting as quote marks that will cause whatever is inside them to be treated as a single string by the surrounding representation. To avoid the need for escaping special characters, we allow an infinite variety of quotation symbols.
-
-Our goal in breaking from uri syntax, json syntax, and even yaml and markdown syntax, is a syntax that allows arbitrarily complex cryptographic expressions to be represented to the end user as intuitively as possible.
-
-This syntax is inspired by Pandoc markdown, which extended the syntax of markdown to allow better combination of markdown with html and css, while still holding on to as much readability as possible.
-
-Whitespace characters can be liberally inserted and will generally be ignored, except that they have to be balanced when demarking quotes, or when they are part of the string or the nickname. Literal strings will only occur within these labelled bracketing operators, or within quotes inside these labelled bracketing operators.
-
-In an environment where lines are represented by something other than line feed characters, the lines shall be converted into line feed characters before being hashed and signed, and line feeds in the signed string may be represented by lines native to the environment when the ascii armored signed string is displayed there.
-
-# Primary and most urgent use case
-
-Our primary use case, however, is not mere identity, but a link that will bring you to a shopping cart, containing a link to a checkout, containing a link that will say “Your order has been placed” and will generate a record that the payment has been attempted, containing a link that can check on your payment status and generate a review or respond to the vendor, which should be followed eventually by an email like message from the vendor that your payment has gone through, also containing a link that could generate a review and respond to the vendor.
-
-Our secondary case is signed messages subject to usenet style authentication and hashtag style pooling, distribution, and grouping (rather than usenet hierarchical grouping), and to spam filtering by the distributors (which is necessarily indistinguishable and inseparable from political filtering, because a lot of political messaging *is* spam, containing repetitiously similar messages, which messages are apt to be misleading and manipulative).
-And we want social media style following, liking, downvoting, and reposting, where a repost should not result in the same message appearing in someone’s feed multiple times, but should appear in the feeds of the followers of the reposting party who did not follow the posting party, with the reposters that you are following shown on the repost. Friending should be a request to the client’s server for permission to communicate, but following should grant permission and capability for one way communication, while successfully sending a message automatically, by default, grants permission to communicate one way, with one way one on one communication permitting two way one on one communication. Thus mutual following should by default permit mutual two way one on one communication.
-
-Mutual following should by default imply friending, and friending should by default imply mutual following.
-
-We should also allow exploding messages, which are authenticated by the client to his server, and authenticated by his server, but you have to log in, perhaps anonymously or under a throwaway identity, to his server in order to see them – they cannot go in usenet style pooling.
-
-An identity should be able to function as host and server if the owner of that identity chooses to set that identity to continually publish its signed network address (obtained from its host). This enables anyone to send messages directly to that identity, but normally most such messages are automatically ignored and discarded.
-
-Thus if an identity is selling something controversial or arguably illegal, and is functioning as a server, in order that people can make purchases, anyone in the world can discover and monitor its network address. Therefore, purchases have to be possible between client non host identities, whose network addresses are not widely known. The transaction sequence (link, shopping cart, purchase screens) has to be email like.
-
-Our primary use case for exploding messages is discussing the manufacture, sale, and purchase of goods, such as guns, which may attract attacks.
-
-In this case you have a non radioactive server carrying signed messages, some of them mildly radioactive and carrying links to a mildly radioactive server, carrying highly radioactive messages, some of them exploding, which contain links to contactable identities through which radioactive goods and services can be bought and sold, leading to signed or exploding reviews supporting or detracting from the reputations of sellers and purchasers of highly radioactive goods.
-
-This implies that a cryptographic resource identifier has names and a public key, or a blockchain transaction output identifier, but may be useless without a locator which identifies a host. As a general rule, your host will know where to find that host.
-
-Cryptographic resource identifiers should be visible to the human as per message petnames of the entity embedding the resource identifier, qualified by the user’s petname, if any, and the entity’s nickname, with the embedding defaulting to the entity’s nickname, unless the composer provides a per message petname.
-
-Because messages are widely distributed, using authentication rather than signing would not make a difference and would be difficult to implement. The signing identities therefore need to be discardable, but they need to permit the possibility of one on one contact, usually through a peer that hosts the client identity, or perhaps directly, if the sender wishes it.
-If it is a sales message, which is our primary use case, it will frequently be convenient to be contactable directly, usually through a named blockchain transaction output, supported by the pool of signed network addresses associated with blockchain outputs, which typically contains not the network addresses of the keys signed by the blockchain key, but the network addresses of keys signed by that key as having authority to communicate on its behalf.
-
-We aim for a pseudonymous currency supporting reputations, not an anonymous currency. We worry first about identity, and actually locating that entity is an ad hoc afterthought cobbled on top. We want cryptographic resource identifiers, which in some cases have the side effect of locating the resource, and in other cases have the effect of identifying a byte string that has somehow arrived on your computer, or of discovering that you don’t have certain items of evidence that should have arrived on your computer, but have not, and should be searched for on the cloud.
-
-The foundation of our system is probabilistically unique 256 bit cryptographic identifiers.
-
-# Cannot comply with uri syntax
-
-I would like to conform to the Uniform Resource Locator and Uniform Resource Identifier syntax and semantics, but they are too married to the existing system, and we are going to have subtly different, and much larger, semantics. They have reserved no end of characters for their own syntax, with their own semantics, and expressing complicated semantics within a syntax designed for different and more restricted semantics, with the useful characters reserved by that syntax, just does not fit too well. And they never got hip to utf8. And they prohibit spaces.
-
-There is an obvious problem in permitting unicode in identifiers, since the confusible identifier problem is insoluble even with ascii only identifiers, and becomes enormously harder with unicode identifiers. So, since no end of people have tried unsuccessfully to solve it with ascii identifiers, we will make absolutely no attempt to solve it. Instead, we make it more difficult to use confusible identifiers maliciously, and trust people to do their best to make their identifiers distinctive.
-
-The usual malicious use of confusible identifiers is a man in the middle attack: “This is an urgent notice from BigImportantFinancialInstitution. You need to login immediately or all your money will be frozen or lost.”
-
-Then you log in to an entity that has a name that can be confused with that of BigImportantFinancialInstitution, give it all the secrets that you share with the actual BigImportantFinancialInstitution, and lose all your money.
-
-Obviously this is not going to work with cryptographic secrets, since they are unshared. You log in with a strong private secret, instead of a weak shared secret. So we should just go on living with the possibility and likelihood of confusible identifiers, as we have already been doing for a very long time, and expect that they will cause far fewer problems.
-
-If you contact a confusible entity, and it looks confusible to your client program, the interface will detect the collision and demand a distinctive petname, rendering it no longer confusible. Thus confusion becomes a client problem in a small space of names, rather than a system problem in a very large space of names. People who create clients will worry about it when and if people who use clients are bothered by it.
-Which they probably will not be. We will not worry about it unless people start demanding a fix. Which is only going to happen if malicious people find a use for a confusible. Which, if we do other parts of our job right, is not going to happen. Experience has demonstrated that it is always easy to maliciously create a confusible name. The solution is to set things up so that the malicious person can do little harm with it, so that no one wants to create confusible names.
-
-Spaces are forbidden in a uri, because one routinely passes uris as arguments on the command line, and arguments are separated by whitespace. But the lack of spaces is a severe blow to intelligibility, and in particular a severe blow against [Zooko](./zookos_triangle.html) nicknames. One way around this rule is to have a rule for interpreting command line arguments that if an item is contained in angle quotes or brackets, it is one item, and to have, as part of the scheme syntax and schema, a rule for interpreting the object that if it begins with an angle quote or bracket, it ends at the matching angle quote or bracket, and similarly for any item within it. If an operator expects an argument, and there is a bracket or angled quote mark immediately after or before it, then everything inside the brackets, up to the matching bracket, is one argument.
-
-The no spaces rule is a reflection of the widespread use of lexers when we should be using parsers. Lexers fail to provide sufficient expressive power. We should break compatibility by demanding that anything that can handle our expressions uses a command line parser rather than a lexer, because you are just not going to be able to handle [Zooko](./zookos_triangle.html) nicknames in a lexer.
-
-The uri syntax is written for a lexer, not a parser, and the command line syntax is written for a lexer. Bad!
-
-A program that accepts cryptographic identifiers on its command line is just going to have to parse, rather than lex, its command line. Which is a pain, because all the standard libraries for command line handling are lexers, not parsers.
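-
-A minimal sketch of the bracket rule above, parsing one command line argument: an argument that opens with a bracket runs to the matching close, nesting included (illustrative only; real angle quotes are multibyte utf8, so only ascii brackets are handled here):
-
-    #include <cstddef>
-    #include <string>
-
-    // Given text starting at pos, return one argument: up to the next
-    // space, or, if it opens with a bracket, the whole bracketed span.
-    std::string one_argument(const std::string& text, std::size_t pos) {
-        auto close_of = [](char c) -> char {
-            if (c == '[') return ']';
-            if (c == '(') return ')';
-            return 0;
-        };
-        char close = close_of(text[pos]);
-        if (!close) {                    // ordinary word: lex on spaces
-            std::size_t end = text.find(' ', pos);
-            return text.substr(pos, end - pos);
-        }
-        int depth = 0;
-        std::size_t i = pos;
-        for (; i < text.size(); ++i) {   // parse to the matching close
-            if (text[i] == text[pos]) ++depth;
-            else if (text[i] == close && --depth == 0) break;
-        }
-        return text.substr(pos, i + 1 - pos);
-    }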
-
-In order to allow people to identify themselves and their computers on the cloud with distinct and intelligible cryptographic resource identifiers, we are going to assume parsers everywhere and unicode everywhere. If parsers are far from everywhere, they should be.
-
-We could easily conform to the uniform resource identifier syntax and semantics by adding arbitrarily complex syntax and semantics within the authority field [RFC2396 Section 3.2](https://tools.ietf.org/html/rfc2396).
-
-Oracle’s way of dealing with this problem was to have a pile of alphanumeric keywords separated by dots in the authority. And then they could have all the syntax and semantics they needed on the keywords, which were lexed by the dot separators within the authority field. This, of course, resulted in long and unintelligible authority fields full of verbose boilerplate. For intelligibility, we need to have the cryptographic resource locator look as if it is an unknown scheme that has no authority component.
-
-Trouble is that the restricted character set (no spaces, no unicode beyond seven bit ascii, and restrictions on the use of ascii punctuation characters) means that the resulting uris would not be visually distinctive – it would be hard for end users to make their uris look different and memorable.
-
-One way out of this conundrum is to have our own non conformant cryptographic resource identifiers, which can be made visually distinctive, and which can be, if needed, mapped to indecipherable gibberish that all looks alike using % byte codes (I refuse to call them octet codes), but which conforms to the permitted character set within the authority field.
-
-Html tag restrictions prevent us from using \" or \' marks inside an attribute (because an attribute may be contained inside \" or \'), and html attributes prohibit newlines, and provide no way of escaping newlines into an attribute. At least we can escape \" inside an attribute with `&quot;` and \' with `&#039;`, or with % encoded octets, but this is useless because we don’t want our entities to look different inside html than they do to the parser.
-
-[RFC3986 Section 2.2](https://tools.ietf.org/html/rfc3986) effectively restricts lexing and parsing to ninety three characters, *excluding spaces* because a uri is likely to be in a context that lexes on spaces, \", and \', and then lexes and parses as one operation by reserving no end of special characters, forgetting decades of study of the lexing and parsing problem.
-
-As they attempt to squeeze more and more semantics into this impoverished syntax, the syntax and semantics become ever more impoverished. And we plan to permit a whole lot of new semantics in the authority field.
-
-The impoverished syntax and semantics led to the triple slash in the file scheme. Before they could get to the field with file syntax and semantics, they had to have an empty field with authority syntax. We want visually distinctive cryptographic resource identifiers, without distracting rubbish wrapping them.
-
-We do not need the // to distinguish the authority field, because we have several different bases for authority, and need to distinguish them within the authority field. Which means that anything parsing the cryptographic resource identifier as a uri is going to demand relative url syntax and character set for the whole thing.
-
-# [Digression on parsers](./parsers.html)
-
-There is no real difference between a resource locator and a resource identifier, because usually a resource identity depends on a name assigned by some authority, and when you contact that authority, it is likely to know where the resource is, and a resource locator may well refer to a resource that cannot be located, resulting in the ubiquitous 403 and 404 messages; thus they necessarily have the same syntax and semantics. So they are best all called resource identifiers, particularly as the purpose of cryptography is to identify, not to locate.
-
-But still, we are going to have to be able to recognize normal uris, so we will have to start an absolute resource identifier with a scheme, and might as well end the scheme name with a colon, though we might piss on them by having our scheme name be a unicode character that is not permitted, under some circumstances followed by a space, which is not permitted either.
-
-But, on the other hand, we would like environments that do not know about cryptographic resource identifiers to at least be able to recognize one as an unknown scheme, so I guess `rho:` it is, at least as an option. But after that colon, it is our playing field, and no longer “universal”. Past the colon, a much bigger syntax than the overly grandiose “universal” will apply.
-
-The universal scheme is inherently far from “universal”, because as soon as you put anything in a “universal” scheme other than how to recognize the scheme identifier, it ceases to be universal.
-
-The “universal” resource identifier scheme contains no end of stuff that belongs not in the “universal” scheme but in particular schemes.
-
-The reason for the irritating double slash is protocol relative authorities. You could potentially leave out the `http:`, in which case you have to distinguish the authority from a local directory name.
-
-The triple slash in `file:///` makes little sense; it is there because of `file://`(implied authority)`/`. The slash is there because of “universal” syntax and semantics that are not applicable to file systems. Unfortunately, however, while file systems have a subset of universal semantics, we have a superset.
-
-In our case, we might reference keys by name anywhere, and they might have local names or blockchain names. So we could have explicit keys, keys referenced by local name (which might well be a chain of names specifying a path through a local tree of bookmarks and contacts), keys referenced by unspent transaction output sequence number (which are supposed to generate an exception if already spent), keys referenced by transaction sequence number (the number is unchanged by spending, and should not generate an exception if spent), and keys referenced by the name of the string that they own on the blockchain. And since local and blockchain names are just strings, you have to identify what the name refers to.
-
-So, rather than the // system for designating authorities, we will have a label, which may be a single reserved character, or a string followed by a “:”.
-
-We do not want to restrict possible names, since we want maximum distinctness, intelligibility, and recognizability. We want names to have available to them spaces and the full scope of characters.
-
-In order that we can use brackets to denote a string entity containing terminals that could be interpreted as syntactically significant, rather than just more string, our syntax will have to denote different kinds of strings. Obviously a bracket enclosed string can contain operators that, if interpreted as operators rather than part of the string, would produce a symbol that is not of string type, so if the parser is looking at the expression to see if it can be a string, it can accept as strings symbols that in some contexts would be operators.
-
-Any sequence of strings is a string, whose value is that of all the strings concatenated.
-
-Any sequence of non whitespace unreserved characters is a string.
-
-Any whitespace between two strings is a string.
-
-Reserved characters can be used as part of a string in contexts where, if interpreted as syntactically significant, the resulting non terminal would be of unexpected type.
-
-A [html entity](https://www.w3.org/wiki/Common_HTML_entities_used_for_typography) like `&rdquo;` is a single character string, in this case a string containing an unbalanced quote mark.
-
-A unicode character may also be represented by one or more % encoded bytes (or as the standards people, who still bow down to the tyranny of punched card machines long turned to rust, call them, “octets”).
-
-Of course many environments will throw up if you have spaces or unicode characters within something that looks like a uniform resource identifier, in which case you can make your string out of html entities or % encoded bytes corresponding to utf8 encoded characters. It will look ugly and incomprehensible, impossible to read and very hard to write, but the ascii armor will protect it in an environment that does not like non ascii. Tidy throws up at spaces or unicode beyond ascii in a domain name inside a uri, but is happy with unknown schemes, and happy with html entities representing non ascii unicode inside a domain name inside a uri, so you can armor any string into something that tidy will happily accept as an unknown scheme. But the parser is written for the twenty first century, and is intended to accept cryptographic resource identifiers that are human intelligible. We are still under the tyranny of standards set to accommodate punch card machines, and it is long past time that we broke compatibility with those standards.
-
-Arbitrary precision integers, arbitrary width windowed integers, public keys, bitstrings, and the like, are represented by a sequence of base sixty four digits immediately following, without intervening spaces, something that the parser knows should be immediately followed by a public key or whatever. They may also be represented by h% followed by hexadecimal digits, or b% followed by base two digits, but base sixty four is the default. An integer may also be represented by %d followed by decimal digits. Thus we can reference names that are distinctive and unique, and reference them even in cryptographic resource identifiers that have to be transported in text that restricts the available characters.
-
-Relative resource identifiers, which may be relative to a cryptographic resource identifier or to a universal resource identifier, will have to be valid.
-
-The left hand part of a cryptographic resource identifier has to be a scheme, and the right hand part is likely to be a standard relative uri. The central part, however, is going to be a cryptographic authority, and for that we need a much broader syntax than “universal”.
-
-And that cryptographic authority might well have several hosts with several temporary subkeys and several network addresses, and several human agents with several temporary keys and several network addresses. And each of those network addresses should have the up to date public keys of all of them. Everyone should be the equivalent of a domain name server for the groups of which he is part.
-
-The parser that parses a cryptographic resource identifier is going to encounter no end of things that require lookup over large local databases, databases in the cloud, and finally, lookup on the target host. Thus parsing a cryptographic resource identifier tends to be equivalent to locating the resource. A 403 or 404 is a parsing failure because of an undefined symbol.
-
-If an entity referenced in the identifier has existence, and perhaps class and parse type, some host, possibly far away, has given it that existence and parse type, so for the parse to succeed, the parser has to connect to that host.
-
-The straightforward case of [Zooko](./zookos_triangle.html) cryptographic resource identifiers is that your equivalent of a domain name is a public key. You look up the network address and the public key actually used for communication in the equivalent of the domain name system, and get a public key signed by the domain name key. Then you communicate with that address, with your communications being authenticated by that key. There are likely three destination public keys involved in making the connection: a durable public key whose corresponding private key is likely not on any computer, but written down on a page in an old book on the bookshelf of the rightful owner of that public key; a durable but replaceable key signed by that master key; and a session key that provides perfect forward secrecy, in that the corresponding secret key is discarded when the connection is closed.
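-
-A minimal sketch of that three key chain, with the signature check left as a stub (`verify` stands in for whatever signature scheme is actually used; none of these names are from the codebase):
-
-    #include <array>
-    #include <cstdint>
-
-    using pubkey = std::array<uint8_t, 32>;
-    using sig    = std::array<uint8_t, 64>;
-
-    // Placeholder only: a real implementation checks signature s by
-    // key k over message msg.
-    bool verify(const pubkey& k, const pubkey& msg, const sig& s) {
-        (void)k; (void)msg; (void)s;
-        return true;  // stub for the sketch
-    }
-
-    struct endpoint {
-        pubkey master;    // offline, written down in an old book
-        pubkey durable;   // replaceable, signed by master
-        sig    durable_sig;
-        pubkey session;   // per connection, discarded on close
-        sig    session_sig;
-    };
-
-    // The connection is good if master vouches for durable, and
-    // durable vouches for the session key offered on the wire.
-    bool chain_ok(const endpoint& e) {
-        return verify(e.master, e.durable, e.durable_sig)
-            && verify(e.durable, e.session, e.session_sig);
-    }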
-
-> `#4397439879483774378943798`
-
-Represents the public key of an entity that knows the corresponding secret key, or a block of data, commonly a widely shared and distributed block of data, that when hashed according to its schema has that hash. If a public key, you use it to make an authenticated connection or to authenticate a signature; if a hash, to authenticate data. If you can find it, you probably already know which it is and what you are going to do with it.
-
-If it is a public key, and you somehow locate the network address of that entity, it will either authenticate itself with that key, or authenticate itself with a public key signed by that entity authorizing it to act as its agent in some capacity.
-
-> Bob `#4397439879483774378943798`
-
-Represents the entity known to `#4397439879483774378943798` as Bob (though if it is part of a signature, it means that that entity knows itself as Bob, that that is the nickname of the identity that this key represents). Likely the entity itself, possibly with a different public key, possibly an agent authorized to speak and act on behalf of `#4397439879483774378943798`, or possibly some random guy on his friend’s list. If you have contacted `#4397439879483774378943798`, you have likely contacted `Bob`, and if not, `#4397439879483774378943798` may be of help in contacting him. In the unlikely event that the entity has a Bob on its contact list, and a different Bob on its authorized agent list, this will bring up the agent.
-
-> Bob `#4397439879483774378943798`
-
-The entity known to `#4397439879483774378943798` as “Bob”, but not authorized to act on behalf of `#4397439879483774378943798`. For another entity, Bob is probably a different Bob, and the same Bob probably has a different name at that entity.
-
-> Receivables `#4397439879483774378943798`
-
-The entity known to `#4397439879483774378943798` as “Receivables”, and authorized to act for that entity in some role. Possibly the entity itself.
-
-> `#4397439879483774378943798/foo`
-
-A data object on the computer that identifies itself on the network with this public key, or with a public key authorized by this public key. The data object is typically itself a name table, hence `#4397439879483774378943798/foo/bar/program_name/arbitrary data for program.`
-
-uris have tied themselves in knots distinguishing between a file, and data passed to the program represented by that file to execute. Probably better to just say that anything to the right of the slash is entirely up to the entity to the immediate left of the slash to interpret, and if it contains spaces and suchlike, use windows command line string representation rules, quote marks and escape codes.
    rho:#4397439879483774378943798
    rho:Bob#4397439879483774378943798
    Bob@#4397439879483774378943798
    Receivables.#4397439879483774378943798

fit into the Uniform Resource Identifier scheme, poorly.

    #4397439879483774378943798/foo

fits into the catchall leftover part of the Uniform Resource Identifier scheme.

    rho:Bob@Carol.Dave#4397439879483774378943798/foo

does not fit into it in the slightest, and I think the idea of compatibility with the URN system is a lost cause.

But public keys are non memorable and difficult to humanly distinguish. We need Zooko’s quadrangle.

# Zooko’s Quadrangle

Obviously you need to be able to reference human readable names on the blockchain, which is the fourth corner of [Zooko’s triangle].

[Zooko’s triangle]: ./zookos_triangle.html

# Location

We want any identity to be able to send end to end encrypted messages to any other identity, but we don’t want ten thousand scammers to be able to spam an identity of which they know nothing other than that he can send money over the internet, nor do we want to allow distributed denial of service attacks on ordinary users (big peers can take care of themselves).

Universal Resource Locators, urls, are not semantically or syntactically distinct from Uniform Resource Identifiers: they are wrapped around methods for finding stuff on the internet, and methods for finding stuff on the internet are wrapped around them. But this was designed for a smaller and more trusting world. Today, the problem is not finding data, but rather preventing hostile and ill willed people from finding data, and from interfering with communication by supplying falsified data. So we need names that are rooted on cryptographic foundations.

If you have a system of naming that can securely identify that you are getting the right connection and the right data, you can wrap location around it and attach location to it ad hoc. SQL is a language for generating on the fly ad hoc efficient methods for accessing data that is specified in ways that bear a very indirect relationship to its location.

The equivalent of a web page in the cloud obviously has to have a globally unique human readable name, being the website page and the identifier on the website, which is associated with a network address that hostile parties can find, to mount a DDoS attack or send the cops around, and associated with a public key, or chain of public keys. But we do not want the equivalent capability to send messages to humans, or to find them. Yet we want everyone to be able to talk privately to everyone.

Suppose someone is publishing information which he wants widely known. He wants to cooperate with people who will cooperate with him, supplying them with good information that is arguably grounds to act in his interest and their own. Well, often unpleasant people, or the government, do not want that to happen. For example people cooperating to build guns, and providing information on building guns. The government might well prefer that people do not have guns. Or a wealthy Chinese man wants to use the Chinese diaspora to move his assets to where the party cannot get at those assets. Or someone simply has money, and other people are hoping to scam him into giving them some, or give him a hard time so that he pays them to go away, or just hate him because he has money.
Then he likely wants to pay for and operate the server publishing that information disconnected from his tax number, his real address, and a face that can be beaten in. So he wants the server identity and network address widely known, but he does not want the network address through which he makes payments for that server known to anyone except the people he is paying for the server.

In order to send a message directly to an identity, you are going to need its network address (which might be intermediated by a peer), but we don’t want to make the network address of every identity public. Sometimes, often, the network address associated with an identity needs to be a narrowly shared secret.

We can fix distributed denial of service attacks by inserting a proof of work demand in the first syn-ack of the three way connection setup handshake (syn, syn-ack, ack), where the work has to be supplied in the ack, before the server allocates any memory or performs any expensive asymmetric cryptography operations.
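A sketch of that check in C++ using libsodium’s generic hash; the challenge construction, packet layout, and difficulty figure are all assumptions of mine, not a settled design. The server derives the challenge statelessly from the client address, like a syn cookie, so it allocates nothing until the work is verified.

```cpp
#include <sodium.h>
#include <cstdint>
#include <cstring>

// Count leading zero bits of a hash: the measure of the client's work.
static int leading_zero_bits(const unsigned char* h, size_t len) {
    int bits = 0;
    for (size_t i = 0; i < len; ++i) {
        if (h[i] == 0) { bits += 8; continue; }
        for (int b = 7; b >= 0 && !((h[i] >> b) & 1); --b) ++bits;
        break;
    }
    return bits;
}

// Server, on receiving a syn: challenge is a keyed hash of the client
// address, so no per-client state exists until the work arrives.
void make_challenge(const unsigned char server_secret[32],
                    const unsigned char* client_addr, size_t addr_len,
                    unsigned char challenge[32]) {
    crypto_generichash(challenge, 32, client_addr, addr_len, server_secret, 32);
}

// Client: grind a nonce until hash(challenge || nonce) has enough leading
// zero bits. Difficulty 20 is an arbitrary illustration.
uint64_t solve(const unsigned char challenge[32], int difficulty = 20) {
    unsigned char buf[40], h[32];
    std::memcpy(buf, challenge, 32);
    for (uint64_t nonce = 0;; ++nonce) {
        std::memcpy(buf + 32, &nonce, 8);
        crypto_generichash(h, 32, buf, sizeof buf, nullptr, 0);
        if (leading_zero_bits(h, 32) >= difficulty) return nonce;
    }
}
```

On receiving the ack, the server recomputes the challenge from the client address and verifies the nonce the same way, so a flood of syns costs the attacker grinding while costing the server one keyed hash and no memory.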
Most of the time a server is responding with a message that is intended for human attention. This is OK, because the client requesting the message is under human control, so the human initiates.

If the server could send web pages uninvited, that would be extremely bad.

So, one should not be able to get a human readable message unless it is a response to something, such as another human readable message.

So how do we get things started?

Everything is message/response, which makes the stream nature of TCP a bad idea. And, being engineers, we are apt to break things down to messages, which presupposes we can send any message to anyone in isolation, but a message has to happen in a connection, and a connection may require a relationship.

Once we have a relationship, we can figure out a connection within this relationship, and once we have a connection in the context of this relationship (which may be many to many), then we can conceptualize messages as isolated units within the context of that connection, within the context of that relationship.

We want groups of people to be able to securely communicate, which the Jami UI does: the name of a room is a weakly durable shared secret – but the people have to use this secret to find each other, which means that their meeting point has to be a publicly known network address, and their meeting point then knows all of their network addresses. But UI for this is not urgent. Rather, we want buyers to be able to find sellers, and buyers and sellers to acquire reputation.

For reviews, creating reputation, we are going to need usenet like distribution. For conversations regarding the transaction, email like distribution. The client logs in with the people he is transacting with to send and receive personal messages regarding the good or service, or he logs in with his message server from time to time to send or receive messages. This limits the number of people that can see the connection between his network address and his public key.

# Implementation

signed
: anyone can check that some data is signed by a key, and such data can be passed around in a pool, usenet style.

authenticated
: You got the data directly from an entity that has the key. You know it came from that key, but cannot prove it to anyone else.

access
: A key with authorization from another key does something.

authorization
: a key is given authorization by a key with authority.

authority
: A key with authority can give other keys authorization. Every key has unlimited authority to do whatever it wants on its own computers, and with its own reputation. It may grant other keys authorization to access certain services on its computers and to perform certain acts in the name of its reputation.

We do not want the key on the server to be the master key that owns the server name, because keys on servers are too easily stolen. So we want it to be a key granted authority by the key that owns the server name.

There are no end of cases that we will eventually need to handle where one key grants some authority to another key, so we need a general mechanism and general format for this, with the particular cases we are now implementing being particular cases of this general mechanism and general format.

We will have authority to respond to automatic and anonymous queries, analogous to hitting a web page, authority to receive crypto currency, authority to promise goods and services in return for crypto currency (which authorities will often belong to different keys), authority to receive messages intended for human consumption, and authority to authenticate messages from a human identity (which will typically belong to the same key).

And a data structure that associates a network address or rendezvous server with a key, which data structure may be widely distributed, or narrowly distributed.

A data structure corresponds to a record or structure of records in a database; we are talking about a way of synchronizing databases, and all databases start as a human readable, human writable text file. So we need a network database that has public information, and a network database that has private information.

So, we need a collection of data akin to

`/etc/hosts`
: public data, the broad consensus, agreed data known to the wider community.

`~/.ssh/known_hosts`
: privately known data about the community that cannot be widely shared, because others might not trust it, and you might not trust others. You may want to share this with those you trust, and get it from those you trust, but your set of people that you trust is unlikely to agree with someone else’s and needs to be curated by a human.

`~/.ssh/config`
: And there is data you want to keep secret.

Public data has to rest on a foundation of idiosyncratic data, which has to rest on a foundation of secret data. If you do it the other way around, as with peer review, then public data gets manipulated for hostile purposes, as with certificate authorities and domain name service.

So we are going to start with idiosyncratic and narrowly shared contact information in private databases. The typical operation is: look up a petname (guaranteed locally unique), find the controlling key for that petname (probabilistically unique), and find a publicly accessible key (probabilistically unique) which has the petname key as its root master key.

## Format for a key granting authority to a key

### types

We have, as always everywhere, additive and multiplicative types, with no general and universal way of ascertaining the type of data. You generally know from context. After all, if you know where to find the data, you probably know what the type is.
Any universal type, or universal way of discovering the type, is apt to turn into a constraint, and people find, as with json, clever ways of working around that constraint, which break everything.

But situations do arise where one has to discover the type of an opaque record.

In general, the type of value represented by a hash may be unknown. A communication channel is an indefinitely long sequence of records of unknown additive type. A file is a large object of unknown type, that is apt to be part of a big pile of objects of miscellaneous type.

To open a connection, we need protocol negotiation.

Unix originally planned to have executable files identified in the directory field, but found it had far too many types of executable file, and started to supplement this with the file header.

zip files, tar files, and tar.gz files are in unix identified to the user by the file name, but to various utilities by the file header.

So, we have a type that identifies type, and we have a type that consists of such an identifier, followed by the object so identified – which is the equivalent of unix’s file header.

We need a name for such a data structure. It is not a datum; it is a way of making data out of an unending string of bytes, a way of identifying the data that bytes represent. It is a blob (Binary Large OBject) beginning with a schema identifier. It is a record with a schema id. It is a typed object. Having trouble naming it.

OK, we will call a schema identifier with the fields it identifies following a schere (SCHema REcord). It is a schema attached to the record whose fields are implied by the schema. It can contain strings, arbitrary precision integers, hashes, elliptic points, elliptic scalars, and other scheres. A collection of scheres can only be parsed from left to right, because if we do not know what to expect, we cannot know where one ends and the next begins.

Well, that is OK for objects at rest, but protocol negotiation is a different animal.

We may well in future want to implement an extensive capability system, so we have to have a data format that gives room for future extensions to do all sorts of unforeseeable things – for example one key might want to issue a transferable time limited right to access a particular file to another key.

A byte stream preceded by a schema identifier that defines what those bytes mean is not a byte string. It is a string of integers, scalars, points, hashes, strings, and whatnot. The schema identifier tells us what fields follow, thereby enabling us to parse them out of an otherwise endless stream of otherwise meaningless data.

When one is dealing with scheres in C++ one is probably going to have a pointer that could point to a variety of different structs (an additive type that points to a variety of multiplicative types), which you would represent in C++ as an `std::variant` type.
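A sketch of what that might look like, with invented schema ids and field sets (nothing here is a settled wire format): reading the schema id first tells the parser which multiplicative type to decode next, so a stream of scheres parses strictly left to right.

```cpp
#include <array>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <variant>

using Hash   = std::array<unsigned char, 32>;
using Pubkey = std::array<unsigned char, 32>;

// Two invented scheres, for illustration only.
struct NamedKey   { std::string petname; Pubkey key; };  // schema id 1
struct SignedHash { Hash digest; Pubkey signer; };       // schema id 2

using Schere = std::variant<NamedKey, SignedHash>;

// The schema id, read first, determines which fields follow, so we always
// know where the current record ends and the next begins.
Schere parse_one(const unsigned char*& p) {
    uint8_t schema_id = *p++;  // a real id would be variable width
    switch (schema_id) {
        case 1: { NamedKey r;   /* decode petname, then key, from p */ return r; }
        case 2: { SignedHash r; /* decode digest, then signer, from p */ return r; }
        default: throw std::runtime_error("unknown schema id");
    }
}
```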
Which implies that the party writing and the party reading have to have agreement on the mapping between schemas and the integers identifying schemas. When two people share data, the protocol they agree upon implies a complete set of integer identifiers of a set of schemas, which means we will frequently wind up issuing new protocols, and wind up with a fair bit of protocol negotiation.

Suppose we have a file just sitting by itself, not part of a connection. Well, it just has to start with the schema id that identifies the protocol it would have if it were part of a connection – so a protocol id is itself a schema id, which like any schema id gives the mapping of the schema ids whose scheres it contains.

### Finding the network address of a master key, and the authority of the subkey that represents that master key at that network address

Suppose someone has a high value reputation, and he wants anyone anywhere to be able to connect to his server and obtain an authenticated connection, a connection that the person connecting knows is controlled by the entity that has this reputation.

He has the master secret corresponding to the public key connected to this reputation, but because it is a high value secret, it is not on any computer anywhere. It is written in the margin of a page in a bible kept on his bookshelf, so it cannot be used to authenticate the connection.

His secret master key is used to sign another public subkey, which resides on a little used computer seldom if ever connected to the network, which signs another public key on a computer generally connected to the internet, which signs the public keys on his servers. When a client computer connects to one of his servers, and asks for a connection authenticated with this reputation, the client gets a connection authenticated by a key signed by a key signed by a key signed by the key whose secret is no longer on any computer, but in the margin of a bible on someone’s bookshelf.

If the client computer has no copy of a certificate signed by this master key, it cannot connect, because it does not know the network address. If it has a copy that testifies to the network address, but the certificate is timed out or nonexistent, it asks for a connection authenticated by the master key, and then, if it does not get a connection authenticated by the master key, gets a certificate authenticating a key communicated over the initial encrypted but unauthenticated connection, and forms a connection authenticated by the certified key. If it gets neither a certificate with authentication by the certified key nor authentication by the master key, the connection fails, and it goes looking for a certificate.

But how does the entity that signs its network address know what its network address is?

If the master contacts it over the internet, no problem. The master knows the network address at which he contacted it, and has authority to tell it – but that might be a merely local network address. How does a random computer find its network address?

The master, on the client that he controls, probably knows the network address of the server that he also controls, that being how he likely controls it, but this is not guaranteed to be the situation.

If an entity wants to sign a network address so that others can contact it, it is publishing its network address into a pool of network addresses, so it already has other entities it can talk to and ask “What is my IP?” So it tries some entities at random, asks for its IP over the authenticated connection, and asks them to open a connection on its port number to that IP.

Its prior is a high probability that they will all give the same answer and few if any give a different answer, and a prior that either all of them will fail to call back or most of them will call back. It updates its priors after each call, until it has a high probability that the majority agree on its IP, and that it is definitely contactable at this port, or uncontactable at this port.
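A sketch of that loop in C++; the `Peer` interface and the stopping rule are placeholders of my own, a crude stand-in for proper Bayesian updating of the priors just described.

```cpp
#include <map>
#include <string>
#include <vector>

// Placeholder for an authenticated session with another entity in the pool.
struct Peer {
    std::string what_is_my_ip();             // the address the peer sees us as
    bool        please_call_back(int port);  // did its callback reach us?
};

// Ask peers at random until one answer clearly dominates.
std::string discover_own_address(std::vector<Peer>& peers, int port) {
    std::map<std::string, int> votes;
    int callbacks = 0, asked = 0;
    for (auto& p : peers) {
        std::string ip = p.what_is_my_ip();
        ++votes[ip];
        if (p.please_call_back(port)) ++callbacks;
        ++asked;
        // majority agreement on the address, and majority callback success,
        // after enough samples: thresholds arbitrary for illustration
        if (asked >= 5 && votes[ip] * 2 > asked && callbacks * 2 > asked)
            return ip;
    }
    return {};  // no consensus: treat as uncontactable at this port
}
```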
## Finally, the format

After long, long, long discussion on the requirements for a format and the meaning of the format:

A signature consists of a schema id for signatures, followed by an arbitrary schere, followed by the public key, and the two elliptic scalars that form the Schnorr signature. The null schere, whose schema id is zero, is permissible, in which case the signature is a proof that the secret corresponding to this public key is known, which matters if it is used in a multisignature.

We also have schema ids for multisignatures – one for a signature with two keys with two distinct roles, and one for a variable number of keys with symmetric and equivalent roles.

When a schere is signed, it is public and goes into public shared data. When it is merely communicated over an authenticated channel, the recipient handles it as if signed by the sender, but cannot prove to anyone else that it originated from the sender, so stores the same data in its private database, rather than its public database.

Public information on network addresses and key authorities will be signed, and that it is signed implies it is public and should be widely distributed. But often we do not want this information distributed, in which case these data structures should be authenticated but not signed, in which case they get stored in the wallet and are not routinely and automatically distributed.

We will by default couple information on network addresses to information on key authorization, so that we do not run into the DNS fake network address problem (lack of certification) nor the CA key repudiation problem (failure to get up to date certificates).

But if authorization is distributed with network addresses, rather than being provided by the authorized key on request, then the authorization has to be signed by both the key authorized, and the key doing the authorizing. We don’t want random scammers to be able to claim to be the real power behind the power; neither do we want captured websites hanging on to authorizations that have been obsoleted because of capture.

A network address will be signed by the key whose network address it is, not by the key granting it authority. A key needs no authorization to tell us where it can be contacted, and any key giving us the network address of some other key would need evidence of authority to do so, which would get us deep in the weeds.

However, because the final link in the chain is jointly signed, it may contain the network address as well. Or we may have two scheres, one signed by the last vertex in the chain giving authorization, and one signed only by the final leaf giving the network address. The former would be best for stable network addresses, the latter best for unstable network addresses. Both should be permissible.

When the network address changes, the key probably changes, and vice versa, so they should usually and normally be distributed together. Or maybe there is no contact information at all, implying that the owner of the master key only wants to be contacted by people who have received contact information through another, less widely shared channel. Maybe, as with product or supplier reviews, he wants everyone to be able to check his signature on data widely redistributed by someone else, but does not want everyone who reads that widely redistributed data to be able to send him messages.
OK, the arbitrary signed schere in the case we are here discussing is an authentication schere. Which is only meaningful as part of a signature, so will only ever be hashed as part of a full authentication, so has no hash rule as an isolated schere, so we can re-use schema ids, authentication data being useless without being contained within a signature.

The authentication schema consists of: the key that is being granted authority; an arbitrary precision integer that represents a set of flags; an authentication start date, as a multiple of 256 seconds, which signifies that the authentication supersedes all earlier authentications, and also that the authentication does not take effect until the indicated date (if you have a bunch of servers with a bunch of keys all representing one master key, you have to re-authenticate all of them with the same start date); and an end date, to encourage people to reauthenticate every now and then. An end date earlier than the start date is invalid, shall be rejected, and has no effect, except that an end date of zero indicates the authentication remains valid till superseded, till the client sees a certificate with a later start date. A certificate with an end date some astronomical time into the future may be rejected, or may be silently discarded and have no effect. Use an end date of zero to represent “never expires”. A certificate with a start date some unreasonable time into the future will not spread through the network till its start date draws nigh.

A key that has been authenticated has authority to grant the same authority, or a subset of the flags that have been set for it, to another key, with an end date less than or equal to its end date, and a start date greater than or equal to its start date. The flag bits are these, with a concrete sketch of the whole certificate after the list:

0. This bit revokes all other keys: it signifies that this chain of authorizations invalidates all previous chains of authorizations from the same root, where one chain is previous to the other if the start times in its chain, considered as a Dewey number, are earlier than the other. A chain that is identical except for the times gets invalidated anyway, but this bit is a key revocation bit – for when you don’t want some *other* key trusted any more. Most of the time there will be one and only one valid key chain for one root, and most of the time this bit will be set.

0. This bit indicates that this key may be used to authenticate a connection as under the control of the entity at the root of the chain. This subkey can do anything the master key can do, except extend its timeout, or create subsubkeys with a timeout beyond its own (the equivalent of a gpg subkey: can sign, authenticate, accept payment, whatever).

0. This bit is the signing bit. It indicates that this key can sign data on behalf of this master key, sign as the entity at the root of the chain. One typically signs data that will be delivered to the recipient through an untrusted intermediary, as for example, downloading a rhocoin wallet, or a peer making an assertion about the most recent root or block of the blockchain.

0. This bit indicates that this key may be used to make an offer in the identity of the entity at the root of the chain.

0. This bit indicates that this key may be used to accept crypto currency as the entity at the root of the chain. An offer is likely to be made by the contactable key at the leaf of the chain, which has no authority to accept payment, and which requests payment to an uncontactable key closer to the root of the chain which does have authority to accept payment. A payment request identifies the rhocoin public keys it wants by an elliptic scalar, and the public key of the rhocoin is the accepting key multiplied by that scalar. The payer therefore has proof he paid that entity and is owed something, even if the money goes astray.

0. This bit indicates that more authorities follow, in the form of an ordered sequence of arbitrary precision integers, terminated by zero, thus enabling people to roll their own authorities, ad hoc.
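A sketch of that certificate as a C++ struct, with the delegation rule just stated as a check; the field widths and bit positions are illustrative placeholders, not the wire format.

```cpp
#include <array>
#include <cstdint>

using Pubkey = std::array<unsigned char, 32>;

// Flag bits, numbered as in the list above; positions are illustrative.
enum : uint64_t {
    revokes_prior    = 1u << 0,  // invalidates earlier chains from this root
    may_authenticate = 1u << 1,  // can authenticate connections as the root
    may_sign         = 1u << 2,  // can sign data as the root
    may_offer        = 1u << 3,  // can make offers as the root
    may_accept_coin  = 1u << 4,  // can accept crypto currency as the root
    more_authorities = 1u << 5,  // zero terminated integer list follows
};

struct authentication_schere {
    Pubkey   subkey;      // the key being granted authority
    uint64_t flags;       // small authorities; larger ones follow as integers
    uint32_t start_date;  // multiple of 256 seconds; supersedes earlier starts
    uint32_t end_date;    // 0 means valid until superseded
};

// A subkey may delegate only a subset of its flags, within its own window.
bool may_delegate(const authentication_schere& parent,
                  const authentication_schere& child) {
    if ((child.flags & ~parent.flags) != 0) return false;
    if (child.start_date < parent.start_date) return false;
    if (parent.end_date != 0 &&
        (child.end_date == 0 || child.end_date > parent.end_date)) return false;
    return true;
}
```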
For authorities whose number is less than 64, the bitstring representation and the list of integers representation are equivalent – we may provide a long bit mask, or a zero terminated list of integers. Some implementations may refuse to accept bitstrings longer than 64 bits, generating a bad data exception.

We will have a variety of contact information schemas. The contact information needs to be signed by the key to be contacted, but that turns out to have surprisingly messy logistics in getting things rolling. When you are setting up a server, you want to both grant authority to its secret key (which is what “Let’s Encrypt” does), and publish its network address, which is what the DNS does. With “Let’s Encrypt” you publish the network address insecurely, then “Let’s Encrypt” insecurely finds a host key claiming the name at that network address. Which system works only because of the good behavior of the centralized authority authorizing domain name service.

The entity that has the master secret somehow controls both machines. That is, after all, what we want to prove to third parties. To generate the necessary certificates, the machine without the master secret has to have a connection to the machine that has the master secret. One initiates a connection to the other, then they generate the necessary certificates. The master at that point learns the slave public key and proves to himself that the network address works. But the machine with the key does not necessarily know its own network address. The party authorizing the key of the machine being authorized *does* know it, and the two machines can trust each other because they are under the control of the same party.

Whenever two parties communicate, they can verify the external network addresses associated with the other’s key, but not the external network address associated with their own key. And if we are talking authorization, we have a trust relationship that can be used to prove the network address to the key at that network address. To avoid the key revocation problem, it is easier and safer if network address information is distributed with authorization information.

The key actually used is authenticated by the master key through a chain of certificates, which are normally gathered together in yet another higher level schere, which to be valid must have a valid link in each link in the chain. This higher level chain contains all the signatures, plus network address information, but the individual links in the chain, and the signed network address, are independently valid, and do not have to be actually contiguous and in order in a chain to be useful.
The higher level schere is useful merely because it is sometimes convenient to pack related data as one big ordered bundle, a pile of facts only useful because they prove one fact, the beginning and end of the chain.

If someone is relying on a chain of authorities, any key in that chain can sign a network address for itself or for any of its descendants in the chain. But this requires yet another schema, which should combine a grant of authority with a network address.

diff --git a/docs/libraries.md b/docs/libraries.md
index ffaac06..b0f0981 100644
--- a/docs/libraries.md
+++ b/docs/libraries.md
@@ -46,6 +46,31 @@ off. If someone is on his buddy list, people whitelisted the global consensus

name is turned off, unless it is the same, in which case it is turned on, and if the consensus changes, the end user sees that change.

# Existing cryptographic social software

Maverick says:

[Manyverse]:https://www.manyver.se/
{target="_blank"}

[Scuttlebutt]:https://staltz.com/an-off-grid-social-network.html
{target="_blank"}

If you are looking for something to try out that begins to have the right shape, see [Manyverse], which uses the [Scuttlebutt] protocol. Jim is fond of Bitmessage and it is quite secure, but it has a big weakness in that it needs to flood every message to all nodes in the world so everyone can try their private key to see if it works for decryption. That won’t scale. (You can’t avoid passing every message on to all callers even if you know it’s yours, as you don’t want to let someone snooping see that you absorbed a message, which would be an information disclosure.)

Instead [Manyverse] and [Scuttlebutt] allow for publishing and reputation for a public key. The world can see what a key (its author, really) publishes. You can publish public posts like a blog, signed by the private key for authenticity and verifiability. You can also publish private messages (DMs), visible to the world but whose contents are encrypted. The weakness of private messages is that the recipient public key is visible on the message – good for routing and for avoiding testing every message, bad for privacy. It would be better to have a third mode, private for “someone”, where you have to test your private key to see if it is for you. That should not be hard to add to [Scuttlebutt] and [Manyverse].

Reputation is similar to what Jim has proposed: you can specify a list of primary keys/people (friends) you want to listen to and watch. You can also in the interface specify how many degrees of separation you want to see outward from your friends – the public messages from their friends and friends’ friends. Presumably specifying six degrees gets you Kevin Bacon and the rest of the [Manyverse]. You can also block friends’ friends so their connections are not visible to you – so if you don’t like a friend’s friend you at least don’t have to listen any more.

Another advantage is that [Manyverse] works in a sometimes-connected universe: turn off your computer or phone for days, turn it back on, catch up on messages. You really don’t even have to be on the public Internet; you could sneakernet or local/private-net messages, which is nice for, say, messaging in a disaster or SHTF scenario where you have a local wifi network while the main network connections are down. Bitmessage has a decay/lifetime for messages that means you need to be connected at least every 2–3 days.

The biggest weakness is hosting. Your service can be hosted by 3rd parties like any service, and you can host your own.
Given the legal landscape, as well as susceptibility to censorship via DDoS and hack attacks, you want to have your own server. There are some public servers, but sensibly they don’t want a rando or glowie from the net jumping on there to drop dank memes. But hosting is nontrivial: you have to carve out your own network bubble that can see the Internet (at least periodically) while being fully patched and DDoS resistant.

Of course missing from this, from Jim’s long list of plans, are DDoS protection, a name service that provides name mapping to key hierarchies for messaging and direct communications, and a coin tie-in. But [Manyverse] at least has the right shape for passing someone a message with a payment inside, while using a distributed network and sometimes-connected store-and-forward to let you avoid censorship-as-network-damage. A sovereign corporation can also message publicly or privately using its own sovereign name and key hierarchy and private ledger-coin.

The net is vast and deep. Maybe we need to start cobbling these pieces together. The era of centralized censorship needs to end. Musk will likely lose either way, and he’s only one man against the might of so many paper tigers that happen to be winning the information war.

# Consensus

I have no end of smart ideas about how a blockchain should work, but no

@@ -283,12 +308,28 @@

is a much worse idea – it is the usual “embrace and extend” evil plot by Microsoft against open source software, considerably less competently executed than in the past.

## The standard gnu installer from source

```bash
./configure && make && make install
```

## The standard cmake installer from source

```bash
cmake .. && cmake --build . && make install
```

To support this on linux, `CMakeLists.txt` needs to contain

```default
cmake_minimum_required(VERSION 3.14)
project(Test)
add_executable(test main.cpp)
install(TARGETS test)
```

On linux, `install(TARGETS test)` is equivalent to `install(TARGETS test DESTINATION bin)`.

## The standard Linux installer `*.deb`

@@ -314,50 +355,276 @@

But other systems like a `*.rpm` package, which is built by `git-buildpackage-rpm`.

But desktop integration is kind of random.

Under Mate and KDE Plasma, bitcoin implements run-on-login by generating a `bitcoin.desktop` file and writing it into `~/.config/autostart`.

It does not, however, place the `bitcoin.desktop` file in any of the other expected places. It should be in `/usr/share/applications`.

The wasabi desktop file `/usr/share/applications/wassabee.desktop` is

```config
[Desktop Entry]
Type=Application
Name=Wasabi Wallet
StartupWMClass=Wasabi Wallet
GenericName=Bitcoin Wallet
Comment=Privacy focused Bitcoin wallet.
Icon=wassabee
Terminal=false
Exec=wassabee
Categories=Office;Finance;
Keywords=bitcoin;wallet;crypto;blockchain;wasabi;privacy;anon;awesome;qwe;asd;
```

To be in the menus for all users, it should be in `/usr/share/applications` with its `Categories=` entry set appropriately. Wasabi appears in the category `Office` on mate.

But what about the menu for just one user? The documentation says `~/.local/share/applications`. Which I do not entirely trust.
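A sketch of wiring the menu entry into the `CMakeLists.txt` shown earlier; `test.desktop` and `test.png` are hypothetical files, and this installs system wide under the install prefix rather than per user:

```default
# alongside the install(TARGETS test) rule shown above:
install(FILES test.desktop DESTINATION share/applications)
install(FILES test.png DESTINATION share/icons/hicolor/256x256/apps)
```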
### autotools

Autotools has a poorly documented and unexplained pipeline to `*.deb` files. Plausibly `cmake` also has one, but I have not found it.

autotools is the linux standard, is said to have a straightforward pipeline into making `*.deb` files, and everyone uses it, including most of your libraries, but I hear it cursed as a complex mess, and no one wants to get into it. They find the far from easy `cmake` easier. And `cmake` runs on all systems, while autotools only runs on linux.

I believe `cmake` has a straightforward pipeline into `*.deb` files, but if it has, the autotools pipeline is far more common and widely used.

## The standard windows installer

Requires an `*.msi` file. If the install is something other than an msi file, it is broken.

[Help Desk Geek reviews tools for creating `*.msi`]: https://helpdeskgeek.com/free-tools-review/4-tools-to-create-windows-installer-packages/
{target="_blank"}

[Help Desk Geek reviews tools for creating `*.msi`]

1. First and foremost, Nullsoft Scriptable Install System (NSIS). Small, simple, and powerful.

1. Last and least, Wix and Wax: it requires the biggest learning curve. You can create some very complex installers with it, but you’ll be coding quite a bit and using a command line often.\
   And word on the internet is that complex installs created with Wix and Wax create endless headaches, and even if you get it working in your unit test environment, it then breaks your customer’s machine irreversibly and no one can figure out why.

### [NSIS] Nullsoft Scriptable Install System

NSIS can create Windows installers, and is open source.

[NSIS]:https://nsis.sourceforge.io/Download
{target="_blank"}

[NSIS Open Source repository]:https://sourceforge.net/projects/nsis/files/NSIS%203/3.08/RELEASE.html/view
{target="_blank"}

[NSIS Open Source repository]

People who know what they are doing seem to use this open source install system, and they write nice installs with it.

Unlike `Wix`, I hear no whining that any attempt to use its power will leave you buggered and hopeless.

When I most recently checked, the most recent release was thirty five days previous, which is moderately impressive, given that their release process is somewhat painful and arduous.
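A minimal NSIS script, as a sketch; the application name and shipped files are made up. Note that what NSIS emits is a self-contained installer executable rather than a Windows Installer `*.msi` database.

```default
Name "MyApp"
OutFile "MyAppSetup.exe"
InstallDir "$PROGRAMFILES64\MyApp"

Section "Install"
  SetOutPath "$INSTDIR"
  File "myapp.exe"                       ; files to ship
  CreateShortCut "$SMPROGRAMS\MyApp.lnk" "$INSTDIR\myapp.exe"
  WriteUninstaller "$INSTDIR\uninstall.exe"
SectionEnd

Section "Uninstall"
  Delete "$SMPROGRAMS\MyApp.lnk"
  Delete "$INSTDIR\myapp.exe"
  Delete "$INSTDIR\uninstall.exe"
  RMDir "$INSTDIR"
SectionEnd
```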
### Wix

`Wix` is suffering from bitrot. The wix toolset relies on a framework that is no longer installed by default on windows, and has not been for a very, very long time.

But no end of people say that, sucky though it is, it is the standard way to create install files.

[Hello World for Wix]:https://stackoverflow.com/questions/47970743/wix-installer-msi-not-installing-the-winform-app-created-with-visual-studio-2017/47972615#47972615
{target="_blank"}

[Hello World for Wix] is startlingly nontrivial. It does not default create a minimal useful install for you. So even if you get it working, it still looks like it is broken.

[Common Design Flaws]:https://stackoverflow.com/questions/45840086/how-do-i-avoid-common-design-flaws-in-my-wix-msi-deployment-solution
{target="_blank"}

[Common Design Flaws] do not sound entirely like design flaws. It sounds like it is easy to create `*.msi` files whose behaviour is complex, unpredictable, unexpected, and apt to vary according to circumstances on the target machine in incomprehensible and unexpected ways. “Works great when we test it. Passes unit test.”

[Some practical Wix advice]:https://stackoverflow.com/questions/6060281/windows-installer-and-the-creation-of-wix/12101548#12101548
{target="_blank"}

[Some practical Wix advice] advises that trying to do anything complicated in Wix is hell on wheels, and will lead to unending broken installs out in the field that fuck over the target systems.

While Wix in theory permits arbitrarily complex and powerful installs, in practice no one succeeds.

“Certain things are still coded on a case by case basis. These ad hoc solutions are implemented as ‘custom actions’ in Windows Installer.”

And custom actions that involve writing anything other than file properties die horribly.

Attempts to install Wix on Visual Studio repeatedly failed, and sometimes trashed my Visual Studio installation.

After irreversibly destroying Visual Studio far too many times, I attempted to install on a fresh clean virtual machine.

A clean install of Visual Studio on a vm worked, loaded my project, and compiled and built it almost as fast as my real machine. The program it built ran fine and passed unit test. And then Visual Studio crashed on close. Investigating the hung Visual Studio, it had freed up almost all memory, and then just stopped running. Maybe the problem is not Wix bitrot, but Visual Studio bitrot, since I did not even get as far as trying to install Wix.

If the Wix installer is horribly broken, is it not likely that any install created by Wix will be horribly broken?

The Wix Toolset requires the .NET Framework 3.5 in order to install it and use it, which is the cobbler’s children going barefoot. You want a banana, and have to install a banana tree, a monkey, and a jungle.

.NET Framework 3.5.1 can be installed from Control Panel / Programs / Programs and Features.

You have to install the extension after the framework, in that order, or else everything breaks. Or maybe everything just breaks anyway far too often, and people develop superstitions about how to avoid such cases.

## Choco

Choco, Chocolatey, is the Windows package manager system. It does not use `*.msi` as its packaging system.
A chocolatey package consists of a `*.nupkg`, `chocolateyInstall.ps1`, `chocolateyUninstall.ps1`, and `chocolateyBeforeModify.ps1` (the latter script is run before upgrade or uninstall, and is there to reverse stuff done by its accompanying `chocolateyInstall.ps1`).

Interaction with stuff installed by `*.msi` is apt to be bad.

The community distribution redirects requests to particular servers, which have to be maintained by particular people – which requires an 8GB ram, 50GB disk Windows server. I could have `nginx` in the cloud reverse proxying that to a physically local server over wireguard, which solves the certificate problem, or I could use a commercial service, which is cheap, but leaks identity all over the place and is likely to be subject to hostile interdiction and state sponsored identity theft.

Getting on the `choco` list is largely automatic. Your package has to install on their standard image, which is a deliberately obsolete 2012 windows server – and your install script may have to install windows update packages. Your package is unlikely to successfully install until you have first tested it on an imitation of their test environment, which is a great deal of work and skill to set up. Human curation exists, but is normally routine and superficial. Installs, has license, done.

[whole lot more checks]:https://docs.chocolatey.org/en-us/information/security#chocolatey.org-packages
{target="_blank"}

[whole lot more rules]:https://docs.chocolatey.org/en-us/community-repository/moderation/package-validator/rules/
{target="_blank"}

Well, actually there are a [whole lot more checks], which enforce a [whole lot more rules], sixty eight rules and growing, but they are robotically checked and the outcome reported to a human. If the robot OKs it, it normally goes through automatically into the community distribution.

A Choco package is immutable. It can be superseded, but cannot change. I could have the program check for a Zooko signature of its package file against a list, and look for indications of broad approval, thus solving the identity problem and eating my own dogfood.

Choco packages would be very handy for automatically installing my build environment.

### Cmake

`cmake` has a pipeline for building choco files.

[wxWidgets has instructions for building with Cmake]:https://docs.wxwidgets.org/trunk/overview_cmake.html
{target="_blank"}

[wxWidgets has instructions for building with Cmake]. My other libraries do not, and require their own idiosyncratic build scripts, and I doubt that I can do what the authors were disinclined to do. Presumably I could fix this with `add_custom_target` and `add_custom_command`, where the custom command is a bash script that just invokes the author’s scripts, but I just do not understand the documentation for these commands, which documentation presupposes knowledge of the incomprehensible domain specific language.

`Cmake` runs on both Windows and Linux, and is a replacement for autotools, which runs only on Linux.

Going with `cmake` means you have a defined standard cross platform development environment, `vscode`, which is wholly open source, and a defined standard cross platform packaging system – or rather four somewhat equivalent standard packaging systems, two for each platform.

Instead of

```bash
./configure
make
make install
```

we have

```bat
cmake ..
cmake --build .
cmake --install .
```

`cmake --install` installs from source, and has a pipeline (`cpack`) to generate Windows installers through [NSIS]. Notice it does *not* have an obvious pipeline through Wix and Wax. It also has a pipeline to Choco, and, on linux, to `*.deb` and `*.rpm`.
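A sketch of that pipeline, extending the `CMakeLists.txt` fragment from earlier; the package metadata is invented for illustration.

```default
# after the install(TARGETS ...) rules, describe the package and pull in CPack
set(CPACK_PACKAGE_NAME "test")
set(CPACK_PACKAGE_VERSION "0.1.0")
set(CPACK_PACKAGE_CONTACT "maintainer@example.com")  # required for DEB
include(CPack)
```

Then `cpack -G NSIS` on windows, or `cpack -G DEB` on linux, builds the package from the same install rules.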
No uninstall, which has to be hand written for your distribution.

`cmake` has the huge advantage that with certain compilers, far from all of them, it integrates with the vscode ide, including a graphical debugger that runs on both windows and linux. Which otherwise you really do not have on linux.

It thus provides maximum cross platform portability. On the other hand, all of my libraries rely on `./configure && make && make install` on linux, and on visual studio on Windows. In my previous encounter with `cmake`, I found mighty good reason for doing it that way. The domain specific language of `CMakeLists.txt` is arcane, unreadable, unwritable, and subject to frequent, arbitrary, inconsistent, and illogical ad hoc change. It inexplicably does remarkably complicated things without obvious reason or purpose, and that strange complexity usually does things you do not want.

Glancing through their development blog, I keep seeing major breaking changes being corrected by further major breaking changes. Internals are undocumented, subject to surprising change, and likely to change further, and you have to keep editing them, without any clearly knowable boundary between what is internal stuff that you should not need to look at and edit, and what is the external language that you are supposed to use to define what `cmake` is supposed to accomplish. It is not obvious how to tell `cmake` to do a certain thing, and looking at a `CMakeLists.txt` file, it is not at all obvious what `cmake` is going to do. And when the next version comes out, it is probably going to do something different.

But allegedly the domain specific language of `./configure` has grown a multitude of idiosyncrasies, making it even worse.

`ccmake` is a curses based tool that will do some editing of the `cmake` cache, with respect for the mysterious, undocumented, arcane syntax of the nowhere explained or documented domain specific language.

# Library Package managers

Lately, however, library package managers have appeared: Conan and [vcPkg](https://blog.kitware.com/vcpkg-a-tool-to-build-open-source-libraries-on-windows/). Conan lacks wxWidgets, and has far fewer packages than [vcpkg](https://libraries.io/github/Microsoft/vcpkg).

I have attempted to use package managers, and not found them very useful. It is easier to deal with each package as its own unique special case. The

diff --git a/docs/libraries/review_of_crypto_libraries.md b/docs/libraries/review_of_crypto_libraries.md
index 93860b3..9141f3f 100644
--- a/docs/libraries/review_of_crypto_libraries.md
+++ b/docs/libraries/review_of_crypto_libraries.md
@@ -4,15 +4,249 @@ title: Review of Cryptographic libraries

# Noise Protocol Framework
The Noise Protocol Framework matters because it is used by Wireguard to do something related to what we intend to accomplish.

Noise is an already existent messaging protocol, implemented in Wireguard as a UDP only protocol.

My fundamental objective is to secure the social net, particularly the social net where the money is, the value of most corporations being the network of customer relationships, employee relationships, supplier relationships, and employee roles.

This requires that instead of packets being routed to network addresses identified by certificate authority names and the domain name system, they are routed to public keys that reflect a private key derived from the master secret of a wallet.

## Wireguard Noise

Wireguard maps network addresses to public keys, and then to the possessor of the secret key corresponding to that public key. We need a system that maps names to public keys, and then packets to the possessor of the secret key. So that you can connect to a service on some port of some computer, which you locate by its public key.

Existing software looks up a name, finds a thirty two bit or one twenty eight bit value, and then connects. We need that name to map, through software that we control, to a durable and attested public key. For random strangers not listed in the conf file, that key is locally, arbitrarily, and temporarily mapped into Wireguard subnets, which mapping is actually a local and temporary handle to the public key. The handle is then mapped back to the public key, which is then mapped to the network address of the actual owner of that secret key, by software that we control. So software that we do not control thinks it is using network addresses, but is actually using local handles to public keys, which are then mapped to network addresses supported by our virtual network card, which sends them off, encapsulated in Wireguard style packets identified by the public key of their destination, to a host in the cloud identified by its actual network address, which then routes them by public key – either to a particular local port on that host itself, or to another host, which then routes them eventually, by public key, to a particular port.

For random strangers on the internet, we have to in effect NAT them into our Wireguard subnets, and we don’t want them able to connect to arbitrary ports, so we in effect give them NAT type port forwarding.

It will frequently be convenient to have only one port forwarded address per public key, in which case our Wireguard fork needs to accept several public keys, one for each service.

The legacy software process running on the client initiates a connection to a name and a port, from a random client port. The legacy server process receives it on the whitelisted port, ignoring the port requested, if only one incoming port is whitelisted for this key, or on the requested whitelisted port if more than one port is whitelisted. It replies to the original client port, which was encapsulated, with the port being replied to encapsulated in the message secured and identified by public key, and the receiving networking software on the client has temporarily whitelisted that client port for messages coming from that server key.
Such “temporary” whitelisting should last for a very long time, since we might have quiet but very long lived connections. We do not want random people on the internet messaging us, but we do want people that we have messaged to be able to message, at random times, the service that messaged them.

One confusing problem is that stable ports are used to identify a particular service, and random ports a particular connection, and we have to disentangle this relationship and distinguish connection identifiers from service identifiers. We would like public keys to identify services, rather than hosts, but sometimes they will not.

Whitelist and history help us disentangle them when connecting to legacy software, and, within the protocol, they need to be distinguished even though they will be lumped back together when talking to legacy software. Internally, we need to distinguish between connections and services. A service is not a connection.

Note that the new Google https allows many short lived streams, hence many connections, identified by a single server service port and a single random client port, which ordinarily would identify a single connection. A connection corresponds to a single concurrent process within client software, and a single concurrent process within server software, and many messages may pass back and forth between these two processes and are handled sequentially by those processes, who have retrospective agreement about their total shared state.

So we have four very different kinds of things, which old type ports mangle together:

1. a service, which is always available as long as the host is up and the internet is working, which might have no activity for a very long time, or might have thousands of simultaneous connections to computers from all over the internet
1. a connection, which might live while inactive for a very long time, or might have many concurrent streams active simultaneously
1. a stream, which has a single concurrent process attached to it at both ends, and typically lives only to send a message and receive a reply. A stream may pass many messages back and forth, which both ends process sequentially. If a stream is inactive for longer than a quite short period, it is likely to be ungracefully terminated. Normally, it does something, and then ends gracefully, and the next stream and the next concurrent process start when there is something to do. While a stream lives, both ends maintain state, albeit in a request reply the state lives only briefly.
1. A message.

Representing all this as a single kind of port, and packets going between ports of a single kind, inherently leads to the mess that we now have. They should have been thought of as different derived classes from a common base class.
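A conceptual sketch of that class hierarchy in C++ – my reading of the four kinds, not any existing implementation’s types:

```cpp
#include <array>
#include <cstdint>
#include <vector>

struct Endpoint { virtual ~Endpoint() = default; };

// Long lived; may be idle for months, or carry thousands of connections.
struct Service : Endpoint { std::array<unsigned char, 32> public_key; };

// May be inactive for a very long time; carries many concurrent streams.
struct Connection : Endpoint { const Service* service; /* peer key, shared state */ };

// One concurrent process at each end; normally short lived, ends gracefully.
struct Stream : Endpoint { const Connection* connection; std::uint64_t id; };

// The unit actually sent; handled sequentially within its stream.
struct Message { const Stream* stream; std::vector<unsigned char> payload; };
```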
[Endpoint-Independent Mapping]:https://datatracker.ietf.org/doc/html/rfc4787
{target="_blank"}

Existing software is designed to work with the explicit white listing provided by port forwarding through NATs with [Endpoint-Independent Mapping], and the implicit (but inconveniently transient) white listing provided by NAT translation, so we make it look like that to legacy software. To legacy client software, it is as if it were sending its packets through a NAT, and to legacy server software, as if through a NAT with port forwarding. Albeit we make the mapping extremely long lived, since we can rely on stable identities and have no shortage of them. And we also want the port mappings (actually internal port whitelistings; they would be mappings if this were actual NAT) associated with each such mapping to be extremely stable and long lived.

[Endpoint-Independent Mapping] means that the NAT reuses the address and port mapping for subsequent packets sent from the same internal port (X:x) to any external IP address and port (Y:y): X1':x1' equals X2':x2' for all values of Y2:y2. Our architecture inherently tends to force this unless we do something excessively clever, since we should not muck with randomly chosen ports. For us, [Endpoint-Independent Mapping] means that the mapping between external public keys of random strangers not listed in our configuration files, and the internal ranges of the Wireguard fork interface, is stable, very long lived, and *independent of port numbers*.

## Noise architecture

[Noise](https://noiseprotocol.org/) is an architecture and a design document, not source code. Example source code exists for it, though the [C example](https://github.com/rweather/noise-c) uses a build architecture that may not fit with what I want, and uses protobuf, enemy software. It also is designed to use several different implementations of the core crypto protocols, one of them being libsodium, while I want a pure libsodium only version. It might be easier to implement my own version, using the existing versions as a guide, in particular and especially Wireguard’s version, since it is in wide use. I will probably have to walk through the existing version.

Noise is built around the ingenious central concept of using as the nonce the hash of past shared and acknowledged data, which is AEAD secured but sent in the clear. Which saves significant space on very short messages, since you have to secure shared state anyway. It regularly and routinely renegotiates keys, thus has no $2^{64}$ limit on messages. A 128 bit hash sample suffices for the nonce, since the nonce of the next message will reflect the 256 bit hash of the previous message, hence contriving a hash that has the same nonce does the adversary no good. It is merely a denial of service.

I initially thought that this meant it had to be built on top of a reliable messaging protocol, and it tends to be described as if it did, but Wireguard uses a bunch of designs and libraries in its protocol, with Noise pulling most of them together, and I need to copy, rather than re-invent, their work.

On the face of it, Wireguard does not help with what I want to do. But I am discovering a whole lot of low level stuff related to maintaining a connection, and Wireguard incorporates that low level stuff.

Noise goes underneath, and should be integrated with, reliable messaging. It has a built in message limit of $2^{16}$ bytes. It is not just an algorithm, but very specific code.

Noise is messaging code. Here now, and present in Wireguard, as a UDP only cryptographic protocol. I need to implement my messaging system as a fork of Wireguard.

Wireguard uses base64, and my bright idea of slash6 gets in the way. I am going to use base52 for any purposes for which my bright idea would have been useful, so it should be rewritten to base64 regardless.

Using the hash of shared state goes together with immutable append only Merkle-patricia trees like ham and eggs, though you don’t need to keep the potentially enormous data structure around.
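A sketch of the transcript hash idea in C++ with libsodium, under my reading of Noise: both ends fold everything sent and acknowledged into a running hash and sample it for the AEAD nonce, so the nonce itself never travels, and an idle connection need keep only the running hash.

```cpp
#include <sodium.h>
#include <cstddef>
#include <cstring>

struct Transcript {
    unsigned char h[crypto_generichash_BYTES] = {};  // 32 byte running hash

    // Fold an acknowledged message into the shared state.
    void absorb(const unsigned char* msg, size_t len) {
        crypto_generichash_state st;
        crypto_generichash_init(&st, nullptr, 0, sizeof h);
        crypto_generichash_update(&st, h, sizeof h);  // chain previous state
        crypto_generichash_update(&st, msg, len);     // fold in new message
        crypto_generichash_final(&st, h, sizeof h);
    }

    // 128 bit nonce sample: a contrived collision buys the adversary nothing,
    // since the next message's nonce reflects the full 256 bit hash.
    void nonce(unsigned char out[16]) const { std::memcpy(out, h, 16); }
};
```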
When a connection has no activity for a little while, you can discard everything except a very small amount of data, primarily the keys, the hash, the block number, the MTU, and the expected timings.

The Noise system for hashing all past data is complicated and ad hoc. For greater generality and more systematic structure, for a simpler fundamental structure with fewer arbitrary decisions about particular types of data, it needs to be rewritten to hash like an immutable append only Patricia Merkle tree. Which instantly and totally breaks interoperability with existing Wireguard, so to talk to the original Wireguard, the fork has to know what it is talking to. Presumably Wireguard has a protocol negotiation mechanism that you can hook. If it does not, well, it breaks, and the nature of the thing that a public key addresses has to be flagged anyway, since I am using Ristretto public keys, and Wireguard's are not. Also, have to move Wireguard from NaCl encryption to Libsodium encryption, because NaCl is an attack vector.

Wireguard messages are distinguishable on the wire, which is odd, because Noise messages are inherently white noise, and destination keys are known in advance. Looks like enemy action by the bad guys at NaCl.

I think a fork that, if a key is a legacy key type, talks legacy Wireguard, and if a new type (probably coming from our domain name system, though it can also be placed in `.conf` files), talks with packets indistinguishable from white noise to an adversary that does not know the key.

Old type session initiation messages are distinguishable from random noise. For new type session initiation messages to a server with an old type id and a new type id on the same port, make sure that the new type session initiation packet does not match, which may require both ends to try a variety of guesses if their expectations are violated. Which opens a DOS attack, but that is OK. You just shut down that connection. DOS resistance is going to require messages readily distinguishable from random noise, but we don't send those messages unless facing workloads suggestive of DOS, such as heavy session initiation load.

Ristretto keys are uncommon, and are recognizable as ristretto keys, but not if they are sent in unreduced form.

Build, on top of a fork of Wireguard, a messaging system that delivers messages not to network addresses, but to Zooko names (which might well map to a particular port on a particular host, but whose network address and port may change without people noticing or caring).

Noise is a messaging protocol. Wireguard is a messaging protocol built on top of it that relies on public keys for routing messages. Most of the work is done. It is not what I want built, but it has an enormous amount of commonality. I plan a very different architecture, but that is a re-arrangement of existing structures already done. I am going to want Kademlia and a blockchain for the routing, rather than a pile of local text files mapping IPs to nameless public keys. Wireguard is built on `.conf` text files the way the Domain name system was built on `hosts` files. It almost does the job; it needs a Kademlia based domain name system on top of it and integrated with it.

# [Libsodium](./building_and_using_libraries.html#instructions-for-libsodium)

@@ -50,7 +284,7 @@ Amber library packages all these in what is allegedly easy to incorporate form,

The fastest library I can find for pairing based crypto is [herumi](https://github.com/herumi/mcl).
-How does this compare to [Curve25519](https://github.com/bernedogit/amber)? 
+How does this compare to [Curve25519](https://github.com/bernedogit/amber)? There is a good discussion of the performance tradeoff for crypto and IOT in [this Internet Draft](https://datatracker.ietf.org/doc/draft-ietf-lwig-crypto-sensors/), currently in IETF last call: 

@@ -71,4 +305,4 @@ that document, nor any evaluations of the time required for pairing based cryptography in that document. Relic-Toolkit is not Herumi and is supposedly markedly slower than Herumi. 

-Looks like I will have to compile the libraries myself and run tests on them. 
\ No newline at end of file
+Looks like I will have to compile the libraries myself and run tests on them.
diff --git a/docs/libraries/scripting.md b/docs/libraries/scripting.md
index 4a8123b..206d78d 100644
--- a/docs/libraries/scripting.md
+++ b/docs/libraries/scripting.md
@@ -11,17 +11,69 @@ scripts that can interact with the recipient within a sandbox.

Not wanting to repeat the mistakes of the internet, we will want the same bot language generating responses, and interacting with the recipient.

-There is a [list](https://github.com/dbohdan/embedded-scripting-languages) of embeddable scripting languages.
+There is a [list](https://github.com/dbohdan/embedded-scripting-languages){target="_blank"} of embeddable scripting languages.

Lua and Python are readily embeddable, but [the language shootout](https://benchmarksgame-team.pages.debian.net/benchmarksgame/) tells us they are terribly slow.

[Embedding LuaJIT in 30 minutes]:https://en.blog.nic.cz/2015/08/12/embedding-luajit-in-30-minutes-or-so/
{target="_blank"}

Lua, however, has `LuaJIT`, which is about ten times faster than `Lua`, which makes it only about four or five times slower than JavaScript under `node.js`. It is highly portable, though I get the feeling that porting it to Windows is going to be a pain, but then it is never going to be expected to call the Windows file and gui operations.

Other people say it is faster than JavaScript, but avoid comparison to Nodejs. But it is allegedly faster than any JavaScript I am likely to be able to embed.

[Embedding LuaJIT in 30 minutes] gives as its example code a DNS server written in embedded Lua. Note that it is very easy to embed LuaJIT in such a way that the operations run amazingly slowly.

The web application firewall used by Cloudflare was rewritten from 37000 lines of C to 2000 lines of Lua. And it handles requests in 2ms. But it needed profiling and all that to find the slowdown gotchas - everyone who uses LuaJIT winds up spending some time and effort to avoid horrible pointless slow operations, and they need to know what they are doing.

He recommends embedded LuaJIT as easier to embed and interface to C than straight Lua. Well, it would be easier for someone who has a good understanding of what is happening under the hood.

He also addresses sandboxing. Seems that LuaJIT sandboxes just fine (unlike JavaScript).

Checking his links, it seems that embedded LuaJIT is widely used in many important applications that need to be fast and have a great deal of money behind them.

LuaJIT is not available on the computer benchmarks shootout. The JIT people say the shootout maintainer is being hostile and difficult; the shootout maintainer is kind of apt to change the subject and imply the JIT people are not getting off their asses, but I can see they have done a decent amount of work to get their stuff included.
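The embedding itself is simple. A minimal sketch, assuming the stock Lua 5.1 C API, which LuaJIT implements (a real embedding would open only whitelisted libraries, for the sandboxing mentioned above):

```cpp
// Minimal LuaJIT embedding sketch: create a state, run a script, read a result.
// LuaJIT implements the Lua 5.1 C API, so this is the ordinary Lua embedding.
#include <lua.hpp>   // ships with LuaJIT; wraps lua.h, lauxlib.h, lualib.h
#include <cstdio>

int main() {
    lua_State* L = luaL_newstate();
    // A real sandbox would open only selected libraries (base, string, math)
    // and would omit os and io entirely.
    luaL_openlibs(L);

    if (luaL_dostring(L, "function greet(n) return 'hello ' .. n end")) {
        std::fprintf(stderr, "load error: %s\n", lua_tostring(L, -1));
        return 1;
    }

    lua_getglobal(L, "greet");        // push the function
    lua_pushstring(L, "world");       // push its argument
    if (lua_pcall(L, 1, 1, 0)) {      // call with 1 arg, expect 1 result
        std::fprintf(stderr, "call error: %s\n", lua_tostring(L, -1));
        return 1;
    }
    std::printf("%s\n", lua_tostring(L, -1));  // prints "hello world"

    lua_close(L);
    return 0;
}
```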
LuaJIT is a significantly different dialect of Lua, and tends to get rather different idioms as a result of profiling driven optimization.

It looks like common Lua idioms are apt to bite in LuaJIT for reasons that are not easy to intuit. Thus there is in effect some hand precompiling for LuaJIT.

Anecdotal data on speed for LuaJIT:

* lua-JIT is faster than Java-with-JIT (the Sun Java), lua-JIT is faster than V8 (Javascript-with-JIT), etc, ...

* As Justin Cormack notes in the comments to the answer below, it is crucial to remark that JITed calls to native C functions (rather than lua_CFunctions) are extremely fast (zero overhead) when using the LuaJIT ffi. That's a double win: you don't have to write bindings anymore, and you get on-the-metal performance. Using LJ to call into a C library can yield spooky-fast performance even when doing heavy work on the Lua side.

* I am personally surprised at luajit's performance. We use it in the network space, and its performance is outstanding. I had used it in the past in a different manner, and its performance was 'ok'. Implementation architecture really is important with this framework. You can get C perf, with minimal effort. There are limitations, but we have worked around all of them now. Highly recommended.

Lisp is sort of embeddable, startlingly fast, and enormously capable, but it is huge, and not all that portable.
diff --git a/docs/mkdocs.sh b/docs/mkdocs.sh
index 06137d1..e2ed3ef 100644
--- a/docs/mkdocs.sh
+++ b/docs/mkdocs.sh
@@ -64,7 +64,33 @@ do
 #	echo "	$base.html up to date"
 	fi
 done
-cd ../rootDocs
+cd ..
+cd names
+templates="../pandoc_templates"
+options=$osoptions"--toc -N --toc-depth=5 --wrap=preserve --metadata=lang:en --include-in-header=$templates/icondotdot.pandoc --include-before-body=$templates/beforedotdot.pandoc --css=$templates/style.css --include-after-body=$templates/after.pandoc -o"
+for f in *.md
+do
+	len=${#f}
+	base=${f:0:($len-3)}
+	if [ $f -nt $base.html ];
+	then
+		katex=""
+		for i in 1 2 3 4
+		do
+			read line
+			if [[ $line =~ katex ]];
+			then
+				katex=" --katex=./"
+			fi
+		done <$f
+		pandoc $katex $options $base.html $base.md
+		echo "$base.html from $f"
+	#else
+	#	echo "	$base.html up to date"
+	fi
+done
+cd ..
+cd rootDocs
 templates="../pandoc_templates"
 for f in *.md
 do
diff --git a/docs/name_system.md b/docs/name_system.md
deleted file mode 100644
index 27a6960..0000000
--- a/docs/name_system.md
+++ /dev/null
@@ -1,350 +0,0 @@
---
title: Name System
...

We intend to establish a system of globally unique wallet names, to resolve the security hole that is the domain name system, though not all wallets will have globally unique names, and many wallets will have many names.

Associated with each globally unique name is a set of name servers. When one's wallet starts up, if it has a globally unique name, it logs in to its name server, which will henceforth direct people to that wallet. If the wallet has a network accessible TCP and/or UDP address, it directs people to that address (one port only; protocol negotiation will occur once the connection is established, rather than protocols being defined by the port number). If not, it will direct them to a UDT4 rendezvous server, probably itself.

We probably need to support [uTP for the background download of bulk data].
This also supports rendezvous routing, though perhaps in a different and incompatible way, excessively married to the bittorrent protocol. We might find it easier to construct our own throttling mechanism in QUIC, accumulating the round trip time and the square of the round trip time excluding outliers, to form a short term and long term average and variance of the round trip time, and throttling lower priority bulk downloads and big downloads when the short term average rises above the long term average by more than the long term standard deviation. The long term data is zeroed when the IP address of the default gateway (router) is acquired, and is timed out over a few days. It is also ceilinged at a couple of seconds.

[uTP for the background download of bulk data]: https://github.com/bittorrent/libutp

In this day and age, a program that lives only on one machine really is not much of a program, and the typical user interaction is a user driving a gui on one machine which is a gui to a program that lives on a machine a thousand miles away.

We have a problem with the name system, the system for obtaining network addresses, in that the name system is subject to centralized state control, and the TCP-SSL system is screwed by the state, which is currently seizing crimethink domain names, and will eventually seize untraceable crypto currency domain names.

In today's environment, it is impossible to speak the truth under one's true name, and dangerous to speak the truth even under any durable and widely used identity. Therefore, people who post under names tend to be unreliable. Hence the term "namefag". If someone posts under his true name, he is a "namefag" – probably unreliable and lying. Even someone who posts under a durable pseudonym is apt to show excessive restraint on many topics.

The aids virus does not itself kill you. The aids virus "wants" to stick around to give itself lots of opportunities to infect other people, so wants to disable the immune system for obvious reasons. Then, without an immune system, something else is likely to kill you.

When I say "wants", of course the aids virus is not conscious, and does not literally want anything at all. Rather, natural selection means that a virus that disables the immune system will have opportunities to spread, while a virus that fails to disable the immune system only has a short window of opportunity to spread before the immune system kills it, unless it is so virulent that it likely kills its host before it has the opportunity to spread.

Similarly, a successful memetic disease that spreads through state power, through the state system for propagation of official truth, "wants" to disable truth speaking and truth telling – hence the replication crisis, peer review, and the death of science. We are now in the peculiar situation that truth is best obtained from anonymous sources, which is seriously suboptimal. Namefags always lie. The drug companies are abandoning drug development, because science just does not work any more. No one believes their research, and they do not believe anyone else's research.

It used to be that there were a small number of sensitive topics, and if you stayed away from those, you could speak the truth on everything else, but now it is near enough to all of them that it might as well be all of them, hence the replication crisis.
Similarly, the aids virus tends to wind up totally suppressing the immune system, even though more selective shutdown would serve its interests more effectively, and indeed the aids virus starts by shutting down the immune system in a more selective fashion, but in the end cannot help itself from shutting down the immune system totally.

The memetic disease, the demon, does not "want" to shut down truth telling wholesale. It "wants" to shut down truth telling selectively, but inevitably, there is collateral damage, so it winds up shutting down truth telling wholesale.

To exorcise the demon, we need a prophet, and since the demon occupies the role of the official state church, we need a true king. Since there is a persistent shortage of true kings, I am here speaking as engineer rather than as prophet, so here I am discussing the anarcho agorist solution to anarcho tyranny, the technological solution, not the true king solution.

Because of the namefag problem and the state snatching domain names, we need, in order to operate an untraceable blockchain based currency, not only a decentralized system capable of generating consensus on who owns what cash, but also a system capable of generating consensus on who owns which human readable globally unique names, and on the mapping between human readable names, Zooko triangle names (which correspond to encryption public keys), and network addresses: a name system resistant to the state's attempts to link names to jobs, careers, and warm bodies that can be beaten up or imprisoned, to link names to property, to property that can be confiscated or destroyed.

A transaction output can hold an amount of currency, or a minimum amount of currency and a name. Part of the current state, which every block contains, is unused transaction outputs sorted by name.

If we make unused transaction outputs sorted by name available, we might as well make them available sorted by key.

In the hello world system, we will have a local database mapping names to keys and to network addresses. In the minimum viable product, a global consensus database. We will, however, urgently need a rendezvous system that allows people to set up wallets and peers without opening ports on a stable network address to the internet. Arguably, the minimum viable product will have a global database mapping between keys and names, but also a nameserver system, wherein a host without a stable network address can log in to a host with a stable network address, enabling rendezvous. When one identity has its name servers registered in the global consensus database, it always tries to log in to those and keep the connection alive with a ping that starts out frequent, and then slows down on the Fibonacci sequence, to one ping every 1024 seconds plus a random number modulo 1024 seconds. At each ping, the client tells the server when the next ping is coming, and if the server does not get the expected ping, the server sends a nack. If the server gets no ack, it logs the client out. If the client gets no ack, it retries; if still no ack, it tries to log in to the next server.
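A sketch of that keepalive schedule as I read it (illustrative, not a wire protocol):

```cpp
// Sketch of the keepalive schedule described above: intervals grow along
// the Fibonacci sequence, capped at 1024 seconds, plus random jitter of
// up to 1024 seconds so that pings from many clients do not synchronize.
#include <algorithm>
#include <cstdint>
#include <random>

class KeepaliveSchedule {
    uint32_t prev = 0, cur = 1;            // Fibonacci state, in seconds
    std::mt19937 rng{std::random_device{}()};
public:
    // Seconds until the next ping; also what we tell the server to expect.
    uint32_t next_interval() {
        uint32_t base = std::min<uint32_t>(cur, 1024);
        if (cur < 1024) {                  // advance until we hit the ceiling
            uint32_t next = prev + cur;
            prev = cur;
            cur = next;
        }
        std::uniform_int_distribution<uint32_t> jitter(0, 1023);
        return base + jitter(rng);
    }
    void reset() { prev = 0; cur = 1; }    // on reconnect, start out frequent again
};
```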
In the minimum viable product, we will require everyone operating a peer wallet to have a static IP address and port forwarding for most functionality to work, which will be unacceptable or impossible for the vast majority of users, though necessarily we will need them to be able to receive money without port forwarding, a static IP, or a globally identified human readable name, by hosting their client wallet on a particular peer. Otherwise no one could get the crypto currency they would need to set up a peer.

Because static IP is a pain, we should also support nameservers on the state run domain name system, as well as nameservers on our peer network, but that can wait a while. And in the end, when we grow so big that every peer is itself a huge server farm, when we have millions of users and a thousand or so peers, the natural state of affairs is for each peer to have a static IP.

Eventually we want people to be able to do without static IPs and port forwarding, which is going to require a UDP layer. On the other hand, we only intend to have a thousand or so full peers, even if we take over and replace the US dollar as the world monetary system. Our client wallets are going to be the primary beneficiaries of rendezvous UDT4.8 routing over UDP.

We also need names that you can send money to, and names under which you can receive it. The current cryptocash system involves sending money to cryptographic identifiers, which is a pain. We would like to be able to send and receive money without relying on identifiers that look like line noise.

So we need a system similar to namecoin, but namecoin relies on proof of work, rather than proof of stake, and the state's computers can easily mount a fifty one percent attack on proof of work. We need a namecoin like system but based on proof of stake, rather than proof of work, so that for the state to take it over, it would need to pay off fifty one percent of the stakeholders – and thus pay off the people who are hiding behind the name system to perform untraceable crypto currency transactions and to speak the unspeakable.

For anyone to get started, we are going to have to enable them to operate a client wallet without a static IP and port forwarding, by logging on to a peer wallet. The minimum viable product will not be viable without a client wallet that you can use like any networked program. A client wallet logged onto a peer wallet automatically gets the name `username.peername`. The peer could give the name to someone else through error, malice or equipment failure, but the money will remain in the client's wallet, and will be spendable when he creates another username with another peer. Money is connected to the wallet master secret, which should never be revealed to anyone, unlike the username. So you can receive money with a name associated with an evil nazi identity as one username on one peer, and spend it with a username associated with a social justice warrior on another peer. No one can tell that both names are controlled by the same master secret. You send money to a username, but it is held by the wallet, in effect by the master secret, not by the user name. That people have usernames, that money goes from one username to another, makes transferring money easy, but by default the money goes through the username to the master secret behind the quite discardable username, and thus becomes anonymous, not merely pseudonymous, after being received.
Once you have received the money, you can lose the username, throw it away, or suffer it being confiscated by the peer, and you, not the username, still have the money. You only lose the money if someone else gets the master secret.

You can leave the money in the username, in which case the peer hosting your username can steal it, but for a hacker to steal it he needs to get your master secret and logon password; or you can transfer it to the master secret on your computer, in which case a hacker can steal it, but the peer cannot, and also you can spend it from a completely different username. Since most people using this system are likely to be keen on privacy, and have no good reason to trust the peer, the default will be for the money to go from the username to the master secret.

Transfers of money go from one username to another username, and this is visible to the person who sent it and the person who received it, but if the transfer is to the wallet and the master secret behind the username, rather than to the username, this is not visible to the hosts. Money is associated with a host, and this association is visible, but it does not need to be the same host as your username. By default, money is associated with the host hosting the username that receives it, which is apt to give a hint as to which username received it, but you can change this default. If you are receiving crypto currency under one username, and spending it under another username on another host, it is apt to be a good idea to change this default to the host that is hosting the username that you use for spending, because then spends will clear more quickly. Or if both the usernames and both the hosts might get investigated by hostile people, change the default to a host that is hosting your respectable username that you do not use much.

We also need a state religion that makes pretty lies low status, but that is another post.

# Mapping between globally unique human readable names and public keys

The blockchain provides a Merkle-patricia dag of human readable names. Each human readable name links to a list of signatures transferring ownership from one public key to the next, terminating in an initial assignment of the name by a previous block chain consensus. A client typically keeps a few leaves of this tree. A host keeps the entire tree, and provides portions of the tree to each client.

When two clients link up by human readable name, they make sure that they are working off the same early consensus, the same initial grant of user name by an old blockchain consensus, and also off the same more recent consensus, for possible changes in the public key that has rightful ownership of that name. If they see different Merkle hashes at the root of their trees, the connection fails. Thus the blockchain they are working from has to be the same originally, and also the same more recently.

This system ensures we know and agree what the public key associated with a name is, but how do we find the network address?
# Mapping between public keys and network addresses

## The Nameserver System

Typically someone is logged in to a host with an identity that looks like an email address, `paf.foo.bar`, where `bar` is the name of a host that is reliably up, and reliably on the network, and relatively easy to find.

You can ask the host `bar` for the public key and *the network address* of `foo.bar`, or conversely the login name and network address associated with this public key. Of course these values are completely subject to the caprice of the owner of `bar`. And, having obtained the network address of `foo.bar`, you can then get the network address of `paf.foo.bar`.

Suppose someone owns the name `paf`, and you can find the global consensus as to what public key controls `paf`, but he does not have a stable network address. He can instead provide a nameserver – another entity that will provide a rendezvous. If `paf` is generally logged in to `foo`, you can contact `foo` to get rendezvous data for `paf.foo`, which is, supposing `foo` to be well behaved, rendezvous data for `paf`.

Starting from a local list of commonly used name server names, keys, and network addresses, you eventually get a live connection to the owner of that public key, who tells you that at the time he received your message, the information is up to date, and, for any globally unique human readable names involved in setting up the connection, he is using the same blockchain as you are using.

Your local list of network addresses may well rapidly become out of date. Information about network addresses flood fills through the system in the form of signed assertions about network addresses by owners of public keys, with timeouts on those assertions, and where to find more up to date information if the assertion has timed out, but we do not attempt to create a global consensus on network addresses. Rather, the authoritative source of information about the network address of a public key comes from successfully performing a live connection to the owner of that public key. You can, and probably should, choose some host as the decider on the current tree of network addresses, but we don't need to agree on the host. People can work off slightly different mappings of network addresses with no global and complete consensus. Mappings are always incomplete, out of date, and usually incomplete and out of date in a multitude of slightly different ways.

We need a global consensus, a single hash of the entire blockchain, on what public keys own what crypto currency and what human readable names. We do not need a global consensus on the mapping between public keys and network addresses.

What you would like to get is an assertion that `paf.foo.bar` has public key such and such, and whatever you need to make a network connection to `paf.foo.bar`, but likely `paf.foo.bar` has a transient public key, because his identity is merely a username and login at `foo.bar`, and a transient network address, because he is behind nat translation. So you ask `bar` about `foo.bar`, and `foo.bar` about `paf.foo.bar`, and when you actually contact `paf.foo.bar`, then, and only then, you know you have reliable information. But you don't know how long it is likely to remain reliable, though `paf.foo.bar` will tell you (and no other source of information is authoritative, or as likely to be accurate).
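A minimal sketch of such a signed, timed-out address assertion, assuming libsodium signatures (the field layout is invented for illustration, not a wire format):

```cpp
// Sketch of a flood-filled network address assertion: the owner of a public
// key signs the claim "I am reachable at this address until this time".
#include <sodium.h>
#include <cstdint>
#include <cstring>

struct AddressAssertion {
    unsigned char owner[crypto_sign_PUBLICKEYBYTES]; // who is asserting
    unsigned char addr[16];                          // IPv6 (or mapped IPv4) address
    uint16_t port;
    uint64_t expires;       // seconds since epoch; stale assertions are dropped
    unsigned char sig[crypto_sign_BYTES];            // signature over the fields above
};

static size_t serialize(const AddressAssertion& a, unsigned char* buf) {
    unsigned char* p = buf;
    std::memcpy(p, a.owner, sizeof a.owner);      p += sizeof a.owner;
    std::memcpy(p, a.addr, sizeof a.addr);        p += sizeof a.addr;
    std::memcpy(p, &a.port, sizeof a.port);       p += sizeof a.port;
    std::memcpy(p, &a.expires, sizeof a.expires); p += sizeof a.expires;
    return p - buf;
}

void sign_assertion(AddressAssertion& a, const unsigned char sk[crypto_sign_SECRETKEYBYTES]) {
    unsigned char buf[sizeof a.owner + sizeof a.addr + sizeof a.port + sizeof a.expires];
    crypto_sign_detached(a.sig, nullptr, buf, serialize(a, buf), sk);
}

bool verify_assertion(const AddressAssertion& a, uint64_t now) {
    if (now >= a.expires) return false;  // timed out: go look for fresher data
    unsigned char buf[sizeof a.owner + sizeof a.addr + sizeof a.port + sizeof a.expires];
    return crypto_sign_verify_detached(a.sig, buf, serialize(a, buf), a.owner) == 0;
}
```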
Information about the mapping between public keys and network addresses that is likely to be durable flood fills through the network of nameservers.

# logon identity

Often, indeed typically, `ann.foo` contacts `bob.bar`, and `bob.bar` needs continuity information, needs to know that this is truly the same `ann.foo` as contacted him last time – which is what we currently do with usernames and passwords.

The name `foo` is rooted in a chain of signatures of public keys and requires a global consensus on that chain. But the name `ann.foo` is rooted in logon on `foo`. So `bob.bar` needs to know that `ann.foo` can log on with `foo`, which `ann.foo` proves by providing `bob.bar` with a public key signed by `foo`, which might be a transient public key generated the last time she logged on, which will disappear the moment her session on her computer shuts down, or might be a durable public key. But if it is a durable public key, this does not give her any added security, since `foo` can always make up a new public key for anyone he decides to call `ann.foo` and sign it, so he might as well put a timeout on the key, and `ann.foo` might as well discard it when her computer turns off or goes into sleep mode. So, it is in everyone's interests (except those of attackers) that only root keys are durable.

`foo`'s key is durable, and information about it is published. `ann.foo`'s key is transient, and information about it is always obtained directly from `ann.foo`, as a result of `ann.foo` logging in with someone, or as a result of someone contacting `foo` with the intent of logging in to `ann.foo`.

But suppose, as is likely, the network address of `foo` is not actually all that durable, is perhaps behind a NAT. In that case, it may well be that to contact `foo`, you need to contact `bar`.

So, `foo!bar` is `foo` logged in on `bar`, not by a username and password, but rather logged on by his durable public key, attested by the blockchain consensus. So, you get an assertion, flood filled through the nameservers, that the network address of the public key that the blockchain asserts is the rightful controller of `foo` is likely to be found at `foo!` (public key of `bar`), or likely to be found at `foo!bar`.

Logons by durable public key will work exactly like logons by username and password, or logons by derived name. It is just that the name of the entity logged on has a different form.

Just as openssh has logons by durable public key, logons by public key continuity, and logons by username and password, but once you are logged on, it is all the same, so you will be able to logon to `bob.bar` as `ann.bob.bar`, meaning a username and password at `bob.bar`; as `ann.foo`, meaning `ann` has a single signon at `foo`, a username and password at `foo`; or as `ann`, meaning `ann` logs on to `bob.bar` with a public key attested by the blockchain consensus as belonging to `ann`.

And if `ann` is currently logged on to `bob.bar` with a public key attested by the blockchain consensus as belonging to `ann`, you can find the current network address of `ann` by asking `bob.bar` for the network address of `ann!bob.bar`.

`ann.bob.bar` is whosoever `bob.bar` decides to call `ann.bob.bar`, but `ann!bob.bar` is an entity that controls the secret key of `ann`, who is at this moment logged onto `bob.bar`.
If `ann` asserts her current network address is likely to last a long time, and is accessible without going through `bob.bar`, then that network address information will flood fill through the network. Less useful network address information, however, will not get far.
diff --git a/docs/peering_through_nat.md b/docs/peering_through_nat.md
deleted file mode 100644
index 883dd11..0000000
--- a/docs/peering_through_nat.md
+++ /dev/null
@@ -1,148 +0,0 @@
---
lang: en
title: Peering through NAT
...

A library to peer through NAT is a library to replace TCP, the domain name system, SSL, and email. This is covered at greater length in [Replacing TCP](replacing_TCP.html).

# Implementation issues

There is a great [pile of RFCs](./replacing_TCP.html) on issues that arise with using udp and icmp to communicate.

## timeout

The NAT mapping timeout is officially 20 seconds, but I have no idea what this means in practice. I suspect each NAT discards port mappings according to its own idiosyncratic rules, but 20 seconds may be a widely respected minimum.

The official maximum time that should be assumed is two minutes, but this is far from widely implemented, so keep alives often run faster.

The minimum socially acceptable keep alive time is 15 seconds. To avoid synch loops, random jitter in keep alives is needed. This is discussed at length in [RFC5405](https://datatracker.ietf.org/doc/html/rfc5405).

An experiment on [hole punching] showed that most NATs had a way longer timeout, and concluded that the way to go was to just repunch as needed. They never bothered with keep alive. They also found that a lot of the time, both parties were behind the same NAT, sometimes because of NATs on top of NATs.

[hole punching]:http://www.mindcontrol.org/~hplus/nat-punch.html
"How to communicate peer-to-peer through NAT firewalls"
{target="_blank"}

Another source says that "most NAT tables expire within 60 seconds, so NAT keepalive allows phone ports to remain open by sending a UDP packet every 25-50 seconds".

The no brainer way is that each party pings the other at a mutually agreed time every 15 seconds. Which is a significant cost in bandwidth. But if a server has 4MiB/s of internet bandwidth, it can support keepalives for a couple of million clients. On the other hand, someone on cell phone data with thirty peers is going to make a significant dent in his bandwidth.

With client to client keepalives, probably a client will seldom have more than a dozen peers. Suppose each keepalive is sent 15 seconds after the counterparty's previous packet, or when an expected keepalive is not received, and each keepalive acks received packets. If not receiving expected acks or expected keepalives, we send nack keepalives (hello-are-you-there packets) one per second, until we give up.

This algorithm should not be set in stone, but rather should be an option in the connection negotiation, so that we can do new algorithms as the NAT problem changes, as it continually does.

If two parties are trying to set up a connection through a third party broker, they both fire packets at each other (at each other's IP as seen by the broker) at the same broker time minus half the broker round trip time. If they don't get a packet in the sum of the broker round trip times, they keep firing with slow exponential backoff until connection is achieved, or until exponential backoff approaches the twenty second limit.
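A sketch of the timing logic for that simultaneous fire (transport and broker chatter omitted; names invented):

```cpp
// Sketch of broker-mediated simultaneous hole punch: both sides aim to
// transmit at the same broker-clock instant, then retry with slow
// exponential backoff up to the ~20 second NAT mapping timeout floor.
#include <chrono>
#include <thread>

using namespace std::chrono;

// agreed_fire_time: the broker-agreed instant, already converted to our clock.
// rtt_broker: our measured round trip time to the broker (the text calls for
// the sum of both parties' broker RTTs; we approximate with twice our own).
template <typename SendFn, typename PunchedFn>
bool hole_punch(steady_clock::time_point agreed_fire_time,
                milliseconds rtt_broker,
                SendFn send_intro, PunchedFn punched) {
    // Fire half a broker round trip early, so both first packets cross
    // the NATs at roughly the same moment.
    std::this_thread::sleep_until(agreed_fire_time - rtt_broker / 2);
    milliseconds wait = 2 * rtt_broker;          // initial patience
    while (wait < seconds(20)) {                 // NAT mapping timeout floor
        send_intro();                            // punches our own NAT outward
        auto deadline = steady_clock::now() + wait;
        while (steady_clock::now() < deadline) {
            if (punched()) return true;          // peer's packet got through
            std::this_thread::sleep_for(milliseconds(10));
        }
        wait = wait * 3 / 2;                     // slow exponential backoff
    }
    return false;
}
```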
Their initial setup packets should be steganographed as TCP startup handshake packets.

We assume a global map of peers that form a mesh whereby you can get connections, but not everyone has to participate in that mesh. They can be clients of such a peer, and only inform selected counterparties as to whom they are a client of.

The protocol for a program to open port forwarding is part of Universal Plug and Play, UPnP, which was invented by Microsoft but is now ISO/IEC 29341 and is implemented in most SOHO routers.

But it is generally turned off by default, or turned off manually. Needless to say, if relatively benign Bitcoin software can poke a hole in the firewall and set up a port forward, so can botnet malware.

The standard for poking a transient hole in a NAT is STUN, which only works for UDP – but generally works – not always, but most of the time. This problem everyone has dealt with, and there are standards, but not libraries, for dealing with it. There should be a library for dealing with it – but then you have to deal with names and keys, and have a reliability and bandwidth management layer on top of UDP.

But if our messages are reasonably short and not terribly frequent, as client messages tend to be, link level buffering at the physical level will take care of bandwidth management, and reliability consists of message received, or message not received. For short messages between peers, we can probably go UDP and retry.

STUN and ISO/IEC 29341 are incomplete, and most libraries that supply implementations are far too complete – you just want a banana, and you get the entire jungle.

Ideally we would like a fake or alternative TCP session setup, using raw sockets, after which you get a regular standard TCP connection on a random port, assuming that the target machine has that service running, and the default path for exporting that service results in a window with a list of accessible services, and how busy they are. Real polish would be hooking the domain name resolution so that looking up a name in the peer top level domain creates a hole, using fake TCP packets sent through a raw socket, and then returns the IP of that hole. One might have the hole go through a wireguard-like network interface, so that you can catch them coming and going.

Note that the internet does not in fact use the OSI model, though everyone talks as if it did. Internet layers correspond only vaguely to OSI layers, being instead:

1. Physical
2. Data link
3. Network
4. Transport
5. Application

And I have no idea how one would write or install one's own network or transport layer, but something is installable, because I see no end of software that installs something, as every vpn does, wireguard being the simplest.

------------------------------------------------------------------------

Assume an identity system that finds the entity you want to talk to.

If it is behind a firewall, you cannot notify it, cannot send an interrupt, cannot ring its phone.

Assume the identity system can notify it. Maybe it has a permanent connection to an entity in the identity system.

Your target agrees to take the call. Both parties are informed by the identity system of each other's IP address and the port number on which they will be taking the call.

Both parties send off introduction UDP packets to the other's IP address and port number – thereby punching holes in their firewalls for return packets.
When they get a return packet, an introduction acknowledgement, the connection is assumed established.

It is that simple.

Of course networks are necessarily non deterministic, therefore all beliefs about the state of the network need to be represented in a Bayesian manner, so any assumption must be handled in such a manner that the computer is capable of doubting it.

We have a finite, and slowly changing, probability that our packets get into the cloud, and a finite and slowly changing probability that our messages get from the cloud to our target. We have a finite probability that our target has opened its firewall, and a finite probability that our target can open its firewall, which transitions to extremely high probability when we get an acknowledgement – which prior probability diminishes over time.

As I observe in [Estimating Frequencies from Small Samples](./estimating_frequencies_from_small_samples.html), any adequately flexible representation of the state of the network has to be complex, a fairly large body of data, more akin to a spam filter than a Boolean.
diff --git a/docs/replacing_TCP.md b/docs/replacing_TCP.md
deleted file mode 100644
index 4cee798..0000000
--- a/docs/replacing_TCP.md
+++ /dev/null
@@ -1,1331 +0,0 @@
---
title: Replacing TCP, SSL, DNS, CAs, and TLS
...

# related

[Client Server Data Representation](client_server.html){target="_blank"}

# Existing work

[µTP]:https://github.com/bittorrent/libutp
"libutp - The uTorrent Transport Protocol library"
{target="_blank"}

[µTP], Micro Transport Protocol, has already been written, and it is just a matter of copying it and embedding it where possible, and forking it if unavoidable. DDOS resistance looks like it is going to need forking.

It implements ledbat, a protocol designed for applications that download bulk data in the background, pushing the network close to its limits, while still playing nice with TCP.

Implementing consensus over [µTP] is going to need [QUIC] style streams, that can slow down or fail without the whole connection slowing down or failing, though it might be easier to implement consensus that just calls µTP for some tasks.

I have not investigated what implementing short fixed length streams over [µTP] would involve. Bittorrent already necessarily does something mighty like that. Maybe it just sequentializes everything. Which kind of makes sense: a single concurrent process managing each connection is easier to program and comprehend, even if it cannot give optimal performance. Obviously it must have a request response layer, documented only in source code. The question then is how it maps that layer onto a µTP connection. You are going to have to copy, not just µTP, but that layer, which should be part of µTP, but probably is not. You will have to factor out what they probably have not cleanly factorized.
Their request response layer is probably somewhat documented in [BEP0055]. I suspect that what I need is not just µTP, but the largest common factors of [BEP0055].

[BEP0055]:https://www.bittorrent.org/beps/bep_0055.html
"BEP0055"
{target="_blank"}

[`ut_holepunch` extension message]:http://bittorrent.org/beps/bep_0010.html
"BEP0010"
{target="_blank"}

[libtorrent source code]:https://github.com/arvidn/libtorrent/blob/c1ade2b75f8f7771509a19d427954c8c851c4931/src/bt_peer_connection.cpp#L1421
"bt_peer_connection.cpp"
{target="_blank"}

µTP does not itself implement hole punching, but interoperates smoothly with libtorrent's [BEP0055] [`ut_holepunch` extension message], which is only documented in [libtorrent source code].

A tokio-rust based µTP system is under development, but was very far from complete last time I looked. Rewriting µTP in rust seems pointless. Just call it from a single tokio thread that gives effect to a hundred thousand concurrent processes. There are several projects afoot to rewrite µTP in rust, all of them stalled in a grossly broken and incomplete state.

[QUIC has grander design objectives]:https://docs.google.com/document/d/1RNHkx_VvKWyWg6Lr8SZ-saqsQx7rFV-ev2jRFUoVD34/edit
{target="_blank"}

[QUIC has grander design objectives], and is a well thought out, well designed, and well tested implementation of no end of very good and much needed ideas and technologies, but relies heavily on enemy controlled cryptography.

Albeit there are some things I want to do: consensus between a small number of peers, by invitation, with each peer directly connected to each of the others, the small set of peers being part of the consensus known to all peers, and all peers always online and responding appropriately, or else they get kicked out (Practical Byzantine Fault *In*tolerant consensus), which it really cannot do, though it might be efficient to use a different algorithm to construct consensus, and then use µTP to download the bulk data.

# Existing documentation

There is a great pile of RFCs on issues that arise with using udp and icmp to communicate, which contain much useful information.

[RFC5405](https://datatracker.ietf.org/doc/html/rfc5405#section-3), [RFC6773](https://datatracker.ietf.org/doc/html/rfc6773), [datagram congestion control](https://datatracker.ietf.org/doc/html/rfc5596), [RFC5595](https://datatracker.ietf.org/doc/html/rfc5595), [UDP Usage Guidelines](https://datatracker.ietf.org/doc/html/rfc8085)

There is a formalized congestion control system, `ECN`, Explicit Congestion Notification. Most servers ignore ECN. On a small proportion of routes, 1%, ECN tagged packets are dropped.

Raw sockets provide greater control than UDP sockets, and allow you to do ICMP-like things through ICMP.

I also have a discussion on NAT hole punching, [peering through nat](peering_through_nat.html), that summarizes various people's experience.

To get an initial estimate of the path MTU, connect a datagram socket to the destination address using connect(2) and retrieve the MTU by calling getsockopt(2) with the IP_MTU option. But this can only give you an upper bound. To find the actual MTU, you have to set the don't-fragment flag (which is these days generally set by default on UDP) and empirically track the largest packet that makes it over this connection. Which TCP does.
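A minimal sketch of that recipe on Linux, where the IP_MTU socket option is available (error handling mostly omitted):

```cpp
// Sketch: upper-bound path MTU estimate for a UDP destination on Linux,
// per the connect(2) + getsockopt(IP_MTU) recipe above. The true path MTU
// can only be confirmed by probing with don't-fragment packets.
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int mtu_estimate(const sockaddr_in& dest) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) return -1;
    // Ask the kernel to do path MTU discovery on this socket (sets DF).
    int pmtu = IP_PMTUDISC_DO;
    setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &pmtu, sizeof pmtu);
    int mtu = -1;
    if (connect(fd, reinterpret_cast<const sockaddr*>(&dest), sizeof dest) == 0) {
        socklen_t len = sizeof mtu;
        getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &len);  // kernel's current estimate
    }
    close(fd);
    return mtu;  // upper bound; refine by tracking what actually gets delivered
}
```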
## first baby steps

To try and puzzle this out, I need to build a client and server that can listen on an arbitrary port, tell me about the messages they receive, and send messages to an arbitrary hostname:port or network address:port. When such a program receives a packet that is formatted for it, it will display the information in that packet, and obey the command in that packet, which will typically be a command to send a reply that depicts what is in the packet it received (which probably got transformed by passing through multiple nats), and/or a command to display what is in the packet, which is typically a depiction of how the packet to which this packet is a reply got transformed.

This test program sounds an awful lot like ICMP, which is best accessed through raw sockets. Might be a good idea to give it the capability to send ICMP, UDP, and fake TCP.

Raw sockets provide the lowest level access to the network available from userspace. An immense pile of obscure and complicated stuff is in the kernel.

# What the API should look like

It should be a consensus API for consensus among a small number of peers, rather than a message API, message response being the special case of consensus between two peers, and broad consensus being constructed out of a large number of small invitation based consensi.

A peer explicitly joins the small group when its request is acked by a majority, and rejected by no one.

On the other hand, this involves re-inventing networking from scratch, as compared to simply copying http/2, or some other reliable UDP system.

Total rewrites, however desirable and necessary, always fail.

So on reflection this is a blue sky proposal - likely to involve immense delay:

I need to think about the way things should be done - but I don't want to get lost in the weeds. I have repeatedly wasted a great deal of time re-inventing stuff from scratch, only to find that when I was finished, I had something vastly inferior to what already existed, so I wound up tossing my work, and using someone else's library with minimum adaptation.

Many a time I see something encrusted with ancient history, where backward compatibility means they cannot fix old mistakes; I design something new and fresh, and vastly superior, and then discover that there were one hundred and one issues that the old history encrusted thing had encountered and dealt with, and that I had not foreseen. Not all of that mighty pile of code is crap to work around past mistakes which must continue to be supported; a lot of it is issues I had not foreseen having to deal with, and had not planned a path to dealing with.

When implementing stuff from scratch, all too often one discovers there are no end of reasons for all the stuff one thought bad and unnecessary in existing libraries.

But on with the vision. Though it will likely be vastly faster to just fix someone else's library to have real security.

Although the api represents messages, rather than connections, it will implicitly have a very large number of connections, in that a connection is your current state with a counterparty, expected protocols (message types), and all that.

For an app to poll a very large number of connections over the network, `select` does not cut the mustard.
Network apis have been evolving, each in its own idiosyncratic way, towards the app making O(1) additions and deletions to the list of counterparties on the network whose messages it is listening to, and getting notifications that are O(number of events) rather than O(number of counterparties).

The way this should be done is a linked list of data structures containing events, which the app can poll locklessly, or wait on (with a timer event guaranteed to appear in the list eventually if it is waiting on it). If the app fails to free anything from the list after an unreasonably long time, suggesting that the app has shut down ungracefully or crashed, and there are rather too many things on the list, the process that is putting things on the list will start by pushing back on the parties sending messages to the app, and end by shutting down their connections and discarding their data. The network events live entirely in memory and are volatile. If they represent long lived relationships, it is up to the app to commit the information that they represent to disk.

Every message has a public key of the sender, a public key of the recipient, and potentially an in-regards-to hash, a reply-to hash, and an in-reply-to hash (a hypothetical header along these lines is sketched below). Some or all of these hashes may be null. It seldom makes sense for all of them to be null, and it seldom makes sense for all of them to be non null. Usually reply-to is null, and it does not always make sense for it to be non null.

The reply-to field opens up a very large can of worms, in that its main use is to reference a third party message that came from a third party server, with its own type information and sender public key, and then how does the sender know the recipient has or can obtain that message?

Every hash and every public key represents a potential endpoint, and thus represents an additive type, or rather gives the system potential clues on how to discover a mutually known additive type. (Reflect on the slow and chaotic semi automated complexity of how the many protocols involved in sending and receiving an email message are discovered, every time, for every email message.)

Some of the time, the message type is only known from one of these hashes – they imply the type information, without which the recipient would not know how to parse the message, and the recipient has to be able to recognize them before he can recognize anything else. And some of the time, figuring out the message type from these hashes is non trivial, or just flat out fails. No general automatic one size fits all procedure can work on every mysterious second party hash. This is a problem that has to be dealt with ad hoc, use case by use case, protocol by protocol, message type by message type.

Not all messages can be sent reliably, but the sender gets a notification event – failed, succeeded, replied to, or unlikely to be known – and the sender can immediately find out either the likely timing of such notification, or that the likely timing of such notification is unknown; and usually, that the likely timing of such notification is unknown generates an exception.
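A hypothetical sketch of such a message header (field names and sizes invented; a real design would pin down the hash and key types):

```cpp
// Hypothetical message header matching the description above: sender and
// recipient public keys, plus three optional hashes. Null hashes are
// represented with std::optional rather than magic zero values.
#include <array>
#include <cstdint>
#include <optional>
#include <vector>

using PublicKey = std::array<uint8_t, 32>;  // e.g. a Ristretto public key
using Hash      = std::array<uint8_t, 32>;  // e.g. a 256 bit message hash

struct Message {
    PublicKey sender;
    PublicKey recipient;
    std::optional<Hash> in_regards_to;  // conversation or topic this belongs to
    std::optional<Hash> reply_to;       // third party message being referenced
    std::optional<Hash> in_reply_to;    // the message this directly answers
    std::vector<uint8_t> payload;       // parsed according to the inferred type
};

// On the wire, within an established connection, the hashes would be replaced
// by small sequential identifiers unique to the connection, and the keys
// would be implicit in the connection itself, as described below.
```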
The api is potentially multilayered – the message may well get translated to a multitude of similarly structured messages, that set up the connection, find out information about the recipient, all that stuff, and when those messages go on the wire, they do not necessarily have any of this stuff – commonly they just have the network address, the port, and some numbers that uniquely identify the context, which numbers are unique to the connection but, unlike the hashes from which they are derived, not globally unique; sequential identifiers, not hashes. But at the top level, the network address, the port, and all that stuff is just not represented, except implicitly in that the public key of the recipient may well get looked up in a hash table that may well have the network address and the port.

On the wire, network address and port serve the function of in-regards-to, and will wrap stuff that provides a finer grained function of in-regards-to and in-reply-to -- as I said, multilayered, with the hashes being internally mapped to data that serves equivalent functionality. Network address and port are the outermost layer on the wire.

On the wire, once a connection is established, the sender and recipient public keys are implicit in the ip header, and the rest is opaque payload, maximum payload being 1KiB. Inside the payload, the representation depends on the message type, which was established when the connection was established – the in-reply-to of the contained message is the unique sequential nonce of the message being replied to, rather than the hash of that message.

In the api, the application and api know the message type, because otherwise the api just would not work. But on the rare occasions when the message is represented globally, outside the api, *then* it needs a message type header.

# TCP is broken

TCP was designed in more trusting times, when the name system consisted of a widely shared hosts file, and everyone trusted everyone.

Over the years people have piled warts on top of TCP, and warts on top of warts, to fix one problem after another, and every fix results in additional round trips.

Thus "Cloudflare is checking your browser, you will be redirected shortly".

Every additional round trip before a web page comes up results in a significant loss of viewers. Hence http2. Which fails to fix the DDOS and Cloudflare problem.

TCP is a major problem, which is slowing down the internet. DDoS protection and the certificate mess are warts growing on top of warts.

Any business that resists corporate cancer is going to come under DDoS, and if it employs a DDoS resistance service, that service is likely to place pressure on the business to do political stuff that is counterproductive to pursuing a profit. And even if it does not, the DDoS service slows down people trying to view the business website.

If the TCP replacement fixes those warts, you get more views.

# Domain name system and SSL is broken

Any organization that has a certificate authority in its pocket can perform a man in the middle attack on an SSL connection, though the CAA domain name record somewhat mitigates this problem.

We also need to replace the TCP/SSL/CA/DNS system because there is money in it. A great deal of money.

The trouble with an ICO (initial coin offering) is that the issuer has no obligation to do anything other than take the money and run.
We are moving to an economy where much of the value is "goodwill", "goodwill" being names with reputations and relationships. The blockchain (or blockdag, since blockdags theoretically have better scaling than blockchains) could be used to render this value liquid in IPOs by having both names and money on the blockchain.

Atomic transactions between blockchains, plus names with money on the blockchain: a replacement for TCP/SSL/CAs/DNS could support sovereign corporations on the blockchain, so that an ICO could be an IPO (Initial Public Offering). If the blockchain is a name service as well as a money service, it could give the investors ownership of the name. The owners of examplecorp shares get to designate the board public key, and the board gets to designate the public key of CEO@examplecorp from time to time, thus rendering the value of a name potentially liquid.

Cryptocurrency exchanges are run by crooks, and are full of crooks each trying to scam all the other crooks.

If you don't know who the pigeon is, you are the pigeon.

A healthy cryptocurrency market needs to leave the cryptocurrency exchanges behind, replacing them with atomic blockchain transactions between separate blockchains. They are dangerously centralized, and linked to a corruptly regulated finance and accounting system, which corruption we saw with the Great Minority Mortgage Meltdown and the mortgage backed security market from 2005 November to 2007, and saw with MF Global. Jon Corzine did worse than embezzle client funds. He embezzled client funds legally.

Demand for crypto currencies is driven in substantial part by the fact that recent regulations have cheerfully set aside laws on fiduciary duty that are millennia old. The exchanges cheerfully adhere to such regulations as they find dangerously convenient, while taking advantage of cryptocurrency to avoid those regulations that they find inconvenient.

The banks, the stock exchanges, and the big accounting firms are regulated agencies whose regulators are in their pocket. The crypto currency exchanges are semi regulated, taking advantage of regulations written for those who have regulators in their pocket.

The cryptocurrency market needs to get rid of exchanges, starting with cryptocurrency exchanges, and proceeding to get rid of stock exchanges.

An exchange exists to provide an escrow that faithfully observes its fiduciary duty. And there have been a great many recent examples of such entities getting up to no good, and in the case of the mortgage backed security market, up to no good with enormous amounts of money.

A cryptocurrency with a name system could eat their lunch, greatly enriching its founders in the process.

# Networking itself is broken

But that is too hard a problem to fix.

I had to sweat hard setting up Wireguard, because it pretends to be just another `network adaptor` so that it can sweep away a pile of issues as out of scope, and reading posts and comments referencing these issues, I suspect that almost no one understands them, or at least no one who understands them is posting about them. They have a magic incomprehensible incantation which works for them in their configuration, and do not understand why it does not work for someone else in a subtly different configuration.
-
-## Internet protocol: too many layers of abstraction
-
-I have to talk internet protocol to reach other systems over the internet, but
-internet protocol is a messy pile of ad hoc bits of software built on top of
-ad hoc bits of software, and the reason it is hard to understand the nuts and
-bolts when you actually try to do anything useful is that you do not
-understand, and indeed almost no one understands, what is actually going
-on at the level of network adaptors and internet switches. When you send a
-udp packet, you are already at a high level of abstraction, and the
-complexity that these abstractions are intended to hide leaks.
-
-And because you do not understand the intentionally hidden complexity
-that is leaking, it bites you.
-
-### Adaptors and switches
-
-A private network consists of a bunch of `network adaptors` all connected to
-one `ethernet switch`, and its configuration consists of configuring the
-software on each particular computer with each particular `network adaptor`
-to be consistent with the configuration of each of the others connected to
-the same `ethernet switch` – unless you have a `DHCP server` attached to the
-network, in which case each of the machines gets a random, and all too
-often changing, configuration from that `DHCP server`, but at least it is
-guaranteed to be consistent with the configuration of each of the other
-`network adaptors` attached to that one `ethernet switch`. Why do DHCP
-configurations not live forever? Why do they not acknowledge the machine’s
-human readable name? Why does the ethernet switch not have a human
-readable name, and why does the DHCP server have a network address
-related to that of the ethernet switch, but not a human readable name
-related to that of the ethernet switch?
-
-What happens when you have several different network adaptors in one
-computer?
-
-Obviously an IP address range has to be associated with each network
-adaptor, so that the computer can dispatch packets to the correct adaptor.
-And when the network adaptor receives a packet, the computer has to
-figure out what to do with it. And what it does with it is the result of a pile
-of undocumented software executing a pile of undocumented scripts.
-
-If you manually configure each particular machine connected to an
-ethernet switch, the configuration consists of arcane magic formulae
-interpreted by undocumented software that differs between one system and
-the next.
-
-As rapidly becomes apparent when you have to deal with more than one
-adaptor, connected to more than one switch.
-
-Each physical or virtual network adaptor is driven by a device driver,
-which is different for each physical device and operating system. From the
-point of view of the software, the device driver api *is* the network adaptor
-programmer interface, and it does not care about which device driver it is,
-so all network adaptors must have the same programmer interface. And
-what is that interface?
-
-Networking is a wart built on top of warts built on top of warts. IP6 was
-intended to clean up this mess, but kind of collapsed under rule by
-committee, developing a multitude of arcane, overly complicated, and overly
-clever cancers of its own, different from, and in part incompatible
-with, the vast pile of cruft that has grown on top of IP4.
-
-The committee wanted to throw away the low order sixty four bits of
-address space to use to post information for the NSA to mop up, and then
-other people said to themselves, “this seems like a useless way to abuse
-the low order sixty four bits, so let us abuse it for something else. After
-all, no one is using it, nor can they use it, because it is being abused”.
-But everyone whose internet facing host has been assigned a single address,
-which means has actually been assigned $2^{64}$ addresses because he has
-sixty four bits of useless address space, needs to use it, since he probably
-wants to connect a private in house network through his single internet
-facing host, and would like to be free to give some of his in house hosts
-globally routable addresses.
-
-In which case he has a private network address space, which is a random
-subnet of fd00::/8, and a 64 bit subnet of the global address space, and
-what he wants is that he can assign an in house computer a globally routable
-address, whereupon anything it sends that has a destination that is not on
-his private network address space, nor his subnet of the globally routable
-address space, gets sent to the internet facing network interface.
-
-Further, he would like every computer on his network to be automatically
-assigned a globally routable address if it uses a name in the global system,
-or a private fd00:: address if it is using a name not in the global system,
-so that the first time his computer tries to access the network with the
-domain name he just assigned, it gets a unique network address which will
-never change, and a reverse dns that can only be accessed through an address
-on his private network. And if he assigns it a globally accessible name, he
-would like the global dns servers and reverse dns servers to automatically
-learn that address.
-
-This is, at present, doable by DDI (DNS, DHCP, and IPAM, the I standing
-for IP Address Management), which updates both your DHCP server and your
-DNS server. Except that hardly anyone has an in house DNS server that
-serves up his globally routable addresses. In practice, everyone relies on
-named entities having extremely durable network addresses, which are a
-pain and a disaster to dynamically update, or they use dynamic DNS, not
-IPAM.
-
-What would be vastly more useful and usable is that your internet facing
-peer routed globally routable packets to and from your private network,
-and machines booting up on your private network automatically received
-static addresses corresponding to their names.
-
-Globally routable subnets can change, because of physical changes in the
-global network, but this happens so rarely that a painful changeover is
-acceptable. The IP6 fix for automatically accommodating this issue is a
-cumbersome disaster, and everyone winds up embedding their globally
-routable IP6 subnet address in a multitude of mystery magic incantations,
-which, in the event of a change, have to be painstakingly hunted down and
-changed one by one, so the IP6 automatic configuration system is just a
-great big wart in a dinosaur’s asshole. It throws away half the address
-space, and seldom accomplishes anything useful.
-
-# Distributed Denial of Service attack
-
-At present, resistance to Distributed Denial of Service attacks rests on
-dangerously powerful central authorities, in particular Cloudflare, whose
-service, in addition to being dangerously centralized, is expensive and
-poor.
-
-The TCP replacement needs an adjustable proof of work (pow) handshake
-as the first part of the connection handshake, the proof of work request
-being the first server packet in the four packet handshake.
-
-First packet: client requests connection. Second packet: server requests
-work, and supplies a durable and a short lived public key. Third packet:
-client supplies work and offers a transient public key, making
-communication possible, plus the message it is trying to send the server,
-or the first part of that message.
-
-The work demanded goes up as the server load increases, thus fixing the
-horrors of DDoS protection.
-
-## Key agreement
-
-Key agreement needs to be part of the TCP replacement handshake, rather
-than a layer on top, to reduce round tripping.
-
-The name system needs to be integrated with the key system, so that you get
-the key when you get the network address associated with the name, and the
-key/name pairing needs to be blockchain secured, so you don’t have one
-thousand certificate authorities each with the authority to mount a man in
-the middle attack.
-
-## replacement handshake for publicly identified server
-
-The TCP replacement handshake needs to be a four phase handshake.
-
-1. Client->Server: Give me a connection, here are my parameters, here is
-   my session key.
-
-1. Server->Client: Here is a proof of work request, my parameters, and a
-   keyed hash of your and my parameters. Ask again with proof of work, the
-   same parameters, and the keyed hash.
-
-    Server then throws away the request, allocating no memory.
-
-1. Client->Server: OK, here I am again, with all that stuff you asked for.
-
-    This includes a konce (key used once, a single use elliptic point),
-    and assumes that the client reliably knows the server public key in
-    advance. This protocol is inappropriate to signons that are restricted
-    to identified entities, because we probably do not want everyone to
-    know who is identified.
-
-1. Server checks the poly1305 authentication to ensure that this is a
-   real client reply to a real and recent server reply. Then it checks
-   the proof of work.
-
-    If the proof of work passes, Server allocates memory, generates and
-    stores a session key, and stores connection parameters, the client
-    and server session keys among them.
-
-1. Server->Client: OK, here is my session key, authenticated but not
-   signed by my permanent key, and stuff; now you can start sending
-   actual data.
-
-Thus we can integrate the TCP handshake, the encryption handshake, and the
-innumerable DDoS protection handshakes (“Cloudflare is checking your
-browser; oops, your browser did not pass, here is a captcha”) at the cost
-of a single additional one way trip, half a round trip.
-
-Instead of the person establishing the connection fuming while round trip
-after round trip goes through, we get all that stuff at the cost of one
-additional half round trip.
-
-### pow implementation
-
-Each sequential proof of work request contains a 64 bit sequential integer.
-The integer starts at a random 63 bit value, to ensure that every possible
-successful proof of work ever used is unique in the universe. The
-sequential integer is treated as a windowed value into a 512 bit integer,
-whose high order part is an unshared secret that remains unchanged for the
-duration.
-
-From that 512 bit value, the server generates a unique XChaCha20 512 bit
-value, 256 bits of which are used to generate a Poly1305 authenticator for
-the proof of work request.
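-
-By way of illustration, a minimal sketch of that derivation, assuming
-libsodium; the function name and the exact layout of the 512 bit value are
-illustrative, not a specification:
-
-```c++
-// Sketch only: derive the Poly1305 authenticator for a pow request from
-// the 64 bit sequence number and an unshared server secret.
-#include <sodium.h>
-#include <cstddef>
-#include <cstdint>
-#include <cstring>
-
-// server_secret: the high order, never shared part of the 512 bit value.
-// seq: the 64 bit sequential integer windowed into the low order bits.
-void pow_request_mac(
-    unsigned char mac[crypto_onetimeauth_BYTES],  // 16 byte tag
-    const unsigned char server_secret[crypto_stream_xchacha20_KEYBYTES],
-    std::uint64_t seq,
-    const unsigned char *request, std::size_t request_len) {
-    unsigned char nonce[crypto_stream_xchacha20_NONCEBYTES] = {0};
-    std::memcpy(nonce, &seq, sizeof seq);  // sequence number as the nonce
-    unsigned char key[crypto_onetimeauth_KEYBYTES];  // 256 keystream bits
-    crypto_stream_xchacha20(key, sizeof key, nonce, server_secret);
-    crypto_onetimeauth(mac, request, request_len, key);
-    sodium_memzero(key, sizeof key);
-}
-```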
-
-If the server receives a completed proof of work request containing the
-authentication, it knows it comes from an entity at that network address
-that was able to receive the proof of work request. Knowing it is talking
-to real network addresses, it can derank network addresses that create
-excessive burdens, so that they cannot slow down everyone else, only
-themselves.
-
-When it receives the completed proof of work, it first checks the sequence
-number to ensure it is a recently issued request for work, then checks if
-there is already a channel allocated for that pow, using a table of doubly
-linked lists of recently allocated channels, indexed by the low order part
-of the pow sequence number. If it discovers it has already passed that
-proof of work and allocated a channel, it moves that proof of work to the
-head of its list, so that the next check will be instant, just in case it
-is about to receive a million copies of that proof of work. Then it checks
-for revealed bits from those generated by XChaCha20. Then it checks the
-work and the Poly1305 authentication.
-
-Checking if there is already a channel allocated overlaps and intersects
-with the presence notification protocol. We want to have a very large
-number of inactive presences without secrets or network addresses in the
-database, a large number of long lived active presences in memory, with
-secrets that are not paged to disk (`sodium_allocarray`), and a
-considerably smaller number of considerably shorter lived channels with
-flow control and buffering. A presence can only exchange short messages
-that fit in one packet, and only one message can be active in any round
-trip time. You open a presence, and the presence can then open a channel.
-
-We probably want to do the checks in whatever order is empirically most
-efficient for the type of DDoS attacks that we encounter in practice, the
-most common probably being garbage random values that bear no particular
-resemblance to valid connection attempts.
-
-The next problem will be valid connections that then make excessive
-demands. These get deranked by the next layer, and they will then have to
-make a new connection, which will face increasing pow and discrimination
-against their network address.
-
-## replacement handshake for limited circulation server
-
-In this case the server is the gateway for a group, possibly many groups,
-whose unique id is not widely known. It is analogous to a closely kept
-email address.
-
-The TCP replacement handshake needs to be a four phase handshake.
-
-1. Client->Server: Give me a connection, here are my parameters,
-   here is a clue about what private group I want to connect to.
-
-1. Server->Client: Here is a proof of work request, my parameters,
-   including a use once elliptic point, and a keyed hash of your and
-   my parameters. Ask again with proof of work, the same parameters,
-   and the keyed hash.
-
-    Server then throws away the request, allocating no memory.
-
-1. Client->Server: OK, here I am again, with all that stuff you asked for.
-
-    At this point, client has given server a clue about which private
-    group it wants to connect to, and server has given client a clue
-    about which private group it expects membership of, and therefore
-    what public key the client should attempt to communicate with.
-
-1. Server checks the keyed hash to ensure that this is a real client
-   reply to a real and recent server reply. Then it checks the proof
-   of work.
-
-    If the proof of work passes, Server allocates memory.
-
-    Then it generates a transient secret from the konces (keys used
-    once, single use elliptic points), and uses it to decrypt the
-    client durable public key, verifying that the client does indeed
-    know the transient scalar. If the client durable key is OK (sign
-    on allowed), it constructs a shared secret from all four keys, the
-    sum of two secrets multiplying the sum of two elliptic points, and
-    we now have an encrypted stream associated with the port number
-    and network addresses.
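-
-A hedged sketch of that final key combination, assuming libsodium’s
-ristretto255 group (the document does not commit to a curve; ristretto255
-is an assumption for illustration):
-
-```c++
-// Sketch only: shared secret = (our two scalars, summed) times (their
-// two public points, summed), so both sides arrive at (a+b)(c+d)G.
-#include <sodium.h>
-#include <stdexcept>
-
-void four_key_shared_secret(
-    unsigned char secret[crypto_scalarmult_ristretto255_BYTES],
-    const unsigned char our_transient[crypto_core_ristretto255_SCALARBYTES],
-    const unsigned char our_durable[crypto_core_ristretto255_SCALARBYTES],
-    const unsigned char their_transient[crypto_core_ristretto255_BYTES],
-    const unsigned char their_durable[crypto_core_ristretto255_BYTES]) {
-    unsigned char s[crypto_core_ristretto255_SCALARBYTES];
-    unsigned char p[crypto_core_ristretto255_BYTES];
-    crypto_core_ristretto255_scalar_add(s, our_transient, our_durable);
-    if (crypto_core_ristretto255_add(p, their_transient, their_durable) != 0
-        || crypto_scalarmult_ristretto255(secret, s, p) != 0)
-        throw std::runtime_error("bad point in handshake");
-    sodium_memzero(s, sizeof s);
-}
-```
-
-Forward secrecy comes from the transient pair, identity from the durable
-pair, and because the result is a Diffie-Hellman style shared secret
-rather than a signature, neither party can prove to a third party what
-the other said.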
-
-# Summary of the replacement
-
-Thus we can integrate the TCP handshake, the encryption handshake, and the
-innumerable DDoS protection handshakes (“Cloudflare is checking your
-browser; oops, your browser did not pass, here is a captcha”) at the cost
-of a single additional one way trip, half a round trip.
-
-Instead of the person establishing the connection fuming while round trip
-after round trip goes through, we get all that stuff at the cost of one
-additional half round trip.
-
-# messages, not streams
-
-TCP sockets are designed for synchronous procedural programming, on
-machines with very limited memory, processing limitless streams. They are
-now almost always used for message processing from event oriented
-asynchronous code, with a messaging layer on top of the endless stream
-layer. The replacement needs to have the application layer sending
-messages and receiving messages in events. The application layer should
-not have to deal with sockets and streams. Rather, it sends a message to a
-destination identified by its durable public key, and gets a reply, where
-the reply might be that the socket could not be opened, or that the socket
-was open but the reply timed out, among other things. When sending a
-message, there is a time to wait for a response before giving up, and a
-time for the socket that may be created to live idle.
-
-# Proposed replacement
-
-[QUIC] is the current TCP replacement. Also known as HTTP/3.
-
-[QUIC]: https://github.com/private-octopus/picoquic
-
-We have no alternative but to interface to the vast HTTP/2 HTTP/3
-ecosystem. The wallet is going to have to talk as a client to legacy
-server http/3 devices, and accept their CA certificates, preferably
-subject to Zooko scrutiny, and legacy http/3 client devices are going to
-have to talk to our wallet (after their wallet has downloaded a zooko
-based certificate from the server wallet).
-
-Talking HTTP/3 means being wide open to DDoS attack, so that you are
-forced to use Cloudflare. When a device with our version of QUIC talks to
-another device with our version of QUIC, it has to implement our DDoS
-resistance, and Zooko in place of CA. But when it talks to a legacy
-HTTP/3 device, it has to lay itself wide open to DDoS attack and CA
-interception.
-
-Backwards compatibility with insecure systems always creates a massive
-security hole. On the one hand, every build from scratch project dies. On
-the gripping hand, every attempt to do fax over the internet failed and
-was eventually replaced by pdf attachments to email. Backwards
-compatibility was simply too crippling, and backwards compatibility with
-QUIC is going to cripple security.
-
-Instead of putting the secure system transparently as an alternate protocol
-within the insecure system, you non transparently put the insecure system
-as a downgrade protocol within the secure system, which means our
-version of QUIC simply is not going to talk to older versions of QUIC
-unless you take some special measures to tell it to do so or enable it to
-do so for that particular communication end point.
-
-The least friction interface would be that every time a new SSL name is
-encountered, we get a window saying “This authority claims that this is
-this entity. Trust this authority for this entity?” And if there is a
-change of authority, complain. Wrap backwards compatibility in Zooko
-vouched certificates, pinned certificates, and the CAA record indicating
-who is the right issuer for the SSL certificate.
-
-We have to have downgrade capability, but it has to be an afterthought,
-slipped in as a special path and special case, as user friendly as
-possible, but no friendlier.
-
-QUIC’s one way streams are messages.
-
-Its two way streams are backwards compatibility with TCP.
-
-It solves the long fat pipe problem with flexible window size.
-
-It puts multiple objects and messages in one connection, each in its own
-stream, so that one message does not have to wait for lost packets in
-another message to be resolved.
-
-TCP flow control is constructed around pushback – that the sender should
-not send data faster than the receiver is able and willing to handle it.
-Normally there is one thread, or pool of threads, handling the data
-received. To prevent DDoS, we should probably only have one unit of
-pushback per pair of network addresses. If someone has a slow receiver
-thread pool, and a fast receiver thread pool communicating with the same
-machine, he needs to break the slow receiver communication into lots of
-small requests and replies, hence one channel per pair of network
-addresses.
-
-QUIC implements everything you need to have one channel per pair of
-network addresses, multiplexing many request-replies into a single stream,
-many channels in one channel, but does not in fact implement one channel
-per pair of network addresses in the sense of one unit of packet flow
-control and one unit of DDoS monitoring per pair of network addresses.
-
-Finer grained flow control should be implemented as request reply on
-messages that may well be much larger than a packet, but much smaller
-than memory.
-
-In the request reply model, if the requests and replies are reasonably
-short, pushback does not matter, and becomes a representation of flow
-control. It is seldom sane to download enormous blocks of data as a single
-message, and we probably just should not do it – restrict replies to what
-can reasonably fit into memory, so that a very large message that the
-receiver is processing one chunk at a time has to get acks of its
-submessages, separate from the flow control system.
-
-What the LEMP stack does with request headers is dynamically allocate
-8KiB buffers, stuff headers into part or the whole of an 8KiB buffer, and
-if a header is bigger than 8KiB, arbitrarily truncate it, which suggests
-that this is a tactic to minimize the overheads of dynamically allocating
-many moderate sized buffers of variable size. Experimenting, I find that
-dynamic allocation tends to be the major cost in many programs, but if you
-do it LEMP style, dynamic allocation is unlikely to be a significant cost.
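-
-A minimal sketch of that fixed size buffer tactic (illustrative, not
-LEMP’s actual code): keep a free list of 8KiB buffers, so the common case
-is a pop and a push rather than a trip into the general purpose allocator.
-
-```c++
-// Sketch only: LEMP style fixed size buffers with one free list, and the
-// truncation policy described above: anything bigger than one buffer is
-// arbitrarily cut off.
-#include <cstddef>
-#include <cstring>
-#include <memory>
-#include <vector>
-
-constexpr std::size_t buf_size = 8 * 1024;
-
-class BufferPool {
-    std::vector<std::unique_ptr<char[]>> free_;
-public:
-    std::unique_ptr<char[]> get() {
-        if (free_.empty()) return std::make_unique<char[]>(buf_size);
-        auto b = std::move(free_.back());
-        free_.pop_back();
-        return b;
-    }
-    void put(std::unique_ptr<char[]> b) { free_.push_back(std::move(b)); }
-};
-
-// Copy a header into a pooled buffer, truncating at buf_size.
-std::size_t load_header(BufferPool &pool, std::unique_ptr<char[]> &out,
-                        const char *hdr, std::size_t len) {
-    out = pool.get();
-    std::size_t n = len < buf_size ? len : buf_size;
-    std::memcpy(out.get(), hdr, n);
-    return n;
-}
-```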
-
-QUIC has a pile of feature bloat:
-
-+ The push feature is married to html, and belongs in the webserver
-  and the browser, not in the protocol. Something sending a request
-  message should be aware it might have several messages in reply,
-  depending on the kind of the request, and simply have a message
-  handler that can deal with many messages.
-
-+ We don’t really need the unique and sequential message id if finding
-  and interpreting the message id is part of how the response handler
-  handles the messages – best to hand that as far down into the
-  endpoints as possible.
-
-+ Its data format, header and frames, is married to html, which is
-  always sending repetitious and redundant information, treating
-  related fragments of html as absolutely distinct. It implements html
-  specific compression, HPACK.
-
-It suffers from the SSL/TLS problem of a thousand CA authorities, NSA
-friendly encryption, and, being funded in large part by Cloudflare, has no
-substantial defense against DDoS.
-
-It fails to support rendezvous routing.
-
-But it has already struggled with and solved a thousand problems whose
-solutions I have been confusedly struggling with. So the obvious solution
-is to adopt QUIC, rip out the domain name system, add DDoS resistance, rip
-out NSA friendly encryption in favour of the standard and recommended
-libsodium packet encryption (XChaCha20-Poly1305), and, for immortality,
-rip out the 62 bit compressed integers in favour of unlimited precision
-windowed integers (with a negotiated limit on precision that will in
-practice always be 64 bits for the next several centuries).
-
-XChaCha20 is not the fastest on a long stream, but it has key agility, can
-encrypt arbitrary length values, including a single bit, and is as fast as
-ChaCha20 without any limits on the nonce.
-
-QUIC’s messaging is excessively married to HTTP. We need a generic
-messaging system where every message has a short number indicating the
-destination handler, and you can generate a handler, a code continuation,
-and get a number assigned to it on the fly, so that you can send a
-message, and the reply goes to your code continuation.
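-
-A minimal sketch of such a dispatch table (all names illustrative):
-handler numbers are issued on the fly, travel in the outgoing message, and
-the reply carries the number of the continuation that should consume it.
-
-```c++
-// Sketch only: route messages by short handler number, with numbers for
-// reply continuations issued on the fly.
-#include <cstddef>
-#include <cstdint>
-#include <functional>
-#include <span>
-#include <unordered_map>
-
-using Handler = std::function<void(std::span<const std::byte>)>;
-
-class Dispatch {
-    std::unordered_map<std::uint32_t, Handler> handlers_;
-    std::uint32_t next_ = 0;
-public:
-    // Register a continuation and get its number.
-    std::uint32_t expect_reply(Handler h) {
-        handlers_[next_] = std::move(h);
-        return next_++;
-    }
-    // Route an incoming message to its continuation and retire it.
-    void on_message(std::uint32_t n, std::span<const std::byte> body) {
-        if (auto it = handlers_.find(n); it != handlers_.end()) {
-            Handler h = std::move(it->second);
-            handlers_.erase(it);
-            h(body);
-        }  // else: unknown handler number, drop it
-    }
-};
-```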
-
-We need to lift as much of the [QUIC] design as possible, and also make
-things act much like TCP, so that existing NATs will not notice anything
-has changed. Thus packets will continue to be sent to and from a widely
-known port that is usually below 1024 on the server, from a random port on
-the client in the range 49152--65535. A connection will continue to
-require a three phase handshake which creates a socket, albeit our sockets
-will be very different.
-
-With a rendezvous, both peers will use the same socket in the range
-1024--49151.
-
-The rendezvous handshake will look like the TCP handshake Syn Syn-Ack Ack,
-but both peers will send syn packets, both send syn-ack packets, and both
-send ack packets. Their syn packets will be timed so that, if the timing
-is done right, each is sent just before the other peer’s packet is
-expected to be received.
-
-Our sockets will always have a shared secret associated, which proves
-identity and enables encrypted communication, but which cannot be used to
-prove identity to a third party. The initial handshake will exchange
-transient public keys, which generate a transient shared secret, which is
-used to encrypt the exchange of durable public keys, which establish a
-shared secret based on both the durable and transient keys, establishing
-forward secrecy, and failing to establish identity to third parties.
-
-Since setting up a shared secret is costly, this creates the opportunity
-for syn flood attacks, therefore the syn-ack will always be a syn cookie,
-structured rather like existing syn cookies, a cryptographic hash of the
-syn based on an unshared secret known only to the server; plus it will
-always have a proof of work request, which may be zero; and it will have a
-list of supported protocols if the protocol proposed in the initial syn is
-unacceptable. The proof of work will be that the hash of the client ack
-must have a certain number of zeros, and the ack must contain the
-cryptographic cookie, and the data that the server checks the cookie
-against.
-
-TCP was designed around the case of the client sending an endless stream
-of characters, typed with one finger, to a program on the server. We are
-going to design around message and response, with responses not
-necessarily returning in order.
-
-The client sends a message from a durable public key to a durable public
-key. The creation and destruction of such connections is not tightly
-linked to messaging. If a connection exists, it is used. If it does not
-exist, it is created. It may be torn down after a while of being unused,
-but the tear down is not tightly linked to message completion.
-
-In TCP a count is kept of bytes sent and bytes received, with an ack
-counting as one byte.
-
-We need a count for each packet, since packets can arrive out of order,
-repeated, or missing. The count values will be sequential nonces for the
-encryption, and will start at one. As the count can potentially grow quite
-large, the count value will be windowed, but, unlike TCP, the windowed
-count represents a potentially much larger absolute count known by both
-ends.
-
-Negotiating a window size is hard, since you do not really know in advance
-what window size will be needed. The thirty two bit window is adequate for
-all normal uses, but fails in special and important uses.
-
-We will specify the window size in each packet, with the high order bit of
-each byte in the nonce indicating whether there is another seven bits in
-the nonce window, so that we can dynamically adjust the window size. We
-dynamically adjust the window size to be big enough to exclude ambiguity.
-Which, for the first 128 packets, and on a connection that is not very
-busy, for all packets, will be seven windowed count bits and one window
-size bit.
-
-The window needs to be large enough to exclude the ambiguity of delayed
-and duplicated packets wandering in late, so it has to be several times
-larger than the difference between the most recently acked value and the
-value that will fill the reception window. Thirty two times larger should
-be ample. At the start, there are no early packets capable of wandering in
-late, so big enough to hold the full count always suffices.
-
-If `a` represents a recent nonce, `n` represents the nonce, `w` represents
-the windowed nonce, and `M` represents the window mask, communicated in
-each packet in unary, then:
-
-`w = n&M`
-
-`n = ((w − a)&M) + a`
-
-We use a window large enough to give the same answer on both the most
-recently acked nonce, and the most recently sent nonce.
-
-The nonce will serve the dual purpose of enabling the decryption of each
-packet, and flow control. Each packet has a sequential nonce; we make sure
-all packets are acked. Nonces on packets coming from the client refer to a
-different shared secret than nonces on packets coming from the server.
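-
-A hedged sketch of that windowing, directly implementing the formulas
-above; the encoding (seven nonce bits per byte, high bit meaning “another
-byte follows”, so the mask is communicated in unary) is as described, and
-the rest is illustrative:
-
-```c++
-// Sketch only: windowed nonce encode/decode.
-#include <cstdint>
-#include <vector>
-
-// Encode the low 7*n_bytes bits of nonce n, high bit of each byte set on
-// all but the last byte.
-std::vector<std::uint8_t> encode_windowed(std::uint64_t n,
-                                          unsigned n_bytes) {
-    std::vector<std::uint8_t> out;
-    for (unsigned i = 0; i < n_bytes; ++i) {
-        std::uint8_t b = n & 0x7f;
-        n >>= 7;
-        if (i + 1 < n_bytes) b |= 0x80;  // more bytes follow
-        out.push_back(b);
-    }
-    return out;
-}
-
-// Decode: w is the windowed value, M the mask implied by the length, and
-// a a recent nonce known to both ends; n = ((w - a) & M) + a.
-std::uint64_t decode_windowed(const std::vector<std::uint8_t> &wire,
-                              std::uint64_t a) {
-    std::uint64_t w = 0, M = 0;
-    unsigned shift = 0;
-    for (std::uint8_t b : wire) {
-        w |= std::uint64_t(b & 0x7f) << shift;
-        M |= std::uint64_t(0x7f) << shift;
-        shift += 7;
-    }
-    return ((w - a) & M) + a;  // unsigned wraparound does the mod for us
-}
-```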
-
-## API
-
-To send a message, you will construct a response handler if you are
-expecting a response, and then call the api with a network address, the
-public key of the recipient, an identifying secret key and public key of
-the sender, a timeout for attempting to connect, and flags permitting
-direct connection, rendezvous connection, retransmit, and store and
-forward. If a response is expected for the message, give the expected
-lifetime for the response handler, a nonce for the response handler, and a
-class identifier for the nonce (the nonce only has to be unique within the
-class). You will probably use a different nonce population for messages
-that have to be handled promptly, messages that have to be handled within
-a session, and non volatile nonces that survive between sessions. Nonce
-populations can be windowed per class identifier, with a window large
-enough to accommodate the timeout, and a different class identifier for
-volatile and non volatile nonces. The nonce is used once within a window
-and within a class, but can be re-used in another class and another
-window.
-
-The application code is event oriented, like gui code. It is driven by a
-message pump, with constructors creating event handlers, and the events
-driving the event handlers through the message pump; an event handler, on
-being fired, creates new event handlers and fires old event handlers.
-
-When the application needs to perform a task that spans many events, it
-does not call `yield` or `await`, but instead the event handler for each
-event constructs or enables the next event handler. If it needs to push
-information onto a stack between events, it has its own explicit stack for
-its own multi event task, or creates a linked list of event handlers. Non
-volatile event handlers must be trivial C++ classes, and therefore cannot
-contain an `std::stack`.
-
-State that would be on the stack in synchronous code is in the event
-handler in asynchronous code. This potentially gets messy if you are
-processing an endless stream of structured data whose structure is
-orthogonal to message boundaries. Since we allow arbitrary length
-messages, don’t do that.
-
-Notification of message failure may occur any time within the lifetime of
-the response handler, but will mostly happen within the timeout for
-attempting to connect.
-
-The usual flow of control will be: create an event handler, assign a nonce
-to it (fire it), and then it gets triggered when the event actually
-happens, and is then usually destroyed. Events will usually create and
-fire new events and trigger events that existed before they were created,
-rather than changing their state.
-
-Below the api, additional messages, using low numbered message response
-classes, may be constructed for encryption and flow control. If an
-encrypted connection exists, it will use that without constructing
-additional messages. If it does not exist, it will construct it.
-
-Constructing an encrypted connection provides perfect forward secrecy
-between one connection and the next by generating new random session keys
-each time.
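-
-A hedged sketch of what such an api might look like; every name and
-parameter here is illustrative, not a committed interface:
-
-```c++
-// Sketch only: send a message to a durable public key, with the reply
-// routed to a response handler identified by (class, nonce).
-#include <chrono>
-#include <cstddef>
-#include <cstdint>
-#include <functional>
-#include <span>
-#include <vector>
-
-enum ConnectFlags : unsigned {
-    direct = 1, rendezvous = 2, retransmit = 4, store_and_forward = 8
-};
-
-struct Outcome {           // reply, could-not-open, timed-out, and so on
-    int status;
-    std::vector<std::byte> reply;
-};
-
-void send_message(
-    std::span<const std::byte> network_address,  // hint, may be stale
-    std::span<const std::byte> recipient_durable_public_key,
-    std::span<const std::byte> sender_identity_keys,
-    std::span<const std::byte> payload,
-    std::chrono::milliseconds connect_timeout,
-    std::chrono::milliseconds socket_idle_life,
-    unsigned flags,
-    std::uint32_t handler_class,  // nonce class identifier
-    std::uint64_t handler_nonce,  // unique within class and window
-    std::chrono::milliseconds handler_life,
-    std::function<void(Outcome)> response_handler);
-```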
-
-## Reliability and flow control
-
-TCP achieves reliable transmission with acks and nacks.
-
-The original design simply acked that all bytes had been received up to a
-certain byte (not exactly bytes, because acks and nacks are counted). If
-the transmitter has transmitted stuff, and not received an ack for what it
-transmitted, it sends a nack, after a timeout. The receiver may resend
-acks.
-
-This mechanism worked fine on short thin pipes, but if you have a million
-packets in flight, and packet three hundred thousand gets lost, you then
-have to send seven hundred thousand packets to replace one packet. So the
-duplicate ack possibility was tortured to create a half assed version of
-selective acknowledgment. If the receiver receives packets 100 and 101,
-but not packet 99, it sends duplicate acks for packet 98. If the sender
-receives three duplicate acks for packet 98, it retransmits packet 99 (two
-duplicate acks could be just the normal randomness).
-
-[QUIC], however, has a fix for this built in.
-
-Obviously true selective acknowledgment is better. The receiver acks the
-most recent received packet, and sends a list of missing packets prior to
-this (acks a windowed value for the most recent packet, and the difference
-between packet nonces for missing packets). The sender resends the missing
-packets, except for the most recent missing packets. If they are still
-missing, they will be caught on the next ack.
-
-In each ack, the receiver tells the sender how much more data it can
-receive before it sends the next ack. This prevents the receiver from
-being flooded, but a more common problem is the pipe being flooded.
-
-To handle pipe flooding, the sender has a timer. If it sends stuff, and
-does not get an ack, it backs off: it sets the timer to a slower rate, and
-retransmits with a nack. The initial timer value is
-$SRTT + max(G, 4 \times RTTVAR)$, the smoothed round trip time plus the
-larger of the clock granularity and four times the round trip time
-variance.
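-
-A hedged sketch of that timer computation, following the standard TCP
-rules the formula comes from (RFC 6298); the smoothing constants are the
-standard ones, not something this design commits to:
-
-```c++
-// Sketch only: retransmission timeout per RFC 6298.
-//   RTTVAR = (1 - 1/4) * RTTVAR + (1/4) * |SRTT - R'|
-//   SRTT   = (1 - 1/8) * SRTT   + (1/8) * R'
-//   RTO    = SRTT + max(G, 4 * RTTVAR)
-#include <algorithm>
-#include <cmath>
-
-struct RttEstimator {
-    double srtt = 0, rttvar = 0;
-    bool first = true;
-    double granularity;  // clock granularity G, in seconds
-
-    explicit RttEstimator(double g = 0.001) : granularity(g) {}
-
-    void sample(double r) {  // r: measured round trip time, in seconds
-        if (first) { srtt = r; rttvar = r / 2; first = false; return; }
-        rttvar = 0.75 * rttvar + 0.25 * std::abs(srtt - r);
-        srtt = 0.875 * srtt + 0.125 * r;
-    }
-    double rto() const {
-        return srtt + std::max(granularity, 4 * rttvar);
-    }
-};
-```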
-
-TCP flow control focuses on getting a segment complete and acknowledged,
-so it can move on to the next segments. It may have a great many packets
-in flight, but does not have too many segments in flight. The backoff
-algorithm is linked with the push segments algorithm. You only push the
-segment the receiver has asked for in his previous acknowledgment. So you
-typically have the segment you are finalizing, the segment that is in
-flight, and the segment that the receiver asked for.
-
-The algorithm is that the sender gets an ack that acknowledges what the
-receiver has received, and tells the sender how much more the receiver can
-receive. Whereupon the sender resends anything missing, and resumes
-pushing new stuff up to the limit that the receiver has specified, spread
-out roughly evenly over the timer period. Which implies that the receiver
-should ask wisely, as well as the sender send wisely.
-
-Implementing our own flow control sounds like a lot of work. We need to
-lift [QUIC]’s flow control, and drop our own encryption and attack
-resistance into it, while letting it worry about flow control. I can hack
-into its library, while I cannot hack into the TCP library.
-
-I have been analysing how TCP works, with a view to what needs fixing.
-Time to analyse how something works for which I have a library and example
-code.
-
-Best (because smallest and least married to HTTP/3) is [picoquic].
-
-[picoquic]: https://github.com/private-octopus/picoquic
-
-The TCP state machine assumes that the server opens a connection on
-receiving a syn, sends an ack-syn to the client, whereupon the client acks
-the connection. But if we are using syn cookies, we are using a different
-state machine, where the connection is in fact only opened on receiving
-the server syn-ack cookie in the client ack. So the server has to
-acknowledge the connection, which would make it a four step handshake
-instead of a three step handshake. To avoid this, we have a rule that the
-client only opens a connection when it has data ready to send. It then
-gets a server cookie, and sends the cookie-ack with some data, which data
-the server acks.
-
-With the cookie ack, we get a round trip time and an offset between server
-steady time and client steady time. If we see unstable round trip times,
-we suspect the pipe is overloaded, and back off our estimate of max
-bandwidth. For flow control, we maintain an estimate of pipe length and
-width. Sudden pipe widenings indicate an overflow condition, because pipes
-may respond to overflow by massively discarding packets, or massively
-backing up packets, or quite possibly both. We maintain a probability
-estimate of the pipe behaviour.
-
-## Outline protocol
-
-A packet protocol that establishes an encrypted connection on top of
-unreliable packets with minimal round trips, without increasing fragility
-to DoS.
-
-For servers, public keys, globally human readable names, the key owning
-the name, and the temporary key signed by the key owning the name, will
-usually be public and widely known, but this also supports the case of
-communication where this information is only known to the parties, and the
-server does not want to make the connection between a network address and
-a public key widely known.
-
-To establish a connection, we need to set a bunch of values specific to
-this particular channel, and also create a shared secret that
-eavesdroppers and active attackers cannot discover.
-
-The client is the party that initiates the communication; the server is
-the party that responds.
-
-I assume a mode that provides both authentication and encryption – if a
-packet decrypts into a valid message, this shows it originated from an
-entity possessing the shared secret. This does not provide signing – the
-recipient cannot prove to a third party that he received it, rather than
-making it up.
-
-For the moment I ignore the hard question of server key distribution,
-glibly invoking Zooko’s triangle without proposing an implementation of
-the other two points and three sides of the triangle, or a solution to the
-problem of managing distributed reputations in Zooko’s triangle. (Be
-warned that whenever people charge ahead without solving the key
-distribution problem, the result is a disaster.)
-
-Client 🠆 Server: Equivalent to the syn of the three phase TCP
-handshake.
-
-> Client’s network address and port on which client will receive
-> packets, protocol identifier, and client steady time at which the
-> message was sent.
-
-If the requested protocol is not OK, we go into protocol negotiation: the
-server responds with a list of protocols and protocol versions that it
-will accept, in the form of a list of lists of numbers.
-
-Assuming it is OK, which it probably will be, the server allocates
-nothing, prepares nothing, but sends the equivalent of a TCP ack-syn
-cookie, containing, among other things, a cryptographic hash of the
-information that was received and sent, based on a private secret known
-only to the server. It sends a transient public key, which changes every
-few minutes or so, plus a short windowed id for that transient public key,
-and a demand for proof of work, which may be zero. The proof of work is
-that the client’s ack, the equivalent of the third phase of the TCP
-handshake, has to hash to a value ending in `n` zero bits, where `n` may
-be zero.
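-
-A hedged sketch of that cookie and that check, assuming libsodium’s
-generic hash (BLAKE2b); the field layout and the use of a coarse
-timestamp to bound cookie lifetime are illustrative:
-
-```c++
-// Sketch only: stateless syn-ack cookie, and trailing zero bits pow check.
-#include <sodium.h>
-#include <cstddef>
-#include <cstdint>
-
-// Keyed hash over the client syn plus a coarse timestamp, keyed by a
-// secret known only to the server.  Sent to the client and sent back
-// unchanged; the server stores nothing in between.
-void make_cookie(unsigned char cookie[crypto_generichash_BYTES],
-                 const unsigned char key[crypto_generichash_KEYBYTES],
-                 const unsigned char *syn, std::size_t syn_len,
-                 std::uint64_t coarse_time) {
-    crypto_generichash_state st;
-    crypto_generichash_init(&st, key, crypto_generichash_KEYBYTES,
-                            crypto_generichash_BYTES);
-    crypto_generichash_update(&st, syn, syn_len);
-    crypto_generichash_update(&st,
-        reinterpret_cast<const unsigned char *>(&coarse_time),
-        sizeof coarse_time);
-    crypto_generichash_final(&st, cookie, crypto_generichash_BYTES);
-}
-
-// True if the hash of the client ack ends in n zero bits; n may be zero.
-bool pow_ok(const unsigned char *ack, std::size_t len, unsigned n) {
-    unsigned char h[crypto_generichash_BYTES];
-    crypto_generichash(h, sizeof h, ack, len, nullptr, 0);
-    std::size_t i = sizeof h;
-    for (; n >= 8; n -= 8)
-        if (h[--i] != 0) return false;
-    return n == 0 || (h[i - 1] & ((1u << n) - 1)) == 0;
-}
-```
-
-On receiving the ack, the server recomputes the cookie for the recent
-coarse time values it would have used, and allocates memory only after
-both checks pass.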
-
-This cryptographic hash, based on an unshared secret, will be sent to the
-client, and then back to the server, unchanged. Its function is to avoid
-the necessity for the server to allocate memory or perform asymmetric
-cryptographic operations for a client that has not yet validated. Instead
-the state information is sent back and forth.
-
-1. Server 🠆 Client: Equivalent to the syn-ack of the three phase TCP
-   handshake.
-
-    Cryptographic hash based on the unshared secret, server steady time,
-    transient public key, server windowed identifier of the server
-    transient public key, proof of work demand, and any channel
-    parameters.
-
-    The proof of work is trivial if the server is not under load, but is
-    increased as the server load approaches the maximum the server is
-    capable of, in order to throttle demand.
-
-    Client computes the transient handshake shared secret as its transient
-    private key times the server shared transient public key. It returns
-    in the clear a copy of the cryptographic hash that the server sent to
-    it, with the data in the clear needed to validate the hash, performs
-    the proof of work, and sends its public key, which may be a per server
-    durable public key, always used when accessing this server on this
-    identity, encrypted using the transient key, and the public key it
-    wants to talk to on the server.
-
-    Subsequent information is not encrypted using the transient keys
-    alone, but using the sum of the transient plus durable keys.
-
-    This implies that the client has to know the public key that the
-    server is using, which may be a key signed by the master public key
-    that owns the name, authorizing that new key, which key changes about
-    as often as the server IP changes, and is therefore distributed in the
-    same channel as the network address associated with global human names
-    is distributed. If the client gets it wrong, then the server ignores
-    the information encrypted to the wrong public key, and responds with
-    the authentication of its new public key, signed by the master public
-    key of its globally unique name, encrypted using the transient secret
-    – this is usually public information, but since by this point we have
-    established a shared secret and allocated memory, we might as well
-    send it securely, for sometimes it is going to be private information.
-
-1. Client 🠆 Server: Equivalent to the final ack of the three phase TCP
-   handshake.
-
-    Sends in the clear the server hash as received, any data needed to
-    reconstruct the hash, and its transient public key. Then, encrypted to
-    the transient keys, the hash of the identifier of the public key it
-    wants to talk to, its durable public key, and the client steady time
-    at which this was sent, so that both sides have an estimate of the
-    round trip time and the offset between server steady time and client
-    steady time.
-
-    Server checks the proof of work, checks the cryptographic hash against
-    the data in the clear, *then* creates an entry in its hash table for
-    this connection, with the shared secret being derived from the
-    transient keys plus the durable keys.
-
-We have two protocols, one for the authenticated phase, and one for the
-unauthenticated phase. The client has to know one of the unauthenticated
-protocols offered by the server, or else protocol negotiation will fail in
-the abnormal case that protocol negotiation is needed.
-Normally there will only be one protocol for secured but unauthenticated
-communication during setup, but we make provision by having two protocols,
-trivially different, and three protocols, trivially different, for the
-authenticated phase.
-
-You will notice that the server only allocates memory and performs
-asymmetric encryption computation *after* the client has successfully
-performed the proof of work and shown that it is indeed capable of
-receiving data sent to the advertised network address.
-
-In the normal case, the client requests one way authenticated encryption
-in the syn, where the server authenticates but the client does not, and
-the server may, and usually will, offer in the syn-ack only two way
-authenticated encryption, where the client provides an identity unique to
-that server and the user’s current default name, but which cannot be used
-to identify the default name, nor the same user accessing a different
-website. This allows the server to see that the same user is accessing
-different resources, how many uniques the server has, and what each unique
-is doing, but does not enable the servers to put their heads together and
-see that the same user is doing things on one server, and also on another
-server.
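-
-A hedged sketch of one way to get such per server identities (an
-assumption for illustration, not the document’s specified construction):
-derive the per server key from a master secret and the server’s name, so
-the same user always presents the same key to the same server, and
-unlinkable keys to different servers.
-
-```c++
-// Sketch only: per server durable identity key derived from a master
-// secret and the server name, using libsodium’s ristretto255 group.
-#include <sodium.h>
-#include <string>
-
-void per_server_identity(
-    unsigned char pub[crypto_core_ristretto255_BYTES],
-    unsigned char scalar[crypto_core_ristretto255_SCALARBYTES],
-    const unsigned char master[crypto_generichash_KEYBYTES],
-    const std::string &server_name) {
-    // Hash the master secret and server name to 64 bytes, and reduce to
-    // a scalar; same inputs, same identity; different server, unlinkable.
-    unsigned char h[64];
-    crypto_generichash(h, sizeof h,
-                       reinterpret_cast<const unsigned char *>(
-                           server_name.data()),
-                       server_name.size(), master,
-                       crypto_generichash_KEYBYTES);
-    crypto_core_ristretto255_scalar_reduce(scalar, h);
-    crypto_scalarmult_ristretto255_base(pub, scalar);
-    sodium_memzero(h, sizeof h);
-}
-```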
-
-Now we have a shared secret, protocol negotiated, client logged in, in one
-round trip plus the third one way trip carrying the actual data – the same
-number of round trips as when setting up an unencrypted unauthenticated
-TCP connection.
-
-You will notice there is no explicit step checking that both have the same
-shared secret – this is because we assume that each packet sent is also
-authenticated by the shared secret, so if they do not have the same
-secret, nothing will authenticate.
-
-# Critiques of TCP/SSL
-
-Does the job so badly that using a different method is just as plausible.
-People fight to avoid TLS already; they’d rather send stuff in the clear
-if they could. So just solve the problems they have.
-
-In Web Services we frequently require message layer security in addition
-to transport layer security, because a Web Service transaction might
-involve more than two endpoints and messages that are stored and
-forwarded, etc. This is why WS-\* is not TLS. (It is unfortunately
-horribly baroque, but that was not my doing.)
-
-The problem that occurred with TLS was that there was an assumption that
-the job was to secure the reliable stream connection mechanics of TCP.
-False assumption.
-
-Pretty much nobody uses streams by design; they use datagrams. And they
-use them in a particular fashion: request-response. Where we went wrong
-with TCP was that this was the easiest way to handle the mechanics of
-getting the response back to the agent that sent the request. Without TCP,
-one had to deal with the raw incoming datagrams and allocate them to the
-different sending agents.
-
-A second problem was that the design was too intertwined with commercial
-PKI, so certs were hung on the side as a millstone for server
-authentication, and discarded client side, leaving passwords to fill that
-gap. A mess, which is an opportunity for redesign, frequently exploited by
-many designs already.
-
-SSL came at this and built a message (record) interface on top of TCP
-(because that was convenient for defining a crypto layer), and then a
-(mainly) stream interface on top of its message interface – because
-programmers were by now familiar with streams, not records.
-
-And so … here we are. Living in a city built on top of generations of
-older cities. Dig down and see the accreted layers.
-
-What *is* the “right” (easiest to use correctly, hardest to use
-incorrectly, with good performance, across a large number of distinct
-application APIs) underlying interface for a secure network link? The fact
-that the first thing pretty much all APIs do is create a message structure
-on top of TCP makes it clear that “pure stream” isn’t it. Record-oriented
-designs derived from 80-column punch cards are unlikely to be the answer
-either. What a “clean slate” interface would look like is an interesting
-question, and perhaps it’s finally time to explore it.
-
-# General and unorganized comments
-
-µTP, Micro Transport Protocol, is a Bittorrent near drop in replacement
-for TCP that provides lower priority bulk downloads in the background. The
-library is not well documented (header file plus examples), but as far as
-I can see, it provides a reasonably clean separation between Bittorrent
-and the transport mechanism.
-
-Google has a TCP/SSL replacement, [QUIC], which avoids round tripping and
-renegotiation by integrating the security layer with the reliability
-layer, and by supporting multiple asynchronous streams within a stream.
-
-Layering a new peer-to-peer packet network over the Internet is simply
-what the Internet is designed for. UDP is broken in a few ways, but
-nothing that can’t be fixed. It’s simply a matter of time before a new
-virtual packet layer is deployed – probably one in which authentication
-and encryption are inherent.
-
-For authentication and encryption to be inherent, it needs to connect
-between public keys, and needs to be based on Zooko’s triangle. It also
-needs to penetrate firewalls, and do protocol negotiation with an
-unlimited number of possible protocols – avoiding that internet names and
-numbers authority.
-
-Ian Grigg: “Good protocols divide into two parts, the first of which says
-to the second, trust this key completely!”
-
-This might well be the basis of a better problem factorization than the
-layer factorization – divide the task by the way trust is embodied, rather
-than on the basis of layered communication.
-
-Trust is an application level issue, not a communication layer issue, but
-neither do we want each application to roll its own trust cryptography –
-which at present web servers are forced to do. (Insert my standard rant
-against SSL/TLS.)
-
-Most web servers are vulnerable to attacks akin to the session cookie
-fixation attack, because each web page reinvents session cookie handling,
-and even experts in cryptography are apt to get it wrong.
-
-The correct procedure is to generate and issue a strongly unguessable
-random https only cookie on successful login, representing the fact that
-the possessor of this cookie has proven his association with a particular
-database record, but very few people, including very few experts in
-cryptography, actually do it this way. Association between a client
-request and a database record needs to be part of the security system. It
-should not be something each web page developer is expected to build on
-top of the security system.
-
-TCP constructs a reliable pipeline stream connection out of unreliable
-packet connections.
-
-There are a bunch of problems with TCP. No provision was made for
-protocol negotiation, and so any upgrade has to be fully backwards
-compatible.
-A number of fixes have been made; for example, the long fat pipe problem
-has been fixed by window size negotiation, which is semi incompatible and
-leads to flaky behaviour with old style routers, but the transaction
-problem remains intolerable. The transaction problem has been reduced by
-protocol level workarounds, such as “keep alive” for HTTP, but these are
-not entirely satisfactory. The fix for syn flooding works, but causes some
-minor unnecessary degradation of performance under syn flood attacks,
-because the syn cookie is limited to 48 bits – it needs to be 128 bits,
-both to deal with the syn flood attack, and to prevent TCP hijacking.
-
-TCP is inefficient over wireless, because interference problems are
-rather different to those provided for in the TCP model. This problem is
-pretty much insoluble because of the lack of protocol negotiation.
-
-There are cases intermediate between TCP and UDP, which require different
-balances of timeliness, reliability, streaming, and record boundary
-distinction. DCCP and SCTP have been introduced to deal with these
-intermediate cases: SCTP for when one has many independent transactions
-running over a single connection, and DCCP for data where time
-sensitivity matters more than reliability, such as voice over IP. SCTP
-would have been better for HTML and HTTP than TCP is, though it is a bit
-difficult to change now. Problems such as a password-authenticated key
-agreement transaction to a banking site require something that resembles
-encrypted SCTP, analogous to the way that TLS is encrypted TCP, but
-nothing like that exists as yet. Standards exist for encrypted DCCP,
-though I think the standards are unsatisfactory, and suspect that each
-vendor will implement his own incompatible version, each of which will
-claim to conform to the standard.
-
-But a new threat has arrived: TCP man in the middle forgery.
-
-Connection providers, such as Comcast, frequently sell more bandwidth
-than they can deliver. To curtail customer demands, they forge connection
-shutdown packets (reset packets), to make it appear that the nodes are
-misbehaving, when in fact it is the connection between nodes, the
-connection that Comcast provides, that is misbehaving. Similarly, the
-great firewall of China forges reset packets when Chinese connect to web
-sites that contain information that the Chinese government does not
-approve of. Not only does the Chinese government censor, but it is able
-to use a mechanism that conceals the fact of censorship.
-
-The solution to all these problems is to have protocol negotiation,
-standard encryption, and flow control inside the encryption.
-
-A problem with the OSI layer model is that as one piles one layer on top
-of another, one is apt to get redundant round trips.
-
-According to [google research], 400 milliseconds of delay reduces usage
-by 0.76%, or roughly two percent per second of delay.
-
-[google research]: http://googleresearch.blogspot.com/2009/06/speed-matters.html
-
-Redundant round trips become an ever more serious problem as bandwidths
-and processor speeds increase, but round trip times remain constant,
-indeed increase, as we become increasingly global and increasingly rely
-on space based communications.
-
-Used to be that the biggest problem with encryption was the asymmetric
-encryption calculations – the PKI model has lots and lots of redundant
-and excessive asymmetric encryptions. It also has lots and lots of
-redundant round trips.
-Now that we can use the NVIDIA GPU with CUDA as a very high speed cheap
-massively parallel cryptographic coprocessor, excessive PKI calculations
-should become less of a problem, but excess round trips are an ever
-increasing problem.
-
-Any significant authentication and encryption overhead will result in
-people being too clever by half, and only using encryption and
-authentication where it is needed, with the result that they invariably
-screw up and fail to use it where it is needed – for example the login on
-the http page. So we have to lower the cost of encrypted authenticated
-communications, so that people can simply encrypt and authenticate
-everything without needing to think about it.
-
-To get stuff right, we have to ditch the OSI layer model, but simply
-ditching it without replacement will result in problems. It exists for a
-reason, and we have to replace it with something else.
diff --git a/docs/rootDocs/README.md b/docs/rootDocs/README.md
index d54f3ce..2657d86 100644
--- a/docs/rootDocs/README.md
+++ b/docs/rootDocs/README.md
@@ -55,6 +55,8 @@ entryists and shills, who seek to introduce backdoors.
 
 This may be inconvenient if you do not have `gpg` installed and set up.
 
+It also means that subsequent pulls and merges will require you to have `gpg` trust the key `public_key.gpg`, and if you submit a pull request, the puller will need to trust your `gpg` public key.
+
 `.gitconfig` adds several git aliases:
 
 1. `git lg` to display the gpg trust information for the last four commits.
diff --git a/docs/set_up_build_environments.md b/docs/set_up_build_environments.md
index e1628a3..925ad0b 100644
--- a/docs/set_up_build_environments.md
+++ b/docs/set_up_build_environments.md
@@ -49,7 +49,7 @@ To install guest additions on Debian:
 ```bash
 su -l root
-apt-get -qy update && apt-get -qy install build-essential module-assistant git dialog rsync
+apt-get -qy update && apt-get -qy install build-essential module-assistant git sudo dialog rsync
 apt-get -qy full-upgrade
 m-a -qi prepare
 mount -t iso9660 /dev/sr0 /media/cdrom
@@ -3104,8 +3104,31 @@ Under Mate and KDE Plasma, bitcoin implements run-on-login by generating a
 It does not, however, place the `bitcoin.desktop` file in any of the
 expected other places. Should be in `/usr/share/applications`
-with its `Categories=` entry set to whatever Wasabi sets its
-`Categories=` entry to.
+
+The following works:
+
+```config
+$ cat ~/.local/share/applications/bitcoin.desktop
+[Desktop Entry]
+Type=Application
+Name=Bitcoin
+Exec=/home/cherry/bitcoin-22.0/bin/bitcoin-qt -min -chain=main
+GenericName=Bitcoin core peer
+Comment=Bitcoin core peer.
+Icon=/home/cherry/bitcoin-22.0/bin/bitcoin-qt
+Categories=Office;Finance
+Terminal=false
+Keywords=bitcoin;crypto;blockchain;qwe;asd;
+Hidden=false
+
+$ cat ~/.config/autostart/bitcoin.desktop
+[Desktop Entry]
+Type=Application
+Name=Bitcoin
+Exec=/home/cherry/bitcoin-22.0/bin/bitcoin-qt -min -chain=main
+Terminal=false
+Hidden=false
+```
 
 Under Mate and KDE Plasma, bitcoin stores its configuration data in
 `~/.config/Bitcoin/Bitcoin-Qt.conf`, rather than in `~/.bitcoin`
diff --git a/docs/social_networking.md b/docs/social_networking.md
index b319b51..8aab0e7 100644
--- a/docs/social_networking.md
+++ b/docs/social_networking.md
@@ -16,7 +16,7 @@ We also have a crisis of shills, spamming, and scamming.
 [lengthy battleground report]: images/anon_report_from_people_who_tried_to_keep_unmoderated_discussion_viable.webp
 "anon report from people who tried to keep unmoderated discussion viable"
-{target="blank"}
+{target="_blank"}
 
 Here is a [lengthy battleground report] from some people who were on the
 front lines in that battle, and stuck it out a good deal longer than I did.
 
@@ -825,6 +825,104 @@ to the stake.
 
 # Many sovereign corporations on the blockchain
 
+We have to do something about the enemy controlling speech. No
+namefag can mention or acknowledge any important truth, as more
+and more things, even in science, technology, and mathematics,
+become political. Global warming and the recent horrifyingly
+ineffective and dangerous vaccinations are just the tip of the iceberg –
+every aspect of reality is being politicized. Our capability to monitor
+reality is being rapidly and massively eroded.
+
+Among those capabilities are bookkeeping and accounting, which are
+becoming Environmental, Social, and Governance, and increasingly
+detached from reality. The latter is an area of truth that can get us
+paid for securing the capability to communicate truth. Information
+wants to be free, but programmers want to be paid.
+
+Increasingly, the value of shares is not physical things, but “goodwill”.
+
+Domino’s does not sell pizzas, and Apple does not sell computers. Each
+sets standards, and sells the expectation that stuff sold with its
+brand name will conform to expectations. Domino’s does not make the
+pizza dough, does not make the pizzas. It sells the brand.
+
+The latest, and one of the biggest, jewels in Apple’s tech crown, at
+the time of writing, is the M1 chip. Which is *designed* by Apple. It
+is not *built* by Apple. And similarly, if you buy a Domino’s pizza,
+it was cooked according to a Domino’s recipe from Domino’s approved
+ingredients. But it was not cooked in a Domino’s owned oven, was not
+cooked by a Domino’s employee, and it is unlikely that any of the
+ingredients were ever anywhere near Domino’s owned physical property
+or a Domino’s direct employee. Domino’s does not cook pizzas, and
+Apple does not build computers. Apple designs computers and sets
+standards.
+
+Most businesses are in practice distributed over a network, and their
+capital is less and less big centralized physical things like steel
+refineries that can easily be found and coerced, and more and more
+“goodwill”. “Goodwill” being the network of relationships and social
+roles within the corporation and between its customers and suppliers,
+and customer and supplier expectations of employee roles enforced by
+the corporation. *This*, we can move to the blockchain and protect
+from governments.
+
+A huge amount of what matters, a major proportion of the value
+represented by shares, is in the social network. Which is
+increasingly, like Apple and Google, scarcely attached to anything
+physical and coercible, and is getting less and less attached.
+
+It is mostly information, which is at present organized in a highly
+coercible form. It does not have to be.
+
+There are a whole lot of government hooks into the corporation
+associated with physical things, but as more and more capital takes
+the form of “goodwill”, the most important hooks in the Global
+American Empire are Human Resources and Accounting.
+
+We have to attach employee roles and brand names to individually held
+unshared secrets, derived from a master secret of a master wallet,
+rather than to the domain name system.
+
+With SSL, governments can, and routinely do, seize that name, but
+that is a flaw in the SSL design, the sort of thing blockchains were
+originally invented to prevent. The original concept of an
+immutable append only data structure, what we now call, not very
+accurately, a blockchain, was invented to address this flaw in SSL,
+though its first widely used useful application was bitcoin. It is now
+way past time to apply it to its original design purpose.
+
+These days, most of the wealth of the world is no longer in physical
+things, and increasingly companies are outsourcing that stuff to
+large numbers of smaller businessmen, each of whom owns some
+particular physical thing that is directly under his physical control,
+and who are therefore hard for the state to coerce. One is more
+likely to get shot when one attempts to coerce the actual owner
+who is physically present on his property than when coercing his
+employee who is supervising and guarding his property. And the
+owner can switch from Domino’s at the cost of taking down one sign
+and putting up another. (Also at the cost of confusing his customers.)
+Uber does not own any taxis, nor move any passengers, and
+Domino’s bakes no pizzas. Trucking companies are converging to
+the Uber model. Reflect on the huge revolutionary problem the
+Allende government ran into when attempting to coerce large numbers
+of truckers who each owned their own truck. The coup was in large
+part the army deciding it was easier and less disturbing to do
+something about Allende than to do something about truckers. One
+trucker is no problem. Many truckers are a problem. One Allende …
+
+The value of a business is largely the value of its social net of
+suppliers, customers, and employee roles. Our job is protecting
+social nets. Among them, social nets that will pay us.
+
+And with Information Epoch warfare looming, the surviving
+governments will likely be those that are good at protecting their
+social graph information from enemies internal and external, who
+will enjoy ever increasing capability to reach out and kill one
+particular man a thousand miles away. Though governments are
+unlikely to pay us. They are going to try to make us pay them. And
+in the end, we probably will, in return for safe locations where we
+have freedom to operate. Which we will probably lease from
+sovereign corporations who leased the physical facilities from
+small owners, and the freedom to operate from governments.
 
 ## source of corporateness
 
 State incorporated corporations derive their corporateness from the
diff --git a/docs/tim_may_on_bitcoin.html b/docs/tim_may_on_bitcoin.html
deleted file mode 100644
index d8fffa4..0000000
--- a/docs/tim_may_on_bitcoin.html
+++ /dev/null
@@ -1,145 +0,0 @@ - - - - - - - Enough with the ICO-Me-So-Horny-Get-Rich-Quick-Lambo Crypto - -


-

Enough with the ICO-Me-So-Horny-Get-Rich-Quick-Lambo Crypto

-

CoinDesk asked cypherpunk legend Timothy May, author of the “Crypto Anarchist Manifesto,” to write his thoughts on the bitcoin white paper on its 10th anniversary. What he sent back was a sprawling 30-page evisceration of a technology industry he feels is untethered from reality.

-

The original message is presented here as a fictional Q&A for clarity. The message remains otherwise unchanged. Read more in our White Paper Reflections series.

-
-

CoinDesk: Now that bitcoin has entered the history books, how do you feel the white paper fits in the pantheon of financial cryptography advances?

-

Tim: First, I’ll say I’ve been following, with some interest, some amusement and a lot of frustration for the past 10 years, the public situation with bitcoin and all of the related variants.

-

In the pantheon, it deserves a front-rank place, perhaps the most important development since the invention of double-entry book-keeping.

-

I can’t speak for what Satoshi intended, but I sure don’t think it involved bitcoin exchanges that have draconian rules about KYC, AML, passports, freezes on accounts and laws about reporting “suspicious activity” to the local secret police. There’s a real possibility that all the noise about “governance,” “regulation” and “blockchain” will effectively create a surveillance state, a dossier society.

-

I think Satoshi would barf. Or at least work on a replacement for bitcoin as he first described it in 2008-2009. I cannot give a ringing endorsement to where we are, or generate a puff-piece about the great things already done.

-

Sure, bitcoin and its variants – a couple of forks and many altcoin variants – more or less work the way they were originally intended. Bitcoin can be bought or mined, can be sent in various fast ways with small fees paid, and recipients get bitcoin that can be sold in tens of minutes, sometimes even faster.

-

No permission is needed for this, no centralized agents, not even any trust amongst the parties. And bitcoin can be acquired and then saved for many years.

-

But this tsunami that swept the financial world has also left a lot of confusion and carnage behind. Detritus of the knowledge-quake, failed experiments, Schumpeter’s “creative destructionism.” It’s not really ready for primetime. Would anyone expect their mother to “download the latest client from Github, compile on one of these platforms, use the Terminal to reset these parameters?”

-

What I see is losses of hundreds of millions in some programming screw-ups, thefts, frauds, initial coin offerings (ICOs) based on flaky ideas, flaky programming and too few talented people to pull off ambitious plans.

-

Sorry if this ruins the narrative, but I think the narrative is fucked. Satoshi did a brilliant thing, but the story is far from over. She/he/it even acknowledged this, that the bitcoin version in 2008 was not some final answer received from the gods.

-

CoinDesk: Do you think others in the cypherpunk community share your views? What do you think is creating interest in the industry, or killing it off?
-

-

Tim: Frankly, the newness in the Satoshi white paper (and then the early uses for things like Silk Road) is what drew many to the bitcoin world. If the project had been about a “regulatory-compliant,” “banking-friendly” thing, then interest would’ve been small. (In fact, there were some yawn-inducing electronic transfer projects going back a long time. “SET,” for Secure Electronic Transfer, was one such mind-numbingly-boring project.)

-

It had no interesting innovations and was 99 percent legalese. Cypherpunks ignored it.

-

It’s true that some of us were there when things in the “financial cryptography” arena really started to get rolling. Except for some of the work by David Chaum, Stu Haber, Scott Stornetta, and a few others, most academic cryptographers were mainly focused on the mathematics of cryptology: their gaze had not turned much toward the “financial” aspects.

-

This has of course changed in the past decade. Tens of thousands of people, at least, have flocked into bitcoin, blockchain, with major conferences nearly every week. Probably most people are interested in the “Bitcoin Era,” starting roughly around 2008-2010, but with some important history leading up to it.

-

History is a natural way people understand things… it tells a story, a linear narrative.

-

About the future I won’t speculate much. I was vocal about some “obvious” consequences from 1988 to 1998, starting with “The Crypto Anarchist Manifesto” in 1988 and the Cypherpunks group and list starting in 1992.

-

CoinDesk: It sounds like you don’t think that bitcoin is particularly living up to its ethos, or that the community around it hasn’t really stuck to its cypherpunk roots.

-

Tim: Yes, I think the greed and hype and nattering about “to the Moon!” and “HODL” is the biggest hype wagon I’ve ever seen.

-

Not so much in the “Dutch Tulip” sense of enormous price increases, but in the sense of hundreds of companies, thousands of participants, and the breathless reporting. And the hero worship. This is much more hype than we saw during the dot-com era. I think far too much publicity is being given to talks at conferences, white papers and press releases. A whole lot of “selling” is going on.

-

People and companies are trying to stake out claims. Some are even filing for dozens or hundreds of patents on fairly-obvious variants of the basic ideas, even for topics that were extensively-discussed in the 1990s. Let’s hope the patent system dismisses some of these (though probably only when the juggernauts enter the legal fray).

-

The tension between privacy (or anonymity) and “know your customer” approaches is a core issue. It’s “decentralized, anarchic and peer-to-peer” versus “centralized, permissioned and back door.” Understand that the vision of many in the privacy community — cypherpunks, Satoshi, other pioneers — was explicitly of a permission-less, peer-to-peer system for money transfers. Some had visions of a replacement for “fiat” currency.

-

David Chaum, a principal pioneer, was very forward-thinking on issues of “buyer anonymity.” Where, for example, a large store could receive payments for goods without knowing the identity of a buyer. (Which is most definitely not the case today, where stores like Walmart and Costco and everybody else compile detailed records on what customers buy. And where police investigators can buy the records or access them via subpoenas. And in more nefarious ways in some countries.)

-

Remember, there are many reasons a buyer does not wish to disclose buying preferences. But buyers and sellers BOTH need protections against tracking: a seller of birth control information is probably even more at risk than some mere buyer of such information (in many countries). Then there’s blasphemy, sacrilege and political activism. Approaches like Digicash concentrated on *buyer* anonymity (as with shoppers at a store or drivers on a toll-road), but were missing a key ingredient: that most people are hunted down for their speech or their politics on the *seller* side.

-

Fortunately, buyers and sellers are essentially isomorphic, just with some changes in a few arrow directions (“first-class objects”).

-

What Satoshi did essentially was to solve the “buyer”/”seller” track-ability tension by providing both buyer AND seller untraceability. Not perfectly, it appears. Which is why so much activity continues.

-

CoinDesk: So, you’re saying bitcoin and crypto innovators need to fight the powers that be, essentially, not align with them to achieve true innovation?

-

Tim: Yes, there is not much of interest to many of us if cryptocurrencies just become Yet Another PayPal, just another bank transfer system. What’s exciting is the bypassing of gatekeepers, of exorbitant fee collectors, of middlemen who decide whether Wikileaks — to pick a timely example — can have donations reach it. And to allow people to send money abroad.

-

Attempts to be “regulatory-friendly” will likely kill the main uses for cryptocurrencies, which are NOT just “another form of PayPal or Visa.”

-

More general uses of “blockchain” technology are another kettle of fish. Many uses may be compliance-friendly. Of course, a lot of the proposed uses — like putting supply chain records on various public or private blockchains — are not very interesting. Many point out that these “distributed ledgers” are not even new inventions, just variants of databases with backups. As well, the idea that corporations want public visibility into contracts, materials purchases, shipping dates, and so on, is naive.

-

Remember, the excitement about bitcoin was mostly about bypassing controls, to enable exotic new uses like Silk Road. It was some cool and edgy stuff, not just another PayPal.

-

CoinDesk: So, you’re saying that we should think outside the box, try to think about ways to apply the technology in novel ways, not just remake what we know?

-

Tim: People should do what interests them. This was how most of the innovative stuff like BitTorrent, mix-nets, bitcoin, etc. happened. So, I’m not sure that “try to think about ways” is the best way to put it. My hunch is that ideologically-driven people will do what is interesting. Corporate people will probably not do well in “thinking about ways.”

-

Money is speech. Checks, IOUs, delivery contracts, Hawallah banks, all are used as forms of money. Nick Szabo has pointed out that bitcoin and some other cryptocurrencies have most if not all of the features of gold, and more besides: it weighs nothing, it’s difficult to steal or seize and it can be sent over the crudest of wires. And in minutes, not on long cargo flights as when gold bars are moved from one place to another.

-

But, nothing is sacred about either banknotes, coins or even official-looking checks. These are “centralized” systems dependent on “trusted third parties” like banks or nation-states to make some legal or royal guaranty.

-

Sending bitcoin, in contrast, is equivalent to “saying” a number (the math is more complicated than this, but this is the general idea). To ban saying a number is equivalent to a ban on some speech. That doesn’t mean the tech can’t be stopped. There was the “printing out PGP code” episode, or the Cody Wilson, Defense Distributed case, where a circuit court ruled this way:

-

Printed words are very seldom outside the scope of the First Amendment.

-

CoinDesk: Isn’t this a good example of where you, arguably, want some censorship (the ability to enforce laws), if we’re going to rebuild the whole economy, or even partial economies, on top of this stuff?

-

Tim: There will inevitably be some contact with the legal systems of the U.S., or the rest of the world. Slogans like “the code is the law” are mainly aspirational, not actually true.

-

Bitcoin, qua bitcoin, is mostly independent of law. Payments are, by the nature of bitcoin, independent of charge-backs, “I want to cancel that transaction,” and other legal issues. This may change. But in the current scheme, it’s generally not known who the parties are, which jurisdictions the parties live in, even which laws apply.

-

This said, I think nearly all new technologies have had uses some would not like. Gutenberg’s printing press was certainly not liked by the Catholic Church. Examples abound. But does this mean printing presses should be licensed or regulated?

-

There have usually been some unsavory or worse uses of new technologies (what’s unsavory to, say, the U.S.S.R. may not be unsavory to Americans). Birth control information was banned in Ireland, Saudi Arabia, etc. Examples abound: weapons, fire, printing press, telephones, copier machines, computers, tape recorders.

-

CoinDesk: Is there a blockchain or cryptocurrency that’s doing it right? Is bitcoin, in your opinion, getting its own vision right?

-

Tim: As I said, bitcoin is basically doing what it was planned to do. Money can be transferred, saved (as bitcoin), even used as a speculative vehicle. The same cannot be said for dozens of major variants and hundreds of minor variants where a clear-cut, understandable “use case” is difficult to find.

-

Talk of “reputation tokens,” “attention tokens,” “charitable giving tokens,” these all seem way premature to me. And none have taken off the way bitcoin did. Even ethereum, a majorly different approach, has yet to see interesting uses (at least that I have seen, and I admit I don’t have the time or will to spend hours every day following the Reddit and Twitter comments).

-

“Blockchain,” now its own rapidly-developing industry, is proceeding on several paths: private blockchains, bank-controlled blockchains, public blockchains, even using the bitcoin blockchain itself. Some uses may turn out to be useful, but some appear to be speculative, toy-like. Really, marriage proposals on the blockchain?

-

The sheer number of small companies, large consortiums, alternative cryptocurrencies, initial coin offerings (ICOs), conferences, expos, forks, new protocols, is causing great confusion and yet there are new conferences nearly every week.

-

People jetting from Tokyo to Kiev to Cancun for the latest 3-5 day rolling party. The smallest only attract hundreds of fanboys, the largest apparently have drawn crowds of 8,000. You can contrast that with the straightforward roll-out of credit cards, or even the relatively clean roll-out of bitcoin. People cannot spend mental energy reading technical papers, following the weekly announcements, the contentious debates. The mental transaction costs are too high, for too little.

-

The people I hear about who are reportedly transferring “interesting” amounts of money are using basic forms of bitcoin or bitcoin cash, not exotics new things like Lightning, Avalanche, or the 30 to 100 other things.

-

CoinDesk: It sounds like you’re optimistic about the value transfer use case for cryptocurrencies, at least then.

-

Tim: Well, it will be a tragic error if the race to develop (and profit from) the things that are confusingly called “cryptocurrencies” ends up developing dossiers or surveillance societies such as the world has never seen. I’m just saying there’s a danger.

-

With “know your customer” regulations, crypto monetary transfers won’t be like what we have now with ordinary cash transactions, or even with wire transfers, checks, etc. Things will be _worse_ than what we have now if a system of “is-a-person” credentialing and “know your customer” governance is ever established. Some countries already want this to happen.

-

The “Internet driver’s license” is something we need to fight against.

-

CoinDesk: That’s possible, but you could make a similar claim about the internet: it isn’t exactly the same as the original idea, yet it’s still been useful in driving human progress.

-

Tim: I’m just saying we could end up with a regulation of money and transfers that is much the same as regulating speech. Is this a reach? If Alice can be forbidden from saying “I will gladly pay you a dollar next week for a cheeseburger today,” is this not a speech restriction? “Know your customer” could just as easily be applied to books and publishing: “Know your reader.” Gaaack!

-

I’m saying there are two paths: freedom vs. permissioned and centralized systems.

-

This fork in the road was widely discussed some 25 years ago. Government and law enforcement types didn’t even really disagree: they saw the fork approaching. Today, we have tracking, the wide use of scanners (at elevators, chokepoints), tools for encryption, cash, privacy, tools for tracking, scanning, forced decryption, backdoors, escrow.

-

In an age where a person’s smartphone or computer may carry gigabytes of photos, correspondence, business information – much more than an entire house carried back when the Bill of Rights was written – the casual interception of phones and computers is worrisome. A lot of countries are even worse than the U.S. New tools to secure data are needed, and lawmakers need to be educated.

-

Corporations are showing signs of corporatizing the blockchain: there are several large consortiums, even cartels who want “regulatory compliance.”

-

It is tempting for some to think that legal protections and judicial supervision will stop excesses… at least in the US and some other countries. Yet, we know that even the US has engaged in draconian behavior (purges of Mormons, killings and death marches for Native Americans, lynchings, illegal imprisonment of those of suspected Japanese ancestry).

-

What will China and Iran do with the powerful “know your writers” (to extend “know your customer” in the inevitable way)?

-

CoinDesk: Are we even talking about technology anymore though? Isn’t this just power and the balance of power? Isn’t there good that has come from the internet even if it’s become more centralized?

-

Tim: Of course, there’s been much good coming out of the Internet tsunami.

-

But, China already uses massive databases – with the aid of search engine companies – to compile “citizen trustworthiness” ratings that can be used to deny access to banking, hotels, travel. Social media corporate giants are eagerly moving to help build the machinery of the Dossier Society (they claim otherwise, but their actions speak for themselves).

-

Not to sound like a Leftist ranting about Big Brother, but any civil libertarian or actual libertarian has reason to be afraid. In fact, many authors decades ago predicted this dossier society, and the tools have jumped in quantum leaps since then.

-

In thermodynamics, and in mechanical systems, with moving parts, there are “degrees of freedom.” A piston can move up or down, a rotor can turn, etc. I believe social systems and economies can be characterized in similar ways. Some things increase degrees of freedom, some things “lock it down.”

-

CoinDesk: Have you thought about writing something definitive on the current crypto times, sort of a new spin on your old works?

-

Tim: No, not really. I spent a lot of time in the 1992-95 period writing for many hours a day. I don’t have it in me to do this again. That a real book did not come out of this is mildly regrettable, but I’m stoical about it.

-

CoinDesk: Let’s step back and look at your history. Knowing what you know about the early cypherpunk days, do you see any analogies to what’s happening in crypto now?

-

Tim: About 30 years ago, I got interested in the implications of strong cryptography. Not so much about the “sending secret messages” part, but the implications for money, bypassing borders, letting people transact without government control, voluntary associations.

-

I came to call it “crypto anarchy” and in 1988 I wrote “The Crypto Anarchist Manifesto,” loosely-based in form on another famous manifesto. And based on “anarcho-capitalism,” a well-known variant of anarchism. (Nothing to do with Russian anarchists or syndicalists, just free trade and voluntary transactions.)

-

At the time, there was one main conference – Crypto – and two less-popular conferences – EuroCrypt and AsiaCrypt. The academic conferences had few if any papers on any links to economics and institutions (politics, if you will). Some game theory-related papers were very important, like the mind-blowing “Zero Knowledge Interactive Proof Systems” work of Micali, Goldwasser and Rackoff.

-

I explored the ideas for several years. In my retirement from Intel in 1986 (thank you, 100-fold increase in the stock price!), I spent many hours a day reading crypto papers, thinking about new structures that were about to become possible.

-

Things like data havens in cyberspace, new financial institutions, timed-release crypto, digital dead drops through steganography, and, of course, digital money.

-

Around that time, I met Eric Hughes and he visited my place near Santa Cruz. We hatched a plan to call together some of the brightest people we knew to talk about this stuff. We met in his newly-rented house in the Oakland Hills in the late summer of 1992.

-

CoinDesk: You mentioned implications for money… Were there any inclinations then that something like bitcoin or cryptocurrency would come along?

-

Tim: Ironically, at that first meeting, I passed out some Monopoly money I bought at a toy store. (I say ironically because years later, when bitcoin was first being exchanged in around 2009-2011 it looked like play money to most people – cue the pizza story!)

-

I apportioned it out and we used it to simulate what a world of strong crypto, with data havens and black markets and remailers (Chaum’s “mixes”) might look like. Systems like what later became “Silk Road” were a hoot. (More than one journalist has asked me why I did not widely-distribute my “BlackNet” proof of concept. My answer is generally “Because I didn’t want to be arrested and imprisoned.” Proposing ideas and writing is protected speech, at least in the U.S. at present.)

-

We started to meet monthly, if not more often at times, and a mailing list rapidly formed. John Gilmore and Hugh Daniel hosted the mailing list. There was no moderation, no screening, no “censorship” (in the loose sense, not referring to government censorship, of which of course there was none.) The “no moderation” policy went along with “no leaders.”

-

While a handful of maybe 20 people wrote 80 percent of the essays and messages, there was no real structure. (We also thought this would provide better protection against government prosecution).

-

And of course this fits with a polycentric, distributed, permission-less, peer to peer structure. A form of anarchy, in the “an arch,” or “no top” true meaning of the word anarchy. This had been previously explored by David Friedman, in his influential mid-70s book “The Machinery of Freedom.” And by Bruce Benson, in “The Enterprise of Law.”

-

He studied the role of legal systems absent some ruling top authority. And of course anarchy is the default and preferred mode of most people—to choose what they eat, who they associate with, what they read and watch. And whenever some government or tyrant tries to restrict their choices they often find ways to route around the restrictions: birth control, underground literature, illegal radio reception, copied cassette tapes, thumb drives ….

-

This probably influenced the form of bitcoin that Satoshi Nakamoto later formulated.

-

CoinDesk: What was your first reaction to Satoshi’s messages, do you remember how you felt about the ideas?

-

Tim: I was actually doing some other things and wasn’t following the debates. My friend Nick Szabo mentioned some of the topics in around 2006-2008. And like a lot of people I think my reaction to hearing about the Satoshi white paper and then the earliest “toy” transactions was only mild interest. It just didn’t seem likely to become as big as it did.

-

He/she/they debated aspects of how a digital currency might work, what it needed to make it interesting. Then, in 2008, Satoshi Nakamoto released “their” white paper. A lot of debate ensued, but also a lot of skepticism.

-

In early 2009 an alpha release of “bitcoin” appeared. Hal Finney had the first bitcoin transaction with Satoshi. A few others followed. Satoshi himself (themselves?) even said that bitcoin would likely either go to zero in value or to a “lot.” I think many were either not following it or expected it would go to zero, just another bit of wreckage on the Information Superhighway.

-

The infamous pizza purchase shows that most thought of it as basically toy money.

-

CoinDesk: Do you still think it’s toy money? Or has the slowly increasing value sort of put that argument to rest, in your mind?

-

Tim: No, it’s no longer just toy money. Hasn’t been for the past several years. But it’s also not yet a replacement for money, for folding money. For bank transfers, for Hawallah banks, sure. It’s functioning as a money transfer system, and for black markets and the like.

-

I’ve never seen such hype, such mania. Not even during the dot-com bubble, the era of Pets.com and people talking about how much money they made by buying stocks in “JDS Uniphase.” (After the bubble burst, the joke around Silicon Valley was “What’s this new start-up called “Space Available”?” Empty buildings all around.)

-

I still think cryptocurrency is too complicated…coins, forks, sharding, off-chain networks, DAGs, proof-of-work vs. proof-of-stake; the average person cannot plausibly follow all of this. What use cases, really? The talk about the eventual replacement of the banking system, or credit cards, PayPal, etc. is nice, but what does it do NOW?

-

The most compelling cases I hear about are when someone transfers money to a party that has been blocked by PayPal, Visa (etc), or banks and wire transfers. The rest is hype, evangelizing, HODL, get-rich lambo garbage.

-

CoinDesk: So, you see that as bad. You don’t buy the argument that that’s how things get built though, over time, somewhat sloppily…

-

Tim: Things sometimes get built in sloppy ways. Planes crash, dams fail, engineers learn. But there are many glaring flaws in the whole ecology. Programming errors, conceptual errors, poor security methods. Hundreds of millions of dollars have been lost, stolen, locked in time-vault errors.

-

If banks were to lose this kind of money in “Oops. My bad!” situations there’d be bloody screams. When safes were broken into, the manufacturers studied the faults — what we now call “the attack surface” — and changes were made. It’s not just that customers — the banks — were encouraged to upgrade, it’s that their insurance rates were lower with newer safes. We desperately need something like this with cryptocurrencies and exchanges.

-

Universities can’t train even basic “cryptocurrency engineers” fast enough, let alone researchers. Cryptocurrency requires a lot of unusual areas: game theory, probability theory, finance, programming.

-

Any child understands what a coin like a quarter “does.” He sees others using quarters and dollar bills, and the way it works is clear.

-

When I got my first credit card I did not spend a lot of time reading manuals, let alone downloading wallets, cold storage tools or keeping myself current on the protocols. It just worked, and money didn’t just vanish.

-

CoinDesk: It sounds like you don’t like how innovation and speculation have become intertwined in the industry…

-

Tim: Innovation is fine. I saw a lot of it in the chip industry. But we didn’t have conferences EVERY WEEK! And we didn’t announce new products we had only the sketchiest ideas about. And we didn’t form new companies with such abandon. And we didn’t fund by “floating an ICO” and raising $100 million from what are, bluntly put, naive speculators who hope to catch the next bitcoin.

-

Amongst my friends, some of whom work at cryptocurrency companies and exchanges, the main interest seems to be in the speculative stuff. Which is why they often keep their cryptocurrency at the exchanges: for rapid trading, shorting, hedging, but NOT for buying stuff or transferring assets outside of the normal channels.

-

CoinDesk: Yet, you seem pretty knowledgeable on the whole about the subject area… Sounds like you might have a specific idea of what it “should” be.

-

Tim: I probably spend way too much time following the Reddit and Twitter threads (I don’t have an actual Twitter account).

-

What “should” it be? As the saying goes, the street will find its own uses for technology. For a while, Silk Road and its variants drove wide use. Recently, it’s been HODLing, aka speculating. I hear that online gambling is one of the main uses of ethereum. Let the fools blow their money.

-

Is the fluff and hype worth it? Will cryptocurrency change the world? Probably. The future is no doubt online, electronic, paperless.

-

But bottom line, there’s way too much hype, way too much publicity and not very many people who understand the ideas. It’s almost as if people realize there’s a whole world out there and thousands start building boats in their backyards.

-

Some will make it, but most will either stop building their boats or will sink at sea.

-

We were once big on manifestos. These were ways not of enforcing compliance, but of suggesting ways to proceed. A bit like advising a cat… one does not command a cat, one merely suggests ideas, which it sometimes goes with.

-

Final Thoughts:

  • Don’t use something just because it sounds cool…only use it if it actually solves some problem (To date, cryptocurrency solves problems for few people, at least in the First World).
  • Most things we think of as problems are not solvable with crypto or any other such technology (crap like “better donation systems” are not something most people are interested in).
  • If one is involved in dangerous transactions – drugs, birth control information – practice intensive “operational security”… look at how Ross Ulbricht was caught.
  • Mathematics is not the law.
  • Crypto remains very far from being usable by average people (even technical people).
  • Be interested in liberty and the freedom to transact and speak to get back to the original motivations. Don’t spend time trying to make government-friendly financial alternatives.
  • Remember, there are a lot of tyrants out there.

These documents are licensed under the Creative Commons Attribution-Share Alike 3.0 License.

- - diff --git a/docs/true_names_and_TCP.html b/docs/true_names_and_TCP.html deleted file mode 100644 index 472ed0b..0000000 --- a/docs/true_names_and_TCP.html +++ /dev/null @@ -1,75 +0,0 @@ - - - - - - - - True Names and TCP - - -

True Names and TCP -

Vernor Vinge made the point that true names are an instrument of government oppression. If the government can associate your true name with your actions, it can punish you for those actions. If it can find the true names associated with a transaction, it is a lot easier to tax that transaction.

Recently there have been moves to make your cell phone into a wallet. A big problem with this is that cell phone cryptography is broken. Another problem is that cell phones are not necessarily associated with true names, and as soon as the government hears that they might control money, it starts insisting that cell phones be associated with true names. The phone companies don’t like this, for if money is transferred from true name to true name, rather than cell phone to cell phone, it will make them servants of the banking cartel, and the bankers will suck up all the gravy. But once people start stealing money through flaws in the encryption, the phone companies will be depressingly grateful that the government can track account holders down and punish them – except, of course, that the government probably will not be much good at doing so.

TCP is all about creating connections.  It creates connections between network addresses, but network addresses correspond to the way networks are organized, not the way people are organized, so on top of networks we have domain names.

TCP therefore establishes a connection to a domain name rather than a mere network address – but there is no concept of the connection coming from anywhere humanly meaningful.
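The gap is visible in any socket API. A minimal sketch in Python, with a hypothetical host name: by the time TCP has a connection, the human-meaningful name has already been resolved away, and nothing in the connection says who, in human terms, either end is.

```python
# Minimal sketch: TCP connects network addresses, not names.
# Assumption: the host name below is hypothetical.
import socket

host = "example.com"
addrinfo = socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)
ip = addrinfo[0][4][0]                      # name reduced to an address
conn = socket.create_connection((ip, 80))   # TCP itself never sees the name
print(f"connected to {ip}; {host!r} is gone at this layer")
conn.close()
```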

Urns are “uniform resource names”, uris are “uniform resource identifiers”, and urls are “uniform resource locators”, and that is what the web is built out of.

There are several big problems with urls:

  1. They are uniform: Everyone is supposed to agree on one domain name for one entity, but of course they don’t.  There is honest and reasonable disagreement as to which jim is the “real” jim, because in truth there is no one real jim, and there is fraud, as in lots of people pretending to be Paypal or the Bank of America, in order to steal your money.

  2. They are resources: Each refers to only a single interaction, but of course relationships are built out of many interactions.  There is no concept of a connection continuing throughout many pages, no concept of logon.  In building urls on top of TCP, we lost the concept of a connection.  And because urls are built out of TCP there is no concept of the content depending on both ends of the connection – that a page at the Bank might be different for Bob than it is for Carol – the fact that in reality the page does depend on who is connected is a kludge that breaks the architecture.

     Because security (ssl, https) is constructed below the level of a connection, and because it lacks a concept of a connection extending beyond a single page or a single url, a multitude of insecurities result. We want https and ssl to secure a connection, but https and ssl do not know there are such things as logons and connections.

Because domain names and hence urls presuppose agreement, agreement which can never exist, we get cybersquatting and phishing and suchlike.

That connections and logons exist, but are not explicitly addressed by the protocol, leads to such attacks as cross site scripting and session fixation.

A proposed fix for this problem is yurls, which apply Zooko’s triangle to the web: One adds to the domain name a hash of a rule for validating the public key, making it into Zooko’s globally unique identifier.  The nickname (non-unique global identifier) is the web page title, and the petname (locally unique identifier) is the title under which it appears in your bookmark list, or the link text under which it appears in a web page.
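As a sketch of the idea, assuming for illustration that a bare SHA-256 hash of the public key stands in for the hash of the key-validation rule (names and keys hypothetical):

```python
# Minimal sketch of a Zooko-style identifier: human-readable name plus a
# hash that makes the name self-authenticating. Assumption: SHA-256 of
# the public key stands in for the hash of the key-validation rule.
import hashlib

def zooko_identifier(domain_name: str, public_key: bytes) -> str:
    key_hash = hashlib.sha256(public_key).hexdigest()[:32]
    return f"{domain_name}.{key_hash}"

bank_id = zooko_identifier("bank.example", b"hypothetical public key bytes")

# Petnames are a purely local mapping, so no global agreement is needed:
petnames = {"my bank": bank_id}
print(petnames["my bank"])
```

Whoever answers for such a name must present a key matching the hash, so no central registry need be trusted; the petname table is the user’s own, which is what dissolves the demand for global agreement.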

This, however, breaks normal form.  The public key is an attribute of the domain, while the nickname and petnames are attributes of particular web pages – a breach of normal form related to the loss of the concept of connection – a breach of normal form reflecting the fact that urls provide no concept of a logon, a connection, or a user.

OK, so much for “uniform”.  Instead of uniform identifiers, we should have zooko identifiers, and zooko identifiers organized in normal form.  But what about “resource”, for “resource” also breaks normal form.

Instead of “resources”, we should have “capabilities”.  A resource corresponds to a special case of a capability: a resource is a capability that resembles a read only file handle. But what exactly are “capabilities”?

People with different concepts about what is best for computer security tend to disagree passionately and at considerable length about what the word “capability” means, and will undoubtedly tell me I am a complete moron for using it in the manner that I intend to use it, but barging ahead anyway:

A “capability” is an object that represents one end of a communication channel, or information that enables an entity to obtain such a channel, or the user interface representation of such a channel, or such a potential channel. The channel enables the possessor of the capability to do stuff to something, or get something.  Capabilities are usually obtained by being passed along the communication channel, or obtained from other capabilities, or inherited by a running instance of a program when the program is created, or read from storage after originally being obtained by means of another capability.

This definition leaves out the issue of security – to provide security, capabilities need to be unforgeable or difficult to guess.  Capabilities are usually defined with the security characteristics central to them, but I am defining capabilities so that what is central is connections and managing lots of potential connections.  Sometimes security and limiting access is a very important part of management, and sometimes it is not.

A file handle could be an example of a capability – it is a communication channel between a process and the file management system.  Suppose we are focusing on security and access management to files: A file handle could be used to control and manage permissions if a program that has the privilege to access certain files could pass an unforgeable file handle to one of those files to a program that lacks such access, and this is the only way the less privileged program could get at those files.
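A minimal sketch of that file-handle example in Python, assuming a hypothetical path, and noting that unforgeability holds only if the less privileged code has no other route to the file system:

```python
# Minimal sketch: a file handle as a capability. Holding the object *is*
# the permission; access control becomes a question of reachability.
class ReadCapability:
    """One end of a channel to a file, restricted to reading."""
    def __init__(self, path: str):
        self._file = open(path, "rb")       # only privileged code runs this
    def read(self, n: int = -1) -> bytes:
        return self._file.read(n)

def privileged_code() -> ReadCapability:
    # Privileged code mints the capability for one particular file ...
    return ReadCapability("/etc/hostname")  # hypothetical path

def unprivileged_code(cap: ReadCapability) -> bytes:
    # ... unprivileged code can use it, but cannot widen it to write
    # access or to any other file.
    return cap.read()

print(unprivileged_code(privileged_code()))
```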

Often the server wants to make sure that the client at one end of a connection is the user it thinks it is, which fits exactly into the usual definitions of capabilities.  But more often, the server does not care who the client is, but the client wants to make sure that the server at the other end of the connection is the server he thinks it is, which, since it is the client that initiates the connection, does not fit well into many existing definitions of security by capabilities.

-

These documents are licensed under the Creative Commons Attribution-Share Alike 3.0 License.

-
diff --git a/docs/white_paper.md b/docs/white_paper.md
index ac6826f..3782dfd 100644
--- a/docs/white_paper.md
+++ b/docs/white_paper.md
@@ -265,7 +265,7 @@ So general Malloc might send general Bob the message:
 
 > facing overwhelming enemy attack, falling back. You and general
 Dave may soon be cut off.
 
-and general Dave the message: 
+and general Dave the message:
 
 > enemy collapsing. In pursuit.
 
@@ -292,7 +292,7 @@ yields an advantage of at least two to one in getting one’s way.
 
 This is a Byzantine fault. And if people get away with it, pretty soon
 no one is following process, and the capacity to act as one collapses.
 Thus process becomes bureaucracy. Hence today’s American State Department
-and defense policy. Big corporations die of this, though states take longer
+and defence policy. Big corporations die of this, though states take longer
 to die, and their deaths are messier. It is a big problem, and people,
 not just computer programs, fail to solve it all the time.
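The Malloc example above can be made mechanical. A minimal sketch of equivocation detection, assuming the generals forward to each other what they each received; a real protocol would have Malloc sign his messages, so that a forwarded copy is proof of what he said:

```python
# Minimal sketch: an equivocating (Byzantine) sender is caught as soon as
# the receivers compare notes. Assumption: honest generals exchange the
# messages they received; real systems use digital signatures so that a
# forwarded message is proof of what the sender said.
def equivocated(reports: dict[str, str]) -> bool:
    """True if the same sender told different generals different things."""
    return len(set(reports.values())) > 1

received_from_malloc = {
    "Bob":  "facing overwhelming enemy attack, falling back.",
    "Dave": "enemy collapsing. In pursuit.",
}
print(equivocated(received_from_malloc))  # True: Malloc is a Byzantine fault
```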