---
title: Libraries
---

# Boost

My experience with Boost is that it is no damned good: an over-elaborate pile of stuff on top of the underlying abstractions, a pile which has high runtime cost, and which specializes the underlying stuff in ways that only work with Boost example programs and are not easily generalized to do what one actually wishes done.

Their abstractions leak.

Boost high precision arithmetic (gmp_int): a messy pile built on top of GMP. Its primary benefit is that it makes GMP look like MPIR. Easier to use [MPIR] directly.

The major benefit of Boost gmp is that it runs on some machines and operating systems that MPIR does not, and is for the most part source-code compatible with MPIR.

A major difference is that Boost gmp uses long integers, which on sixty-four-bit Windows are int32_t, where MPIR uses mpir_ui and mpir_si, which on sixty-four-bit Windows are uint64_t and int64_t. This is apt to induce no end of major porting issues between operating systems.

Boost gmp code running on Windows is apt to produce radically different results from the same Boost gmp code running on Linux. Long int is just not portable, and should never be used. This kind of issue is absolutely typical of Boost.

In addition to the portability issue, it is also a typical example of Boost abstractions denying you access to the full capability of the thing being abstracted away. It is silly to have a thirty-two-bit interface between sixty-four-bit hardware and unlimited-precision arithmetic software.


# Database

The blockchain is a massively distributed database built on top of a pile of single machine, single disk, databases communicating over the network. If you want a single machine, single disk, database, go with SQLite, which in WAL mode implements synch interaction on top of hidden asynch.

SQLite has its own way of doing things, which does not play nice with GitHub.

The efficient and simple way to handle interaction with the network is via callbacks rather than multithreading, but you usually need to handle databases, and under the hood, all databases are multithreaded and blocking. If they implement callbacks, it is usually on top of a multithreaded layer, and the abstraction is apt to leak, apt to result in unexpected blocking on a supposedly asynchronous callback.

SQLite recommends at most one thread that writes to the database, and preferably only one thread that interacts with the database.

# The Invisible Internet Project (I2P)

Comes with an I2P webserver, and the full API for streaming stuff. These appear as local ports on your system. They are not TCP ports, but higher level protocols, and UDP. (Sort of UDP: obviously you have to create a durable tunnel, and one end is the server, the other the client.)

Inconveniently, it is written in Java.

# Internet Protocol

QUIC: UDP with flow control and reliability. Intimately married to HTTP (what Google shipped as HTTP/2 over gQUIC is now standardized as HTTP/3) and Google Chrome. It cannot be called as a library; you have to analyze the code, extract their ideas, and rewrite. And, looking at their code, I think they have written their way into a blind alley.

But QUIC is the transport under HTTP/3, and there is a gigantic ecosystem supporting HTTP.

We really have no alternative but to somehow interface to that ecosystem.

QUIC is UDP with flow control, reliability, and TLS encryption, but no DDoS resistance, and total insecurity against CA attack.

# Boost Asynch

Boost implements event oriented multithreading in io_service, but I don't like it, because it fails to interface with Microsoft's implementation of asynch internet protocol, WSAAsync and WSAEvent. Also because it is brittle and incomprehensible, and their example programs do not easily generalize to anything other than that particular example.

To the extent that you need to interact with a database, you need to process connections from clients in many concurrent threads. Connection handlers run in whichever thread called io_service::run().

You can create a pool of threads processing connection handlers (and waiting on database operations to finish) by running io_service::run() from multiple threads. See the Boost.Asio docs.

# Asynch Database access

MySQL 5.7 supports the X Plugin / X Protocol, which allows asynchronous query execution and NoSQL. But the X DevAPI was created to support node.js and the like. The basic idea is that you send messages to MySQL on a certain port, and asynchronously get messages back, encoded as Google protobufs, from PHP, JavaScript, or SQL. No one has bothered to create a C++ wrapper for this, it being primarily designed for PHP or node.js.

SQLite nominally has synchronous access, and the use of one read/write thread and many read threads is recommended. But under the hood, if you enable WAL mode, access is asynchronous. The nominal synchrony sometimes leaks into the underlying asynchrony.

By default, each INSERT is its own transaction, and transactions are excruciatingly slow. WAL mode (with synchronous=NORMAL) fixes this. All writes are writes to the write-ahead file, which gets cleaned up later.
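In SQLite terms, the fix is a couple of pragmas plus explicit transactions. A sketch; the `kv` table is illustrative, not from any real schema:

```sql
PRAGMA journal_mode = WAL;    -- writes go to the write-ahead file
PRAGMA synchronous = NORMAL;  -- fsync at checkpoint time, not at every commit

BEGIN;
INSERT INTO kv VALUES ('a', 1);
INSERT INTO kv VALUES ('b', 2);
COMMIT;                       -- one transaction instead of one per INSERT
```

Batching many inserts inside one BEGIN/COMMIT is what turns thousands of excruciating per-row transactions into a single cheap one.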

The authors of SQLite recommend against multithreading writes, but we do not want the network waiting on the disk, nor the disk waiting on the network, therefore, one thread with asynch for the network, one purely synchronous thread for the SQLite database, and a few number crunching threads for encryption, decryption, and hashing. This implies shared nothing message passing between threads.


The Facebook Folly library provides many tools, with such documentation as exists amounting to “read the f*****g header files”. They are reputed to have the highest efficiency queuing for interthread communication, and it is plausible that they do, because Facebook views efficiency as critical. Their [queuing header file](https://github.com/facebook/folly/blob/master/folly/MPMCQueue.h) gives us MPMCQueue.

On the other hand, boost gives us a lockless interthread queue, which should be very efficient. Assuming each thread is an event handler, rather than pseudo synchronous, we queue up events in the boost queue, and handle all unhandled exceptions from the event handler before getting the next item from the queue. We keep enough threads going that we do not mind threads blocking sometimes. The queue owns objects not currently being handled by a particular thread. Objects are allocated in a particular thread, and freed in a particular thread, which process very likely blocks briefly. Graphic events are passed to the master thread by the wxWidgets event code, but we use our own multithreaded event code to handle everything else. Posting an event to the gui code will block briefly.

I was looking at boost's queues and lockless mechanisms from the point of view of implementing my own thread pool, but this is kind of stupid, since boost already has a thread pool mechanism written to handle the asynch IO problem. Thread pools are likely overkill. Node.js does not need them, because its single thread does memory-to-memory operations.

Boost provides us with an io_service and boost::thread_group, used to give effect to asynchronous IO with a thread pool. io_service was specifically written to perform IO, but can be used for any thread pool activity whatsoever. You can “post” tasks to the io_service, which will get executed by one of the threads in the pool. Each such task has to be a functor.

Since supposedly nonblocking operations always leak and block, all we can do is try to keep blocking minimal. For example, nonblocking database operations always block. Thus our threadpool needs to be many times larger than our set of hardware threads, because we will always wind up doing blocking operations.

The C++11 multithreading model assumes you want to do some task in parallel, for example you are multiplying two enormous matrices, so you spawn a bunch of threads, then you wait for them all to complete using join, or all to deliver their payload using futures and promises. This does not seem all that useful, since the major practical issue is that you want your system to continue to be responsive while it is waiting for some external hardware to reply. When you are dealing with external events, rather than grinding a matrix in parallel, event-oriented architecture, rather than futures, promises, and joins, is what you need.

Futures, promises, and joins are useful in the rather artificial case that responding to a remote procedure call requires you to make two or more remote procedure calls, and wait for them to complete, so that you then have the data to respond to the original remote procedure call.

Futures, promises, and joins are useful on a server that launches one thread per client, which is often a sensible way to do things, but does not fit that well to the request-response pattern, where you don't have a great deal of client state hanging around, and you may well have ten thousand clients. If you can be pretty sure you are only going to have a reasonably small number of clients at any one time, or significant interaction between clients, one thread per client may well make a lot of sense.

I was planning to use boost asynch, but upon reading the boost user threads, it sounds fragile: a great pile of complicated unintelligible code that does only one thing, and if you attempt to do something slightly different, everything falls apart, and you have to understand a lot of arcane details, and rewrite them.

Nanomsg is a socket library that provides a layer on top of everything, making everything look like sockets, and provides sockets specialized to various communication patterns, avoiding the roll-your-own problem. In the ZeroMQ thread, people complained that a simple hello world TCP/IP program tended to be disturbingly large and complex. Looks to me that [Nanomsg] wraps a lot of that complexity.

# Sockets

A simple hello world TCP-IP program tends to be disturbingly large and complex, and windows TCP-IP is significantly different from posix TCP-IP.

Waiting on network events is deadly, because they can take arbitrarily large time, but multithreading always bites. People who succeed tend to go with single thread asynch, similar to, or part of, the window event handling loop.

Asynch code should take the form of calling a routine that returns immediately, but passing it a lambda callback, which gets executed in the most recently used thread.

Interthread communication bites: you don't want several threads accessing one object, as synchronization will slow you down; so if you multithread, better to have a specialist thread for any one object, with lockless queues passing data between threads. One thread for all writes to SQLite, one thread waiting on select.

Boost Asynch supposedly makes sockets all look alike, but I am frightened of their work guard stuff, which looks to me fragile and incomprehensible. Looks to me that no one understands boost asynch work guard, not even the man who wrote it. And they should not be using boost bind, which has been obsolete since lambdas became available, indicating bitrot.

Because work guard is incomprehensible and subject to change, I will just keep the boost io object busy with a polling timer.

And I am having trouble finding boost asynch documented as a sockets library. Maybe I am just looking in the wrong place.

A nice clean tutorial depicting strictly synchronous tcp.

Libpcap and Win10PCap provide very low level, OS independent, access to packets, OS independent because they are below the OS, rather than above it. Example code for visual studio.

Simple sequential procedural socket programming for windows sockets.

If I program from the base upwards, the bottommost level would be a single thread sitting on a select statement. Whenever the select fired, it would execute a corresponding functor transferring data between userspace and system space.

One thread, and only one thread, responsible for timer events and transferring network data between userspace and systemspace.

If further work required in userspace that could take significant time (disk operations, database operations, cryptographic operations) that functor under that thread would stuff another functor into a waitless stack, and a bunch of threads would be waiting for that waitless stack to be signaled, and one of those other threads would execute that functor.

The reason we have a single userspace thread handling the select and the transfers between userspace and systemspace is that this is a very fast and very common operation, and we don't want unnecessary thread switches, wherein one thread does something, then immediately afterwards another thread does almost the same thing. All quickie tasks should be handled sequentially by one thread that works a state machine of functors.

The way to do asynch is to wrap sockets in classes that reflect the intended use and function of the socket. Call each instance of such a class a connection. Each connection has its own state machine state and its own message dispatcher, event handler, event pump, message pump.

A single thread calls select and poll, and drives all connection instances in all transfers of data between userspace and systemspace. Connections also have access to a thread pool for operations (such as file, database, and cryptography operations) that may involve waits.

The hello world program for this system is to create a derived server class that does a trivial transformation on input, and has a path in server name space, and a client class that sends a trivial input, and displays the result.

Microsoft's WSAAsync socket procedures are a family of socket procedures designed to operate with, and be driven by, the Windows UI system, wherein sockets are linked to windows and driven by the windows message loop. They could benefit considerably from being wrapped in connection classes.

I am guessing that wxWidgets has a similar system for driving sockets, wherein a wxSocket is plugged in to the wxWidgets message loop. On Windows, wxWidgets wraps WSASelect, which is the behavior we need.

Microsoft has written the asynch sockets you need, and wxWidgets has wrapped them in an OS independent fashion.

- WSAAsyncSelect
- WSAEventSelect
- select

Using wxSockets commits us to having a single thread managing everything. To get around the power limit inherent in that, have multiple peers under multiple names accessing the same database, and have a temporary and permanent redirect facility so that if you access peername, your connection, and possibly your link, get rewritten to p2.peername by peers trying to balance load.

Microsoft tells us:

> For receiving, applications use the WSARecv or WSARecvFrom functions to supply buffers into which data is to be received. If one or more buffers are posted prior to the time when data has been received by the network, that data could be placed in the user's buffers immediately as it arrives. Thus, it can avoid the copy operation that would otherwise occur at the time the recv or recvfrom function is invoked.

Moral is, we should use the sockets that wrap WSA.

# Tcl

Tcl is a really great language, and I wish it would become the language of my new web, as JavaScript is the language of the existing web.

When I search for Tcl, I am apt to find a long out of date repository preserved for historical reasons, but there is an active repository obscured by the existence of the out of date repository.

Javascript is a great language, and has a vast ecosystem of tools, but it is controlled from top to bottom by our enemies, and using it is inherently insecure.

A Tcl program consists of a string (which is implemented under the hood as a copy-on-write rope, with some substrings of the rope actually being run-time-typed C++ types that can be serialized and deserialized to strings) and a name table, one name table per interpreter, and at least one interpreter per thread. The entries in the name table can be strings, C++ functions, or run-time-typed C++ types, which may or may not be serializable or deserializable; but conceptually, it is all one big string, and the name table is used to find the C and C++ functions which interpret the string following the command.

Execution consists of executing commands found in the string, which transform it into a new string, which in turn gets transformed into a new string, until it gets transformed into the final result. All code is metacode. If elements of the string need to be deserialized to and from a C++ run-time type (because the command does not expect that run-time type) but cannot be, because there is no deserialization for that run-time type, you get a run-time error; but most of the time what you get, under the hood, is C++ code executing C++ types, and it is only conceptually a string being continually transformed into another string. The default integer is infinite precision, because integers are conceptually arbitrary-length strings of digits.

To sandbox third party code, including third party GUI code, just restrict the name table to have no dangerous commands, and to be unable to load C++ modules that could provide dangerous commands.

It is faster to bring up a UI in Tcl than in C. We get, for free, OS independence.

Tcl used to be the best language for attaching C programs to, and for testing C programs, or it would be if SWIG actually worked. The various C components of Tcl provide an OS independent layer on top of both Linux and Windows, and it has the best multithread and asynch system.

It is also a metaprogramming language. Every Tcl program is a metaprogram: you always write code that writes code.

The Gui is necessarily implemented as asynch, something like the JavaScript dom in html, but with explicit calls to the event/idle loop. Multithreading is implemented as multiple interpreters, at least one interpreter per thread, sending messages to each other.

# Time

After spending far too much time on this issue, which has sucked in far too many engineers and far too much thought, and generated far too many libraries, I found the solution was C++11 chrono: For short durations, we use the steady time in milliseconds, where each machine has its own epoch, and no two machines have exactly the same milliseconds. For longer durations, we use the system time in seconds, where all machines are expected to be within a couple of seconds of each other. For the human-readable system time in seconds to be displayed on a particular machine, we use the ISO format 20120114_15:39:34+10:00 (a timezone with a ten-hour offset, equivalent to Greenwich time 20120114_05:39:34+00:00).

For long durations, we use signed system time in seconds, for short durations unsigned steady time in milliseconds.

Windows and Unix both use time in seconds, but accessed and manipulated in incompatible ways.

Boost has numerous different and not altogether compatible time libraries, all of them overly clever and all of them overly complicated.

wxWidgets has OS independent time based on milliseconds past the epoch which however fails to compress under Cap'n Proto.

I was favourably impressed by the approach to time taken in tcp packets, that the time had to be approximately linear, and in milliseconds or larger, but they were entirely relaxed about the two ends of a tcp connection using different clocks with different, and variable, speeds.

It turns out you can go a mighty long way without a global time, and to the extent that you do need a global time, should be equivalent to that used in email, which magically hides the leap seconds issue.

# UTF8 strings

UTF8 strings are supported by the wxWidgets wxString, which provides conversions to and from wide character variants and locale variants. (We don't want locale variants; they are obsolete. The whole world is switching to UTF8, but our software and operating environments lag.)

wxString::ToUTF8() and wxString::FromUTF8() do what you would expect.

On Visual Studio, you need to set your source files to have a BOM, so that Visual Studio knows they are UTF8, and you need to set the compiler environment in Visual Studio to UTF8 with /Zc:__cplusplus /utf-8 %(AdditionalOptions).

And you need to set the run time environment of the program to UTF8 with a manifest.
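For reference, a minimal manifest fragment using Microsoft's documented activeCodePage setting, which requires Windows 10 version 1903 or later; assume it is embedded in the executable or placed alongside it:

```xml
<assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1">
  <application>
    <windowsSettings>
      <activeCodePage xmlns="http://schemas.microsoft.com/SDK/manifest/2016/version">UTF-8</activeCodePage>
    </windowsSettings>
  </application>
</assembly>
```

With this in place, the ANSI ("A") variants of the Win32 API treat char strings as UTF8, which is what makes the rest of the UTF8 pipeline consistent.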

You will need to place all UTF8 string literals and string constants in a resource file, which you will use for translated versions.

If you fail to set the compilation and run time environment to UTF8 then, for extra confusion, your debugger and compiler will look as if they are handling UTF8 characters correctly as single byte characters, while at least wxString alerts you that something bad is happening by run-time translating to the null string.

Automatic string conversion in wxWidgets is not UTF8, and if you have any unusual symbols in your string, you get a run time error and the empty string. So wxString automagic conversions will rape you in the ass at runtime, and for double the confusion, your correctly translated UTF8 strings will look like errors. Hence the need to make sure that the whole environment from source code to run time execution is consistently UTF8, which has to be separately ensured in three separate places.

When wxWidgets is compiled using #define wxUSE_UNICODE_UTF8 1, it provides UTF8 iterators and caches a character index, so that accessing a character by index near a recently used character is fast. The usual iterators wx.begin(), wx.end(), const and reverse iterators are available. I assume something bad happens if you advance a reverse iterator after writing to it.

wxWidgets compiled with #define wxUSE_UNICODE_UTF8 1 is the way of the future, but not the way of the present. Still a work in progress; it does not build under Windows. Windows now provides UTF8 entry points to all its system functions, which should make it easy.

wxWidgets provides wxRegEx which, because wxWidgets provides index by entity, should just work. Eventually. Maybe the next release.

# UTF8-CPP

A powerful library for handling UTF8. This somewhat duplicates the facilities provided by wxWidgets with wxUSE_UNICODE_UTF8==1.

For most purposes, wxString should suffice, when it actually works with UTF8. Which it does not yet on Windows. We shall see. wxWidgets recommends not using wxString except to communicate with wxWidgets, and not using it as a general UTF8 system. Which is certainly the current state of play with wxWidgets.

For regex to work correctly, you probably need to do it on wxString's native UTF16 (Windows) or UTF32 (Unix), but it supposedly works on UTF8, assuming you can successfully compile it, which you cannot.

# Cap'n Proto

Designed so that you can download it from GitHub and run cmake install. As all software should be.

But for mere serialization of data to a form invariant between machine architectures and different compilers, even different compilers on the same machine, it is overkill for our purposes. Too much capability.

# Awesome C++

Awesome C++: a curated list of awesome C/C++ frameworks, libraries, resources, and shiny things.


I encountered this when looking at the Wt C++ Web framework, which seems to be mighty cool, except I don't think I have any use for a web framework. But Awesome C++ has a great pile of things that I might use.

Wt has the interesting design principle that every open web page maps to a C++ class, every widget on the web page maps to a C++ class, and every row in the SQL table maps to a C++ class. Cool design.

# Opaque password protocol

Opaque is PAKE done right.

"Let's talk about PAKE"

The server stores a per-user salt, the user's public key, and the user's secret key encrypted with a secret that only the user ever learns.

The secret is generated by the user from the salt and his password by interaction with the server, without the user learning the salt, nor the hash of the salt, nor the server learning the password or the hash of the password. The user then strengthens the secret generated from salt and password, applying a large work factor to it, and decrypts the private key with it. User and server then proceed with standard public key cryptography.

If the server is evil, or the bad guys seize the server, everything is still encrypted and they have to run, not a hundred million trial passwords against all users, but a hundred million passwords against each user. And user can make the process of trying a password far more costly and slow than just generating a hash. Opaque zero knowledge is designed to be as unfriendly as possible to big organizations harvesting data on an industrial scale. The essential design principle of this password protocol is that breaking a hundred million passwords by password guessing should be a hundred million times as costly as breaking one password by password guessing. The protocol is primarily designed to obstruct the NSA's mass harvesting.

It has the enormous advantage that if you have one strong password which you use for many accounts, one evil server cannot easily attack your accounts on other servers. To do that, it has to try every password - which runs into your password strengthening.