In this day and age, a program that lives on only one machine, or a program without a GUI (possibly with the GUI on another machine a thousand kilometers away), really is not much of a program.
So, the minimum hello world program has to request the identifier of another machine, create a connection to that machine, and then display the response from that machine.

And the minimum hello world program should preserve state from one invocation to the next, and so should have an SQLite database.

And we also have to support unit tests, which require a conventional procedural single thread with a conventional Unix-style output stream.
So the minimum hello world program is going to have a Boost.Asio multithreaded event handler for I/O events, with an io_service and a boost::thread_group, and an event-oriented GUI thread, which may well be running under the same boost::thread_group. To communicate with the GUI, we use std::packaged_task, std::promise, and std::future.

The GUI OnIdle event checks whether a packaged task is ready for it to do, and executes it. Any thread that wants to talk to the GUI waits for any existing std::packaged_task waiting on OnIdle to be completed, sets a new packaged task, and awaits the returned future, the future typically being a unique smart pointer containing a return struct, but it may be merely an error code.
Create an io_service:

```cpp
boost::asio::io_service io_service;
// This object must outlive the threads that are attached to it
// by the run command, thus the threads must be created in an
// inner scope.
```

Create a work object to stop its run() function from exiting when it has nothing else to do:

```cpp
{
    boost::asio::io_service::work work(io_service);
    // The work object prevents io_service from returning from
    // the run command when it has consumed all available tasks.

    // Start some worker threads.
    // Probably should use C++14 threads,
    // but the example code uses boost threads.
    boost::thread_group threads;
    for (std::size_t i = 0; i < my_thread_count; ++i)
        threads.create_thread(
            boost::bind(&boost::asio::io_service::run, &io_service));
    // This executes the run() command in each thread.
    // boost::bind is a currying function that converts a function
    // with arguments into a function with no arguments,
    // a half assed substitute for a lambda.

    // Post the tasks to the io_service so they can be
    // performed by the worker threads:
    io_service.post(boost::bind(an_expensive_calculation, 42));
    // Surely we should be able to replace bind with a lambda:
    // io_service.post([]() { an_expensive_calculation(42); });
    io_service.post(boost::bind(a_long_running_task, 123));

    // Finally, before the program exits, shut down the
    // io_service and wait for all threads to exit:
    // ... do unit tests, wait for the UI thread to join ...
    io_service.stop();
    // stop() prevents new tasks from being run, even if they have
    // already been queued. If, on the other hand, we just let the
    // work object go out of scope, and stop new tasks from being
    // queued, the threads will return from run() when the
    // io_service has run out of work.

    threads.join_all();
    // thread_group is just a collection of threads. Maybe we
    // should use a vector instead, so that we can address the
    // GUI thread separately.
}
// Now we can allow io_service to go out of scope.
```

So the starting thread of the program will run GUI unit tests through interthread communication to the OnIdle event of the GUI.
Alternatively we could have a separate daemon process that communicates with the GUI through SQLite, which has the advantage that someone else has already written a wxSQLite wrapper.
So we are going to need a program with a UI, and a program that is a true daemon. The daemon program will be network asynch, and the UI program will be UI asynch; but since the network jobs the UI program gets and sends to the daemon complete in very short time, its networking operations will be procedural. This is acceptable because interprocess communication is high bandwidth and highly reliable.
We are going to need to support a chat capability, to combat seizure of names by the state, to enable transactions to be firmly linked to agreements, and to enable people to transfer money without having to deal directly with cryptographic identities. The control and health interface to the daemon should be a chat capability over IPC, which can be configured to accept control only from a certain wallet. Ideally one would like a general chatbot capability running Python, but that is a project to be done after the minimum viable product release.
We will eventually need a chat capability that supports forms and automation, using the existing HTML facilities, but under the chat protocol, not under the HTTPS protocol. But if we allow generic web interactions, generic web interactions will block. One solution is a browser-to-chat interface. Your browser can launch a chat message in your wallet, and a received chat message may contain a link. The link launches a browser window, which contains a form, which generates a chat message. The end user has to click to launch the form, fill it out, click in the browser to submit, then click again to send the resulting chat message from username1 to username2. The end user may configure certain links to autofire, so that a local server and local browser can deal with messages encapsulated by the chat system.
We don’t want the SSL problem, where the same client continually negotiates new connections with the same host, resulting in many unnecessary round trips and gaping security holes. We want every connection between one identity on one machine and another identity on another machine to use the same shared secret for as long as both machines are up and talk to each other from time to time, so every identity on one machine talks to other identities through a single daemon on its own machine, and a single daemon on the other identity’s machine. If two identities are on the same machine, they talk through the one daemon. Each client machine talks IPC to the daemon.
There might be many processes, each engaging in procedural IPC with one daemon, but the daemon is asynch, and talks internet protocol with other daemons on other systems in the outside world. Processes need to talk through the daemon, because we only want to have one system global body of data concerning network congestion, bandwidth, transient network addresses, and shared secrets. System Ann in Paris might have many conversations running with System Bob in Shanghai, but we want one body of data concerning network address, shared secrets, cryptographic identity, flow control, and bandwidth, shared by each of these many conversations.
We assume conversations with the local daemon are IPC, hence fast, reliable, and high bandwidth. Failure of a conversation with the local daemon is a crash-and-debug. Failure of conversations across the internet is normal and frequent – there is no reliability layer outside your local system. Sending a request and not getting an answer is entirely normal and on the main path.
The one asynch daemon has the job of accumulating block chains which implement mutable Merkle-patricia dags out of a series of immutable Merkle-patricia dags. Sometimes part of the data from transient mutable Merkle-patricia trees is extracted and recorded in a more durable Merkle-patricia trie. Every connection is a mutable Merkle-patricia dag, which is deleted (with some of the information preserved in another mutable Merkle-patricia tree) when the connection is shut down.

Every connection is itself a tree of connections, with flow control, bandwidth, crypto key, and network address information at the root connection, and every connection is the root of a mutable Merkle-patricia dag.
Each daemon is a host, so everyone runs a host on his own machine. That host might be behind a NAT, so to rendezvous it would need a rendezvous mediated by another host.
See also Generic Client Server Program, and Generic Test.
And in this day and age, we have a problem with the name system, in that the name system is subject to centralized state control, and the tcp-ssl system is screwed by the state, which is currently seizing crimethink domain names, and will eventually seize untraceable currency domain names.
So we not only need a decentralized system capable of generating consensus on who owns what cash, we need a system capable of generating consensus on who owns which human readable globally unique names, and on the mapping between human readable names, Zooko triangle names (which correspond to encryption public keys), and network addresses.
Both a GUI and an intermachine communication daemon imply asynch, an event-oriented architecture, which we should probably base on Boost.Asio. We have to implement the Boost signal handling for shutdown signals.
If, however, you are communicating between different machines, then the type has to be known at run time on both machines – both machines have to be working off the identical Cap'n Proto type declaration – but they might, however, not know until run time which particular type declared in their identical Cap'n Proto declarations is arriving down the wire.
When setting up a communication channel at run time, we have to verify the hash of the type declaration of the objects coming down the wire, so that a connection cannot be set up unless both bodies of source code have the identical source for values on the wire on this particular connection.
Obviously in this day and age, a program isolated on one solitary machine is kind of useless, so what is needed is a machine that has a host of connections, each connection being an object; the one running thread activates each connection object when data arrives for it, and then the object requests more data, and the thread abandons the object for a while to deal with the next object for which data has arrived. When an object issues a request that may take a while to fulfill, it provides a block of code, a function literal, for that event.
A language that has to handle Gui and a language that has to handle communication is event oriented. You continually declare handlers for events. Each handler is a particular method of a particular object, and new events of the same kind cannot be processed until the method returns. If handling the event might take a long time (disk access, user interface) the method declares new event handlers, and fires actions which will immediately or eventually lead to new events, and then immediately returns.
And when I say "immediately lead to new events", I mean "immediately after the method returns" – the method may add a new event handler and possibly also a new event to the event queue. Steal Boost asynch, if Cap'n Proto has not already stolen something like Boost Asynch.
Each event is a message, and each message is a Cap'n Proto buffer, whose precise type may not necessarily be known until it actually arrives, but which has to be handled by code capable of handling that message type.
This document is licensed under the Creative Commons Attribution-Share Alike 3.0 License.