<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<style>
body {
max-width: 30em;
margin-left: 2em;
}
p.center {text-align:center;}
</style>
<link rel="shortcut icon" href="../rho.ico">
<title>Massive Parallelism</title>
</head>
<body><p>
Digital Ocean, Docker, microservices, REST, JSON and protocol buffers.</p><p>
The world is drifting towards handling massive parallelism through HTTPS microservices.</p><p>
Typically you have an nginx reverse proxy distributing HTTPS requests to a swarm of Docker instances running node.js.</p><p>
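A minimal sketch of one such backend instance, assuming node.js with the built-in http module and TypeScript; the port and the response shape are illustrative:</p>
<pre>
// One stateless backend instance; nginx load-balances across many of these.
import { createServer } from "node:http";

// The port is illustrative; in practice it comes from the container environment.
const port = Number(process.env.PORT ?? 3000);

createServer((req, res) => {
  // nginx forwards the original client address in X-Forwarded-For.
  const client = req.headers["x-forwarded-for"] ?? req.socket.remoteAddress;
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ served_by: process.pid, client }));
}).listen(port);
</pre>
<p>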
These communicate by REST, which means that HTTP GET and POST map to wrapped database operations. On the wire the data is represented as JSON, protocol buffers, or ASN.1.</p><p>
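A sketch of that mapping, assuming the Express framework; the routes are illustrative, and the Map is a stand-in for the real database the operations wrap:</p>
<pre>
// HTTP GET and POST wrap database reads and writes; the wire format is JSON.
import express from "express";

// Illustrative stand-in for the real database behind the service.
const db = new Map();

const app = express();
app.use(express.json()); // parse JSON request bodies

// GET maps to a wrapped database read.
app.get("/items/:id", (req, res) => {
  const item = db.get(req.params.id);
  item ? res.json(item) : res.sendStatus(404);
});

// POST maps to a wrapped database write.
app.post("/items/:id", (req, res) => {
  db.set(req.params.id, req.body);
  res.sendStatus(201);
});

app.listen(3000);
</pre>
<p>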
JSON is by far the most popular, despite its inefficiency. It is a strictly text format that is in principle human readable, though YAML, a superset of JSON, is far more human readable.</p><p>
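For comparison, the same record in both formats; the field names are illustrative. First JSON:</p>
<pre>
{"user": "ann", "roles": ["admin", "ops"], "active": true}
</pre>
<p>and the same record in YAML:</p>
<pre>
user: ann
roles:
  - admin
  - ops
active: true
</pre>
<p>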
Numerous Docker instances keep in agreement through the Mongo database, which handles the synchronization of massive parallelism, possibly through a sharded cluster. Mongo communicates in binary, but everyone wants to talk to it in JSON, so it has a sort of bastard binary JSON (BSON), and can also talk real JSON.</p><p>
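A sketch of that coordination, assuming the official node.js driver for Mongo; the collection name and document shape are illustrative. Each instance claims pending work atomically with findOneAndUpdate, so no two instances grab the same job:</p>
<pre>
// Many instances coordinate through one Mongo collection of jobs.
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGO_URL ?? "mongodb://localhost:27017");

async function claimNextJob() {
  await client.connect();
  const jobs = client.db("app").collection("jobs");
  // Atomic claim: only one instance can flip a job from pending to
  // running, which is what keeps the swarm in agreement.
  return jobs.findOneAndUpdate(
    { state: "pending" },
    { $set: { state: "running", owner: process.pid, claimedAt: new Date() } }
  );
}
</pre>
<p>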
The great use of Mongo is coordinating numerous instances, which rapidly runs into scaling problems. Mongo is shardable, though sharding it is non-trivial.</p><p>
Each instance of these massively parallel applications is contained in a Docker container, which is itself contained in a VM. A Docker container is a sort of lightweight VM which always represents a live, fully configured, running machine providing various services over the network. Any permanent effects are stored in a database that is apt to be external to the container, so that the virtual machine can be casually destroyed and restarted.</p><p>
To provide private keys to the Docker container, have a volume that only one local user has access to, and mount it in the container. The container then uses that key to get all the other secret keys, possibly by using crypt. But this seems kind of stupid. Better for the container to generate its own unique private key, and then get that key blessed in the process of accepting the authority of the blessing key.</p><p>
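A sketch of that alternative, assuming node.js: the container generates its own Ed25519 keypair with the built-in crypto module and sends only the public half off to be blessed; the request shape is illustrative:</p>
<pre>
// Generate the instance's own unique keypair at first launch.
import { generateKeyPairSync } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");
// privateKey stays inside the container; only the public half
// goes off to be blessed.
const blessingRequest = {
  publicKey: publicKey.export({ type: "spki", format: "pem" }),
  startedAt: new Date().toISOString(),
};
</pre>
<p>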
When launched, it hits up a service to get its key blessed and register its availability, and thereafter accepts whatever commands are issued on the key chain established when it first showed up.
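</p><p>A sketch of that launch handshake, again assuming node.js; the blessing-service URL, the response shape, and the helper names are hypothetical:</p>
<pre>
// On launch: register with the blessing service, then accept only
// commands that verify under the blessed key chain.
import { verify, createPublicKey } from "node:crypto";

// The endpoint and response shape are hypothetical illustrations.
async function register(blessingRequest: object) {
  const res = await fetch("https://blessing.example/register", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(blessingRequest),
  });
  return res.json(); // the blessing key and a certificate for our key
}

// Accept a command only if its signature verifies under the blessing key.
function acceptCommand(blessingKeyPem: string, command: Buffer, signature: Buffer) {
  const key = createPublicKey(blessingKeyPem);
  return verify(null, command, key, signature); // Ed25519 uses a null algorithm
}
</pre>
<p>The point of this design is that trust flows from the blessing key rather than from a secret baked into the image or a mounted volume, so a container can be casually destroyed and replaced without any shared secret leaking.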
</p><p style="background-color : #ccffcc; font-size:80%">These documents are
licensed under the <a rel="license" href="http://creativecommons.org/licenses/by-sa/3.0/">Creative
Commons Attribution-Share Alike 3.0 License</a></p>
</body>
</html>